Jan 22 09:05:54 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 22 09:05:54 crc restorecon[4554]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 09:05:54 crc restorecon[4554]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 
09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 09:05:55 crc 
restorecon[4554]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 
09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 
09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc 
restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 09:05:55 crc restorecon[4554]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 09:05:55 crc restorecon[4554]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 09:05:55 crc restorecon[4554]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 22 09:05:55 crc kubenswrapper[4811]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 22 09:05:55 crc kubenswrapper[4811]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 22 09:05:55 crc kubenswrapper[4811]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 22 09:05:55 crc kubenswrapper[4811]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
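The Flag deprecation warnings above (and the --system-reserved warning that follows) all point the same way: these kubelet command-line options are meant to move into the KubeletConfiguration file that --config names, per the linked kubelet-config-file documentation. Below is a minimal sketch of that migration, assuming the upstream kubelet.config.k8s.io/v1beta1 Go types are available; the socket path, plugin directory, taint, and reservation values are placeholders for illustration, not values read from this node.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        cfg := kubeletv1beta1.KubeletConfiguration{}
        cfg.APIVersion = "kubelet.config.k8s.io/v1beta1"
        cfg.Kind = "KubeletConfiguration"

        // --container-runtime-endpoint -> containerRuntimeEndpoint (placeholder socket)
        cfg.ContainerRuntimeEndpoint = "unix:///var/run/crio/crio.sock"
        // --volume-plugin-dir -> volumePluginDir (placeholder path)
        cfg.VolumePluginDir = "/etc/kubernetes/kubelet-plugins/volume/exec"
        // --register-with-taints -> registerWithTaints (placeholder taint)
        cfg.RegisterWithTaints = []corev1.Taint{
            {Key: "node-role.kubernetes.io/master", Effect: corev1.TaintEffectNoSchedule},
        }
        // --system-reserved -> systemReserved (placeholder reservations)
        cfg.SystemReserved = map[string]string{"cpu": "500m", "memory": "1Gi"}
        // --minimum-container-ttl-duration has no direct field; its warning
        // points at eviction settings instead (placeholder threshold).
        cfg.EvictionHard = map[string]string{"memory.available": "100Mi"}

        // Emit YAML suitable for the file named by --config.
        out, err := yaml.Marshal(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }

Saving the printed YAML and passing it via --config is what silences these warnings. Note that --pod-infra-container-image, warned about just below, has no config-file replacement: per its own message, the image garbage collector now takes the sandbox image from CRI.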
Jan 22 09:05:55 crc kubenswrapper[4811]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 22 09:05:55 crc kubenswrapper[4811]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.856773 4811 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859711 4811 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859729 4811 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859734 4811 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859739 4811 feature_gate.go:330] unrecognized feature gate: Example
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859743 4811 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859746 4811 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859751 4811 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859755 4811 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859759 4811 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859762 4811 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859766 4811 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859770 4811 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859773 4811 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859777 4811 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859780 4811 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859785 4811 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859790 4811 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859794 4811 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859798 4811 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859801 4811 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859805 4811 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859809 4811 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859813 4811 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859817 4811 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859820 4811 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859824 4811 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859827 4811 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859831 4811 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859834 4811 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859842 4811 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859846 4811 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859849 4811 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859853 4811 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859856 4811 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859860 4811 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859863 4811 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859866 4811 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859871 4811 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859874 4811 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859879 4811 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859882 4811 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859885 4811 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859889 4811 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859893 4811 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859896 4811 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859900 4811 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859903 4811 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859907 4811 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859910 4811 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859915 4811 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859919 4811 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859922 4811 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859925 4811 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859929 4811 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859932 4811 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859935 4811 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859939 4811 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859942 4811 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859945 4811 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859948 4811 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859952 4811 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859957 4811 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859961 4811 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859966 4811 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859970 4811 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859973 4811 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859976 4811 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859980 4811 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859984 4811 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859988 4811 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.859992 4811 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860068 4811 flags.go:64] FLAG: --address="0.0.0.0" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860076 4811 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860081 4811 flags.go:64] FLAG: --anonymous-auth="true" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860086 4811 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860101 4811 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860104 4811 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860109 4811 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860114 4811 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860118 4811 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860122 4811 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860126 4811 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860130 4811 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860134 4811 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860138 4811 flags.go:64] FLAG: --cgroup-root="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860142 4811 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860145 4811 flags.go:64] FLAG: --client-ca-file="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860149 4811 flags.go:64] FLAG: --cloud-config="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860152 4811 flags.go:64] FLAG: --cloud-provider="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860156 4811 flags.go:64] FLAG: 
--cluster-dns="[]" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860161 4811 flags.go:64] FLAG: --cluster-domain="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860165 4811 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860169 4811 flags.go:64] FLAG: --config-dir="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860172 4811 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860176 4811 flags.go:64] FLAG: --container-log-max-files="5" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860181 4811 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860184 4811 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860188 4811 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860192 4811 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860196 4811 flags.go:64] FLAG: --contention-profiling="false" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860200 4811 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860204 4811 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860208 4811 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860212 4811 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860217 4811 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860221 4811 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860226 4811 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860229 4811 flags.go:64] FLAG: --enable-load-reader="false" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860233 4811 flags.go:64] FLAG: --enable-server="true" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860237 4811 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860242 4811 flags.go:64] FLAG: --event-burst="100" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860245 4811 flags.go:64] FLAG: --event-qps="50" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860250 4811 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860253 4811 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860259 4811 flags.go:64] FLAG: --eviction-hard="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860267 4811 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860271 4811 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860276 4811 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860279 4811 flags.go:64] FLAG: --eviction-soft="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860283 4811 
flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860287 4811 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860292 4811 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860295 4811 flags.go:64] FLAG: --experimental-mounter-path="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860299 4811 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860303 4811 flags.go:64] FLAG: --fail-swap-on="true" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860306 4811 flags.go:64] FLAG: --feature-gates="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860310 4811 flags.go:64] FLAG: --file-check-frequency="20s" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860314 4811 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860318 4811 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860322 4811 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860325 4811 flags.go:64] FLAG: --healthz-port="10248" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860329 4811 flags.go:64] FLAG: --help="false" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860335 4811 flags.go:64] FLAG: --hostname-override="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860339 4811 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860342 4811 flags.go:64] FLAG: --http-check-frequency="20s" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860346 4811 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860351 4811 flags.go:64] FLAG: --image-credential-provider-config="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860355 4811 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860359 4811 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860364 4811 flags.go:64] FLAG: --image-service-endpoint="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860368 4811 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860373 4811 flags.go:64] FLAG: --kube-api-burst="100" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860377 4811 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860381 4811 flags.go:64] FLAG: --kube-api-qps="50" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860385 4811 flags.go:64] FLAG: --kube-reserved="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860389 4811 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860392 4811 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860396 4811 flags.go:64] FLAG: --kubelet-cgroups="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860400 4811 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860403 4811 flags.go:64] 
FLAG: --lock-file="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860409 4811 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860413 4811 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860417 4811 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860422 4811 flags.go:64] FLAG: --log-json-split-stream="false" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860426 4811 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860430 4811 flags.go:64] FLAG: --log-text-split-stream="false" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860434 4811 flags.go:64] FLAG: --logging-format="text" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860437 4811 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860441 4811 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860445 4811 flags.go:64] FLAG: --manifest-url="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860449 4811 flags.go:64] FLAG: --manifest-url-header="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860454 4811 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860459 4811 flags.go:64] FLAG: --max-open-files="1000000" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860463 4811 flags.go:64] FLAG: --max-pods="110" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860466 4811 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860470 4811 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860474 4811 flags.go:64] FLAG: --memory-manager-policy="None" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860479 4811 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860484 4811 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860498 4811 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860502 4811 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860511 4811 flags.go:64] FLAG: --node-status-max-images="50" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860515 4811 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860519 4811 flags.go:64] FLAG: --oom-score-adj="-999" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860523 4811 flags.go:64] FLAG: --pod-cidr="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860526 4811 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860533 4811 flags.go:64] FLAG: --pod-manifest-path="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860536 4811 flags.go:64] FLAG: --pod-max-pids="-1" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 
09:05:55.860540 4811 flags.go:64] FLAG: --pods-per-core="0" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860544 4811 flags.go:64] FLAG: --port="10250" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860548 4811 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860551 4811 flags.go:64] FLAG: --provider-id="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860555 4811 flags.go:64] FLAG: --qos-reserved="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860559 4811 flags.go:64] FLAG: --read-only-port="10255" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860563 4811 flags.go:64] FLAG: --register-node="true" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860567 4811 flags.go:64] FLAG: --register-schedulable="true" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860572 4811 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860578 4811 flags.go:64] FLAG: --registry-burst="10" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860582 4811 flags.go:64] FLAG: --registry-qps="5" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860585 4811 flags.go:64] FLAG: --reserved-cpus="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860589 4811 flags.go:64] FLAG: --reserved-memory="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860593 4811 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860598 4811 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860602 4811 flags.go:64] FLAG: --rotate-certificates="false" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860606 4811 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860609 4811 flags.go:64] FLAG: --runonce="false" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860613 4811 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860617 4811 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860639 4811 flags.go:64] FLAG: --seccomp-default="false" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860643 4811 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860646 4811 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860650 4811 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860654 4811 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860658 4811 flags.go:64] FLAG: --storage-driver-password="root" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860661 4811 flags.go:64] FLAG: --storage-driver-secure="false" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860665 4811 flags.go:64] FLAG: --storage-driver-table="stats" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860669 4811 flags.go:64] FLAG: --storage-driver-user="root" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860672 4811 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860676 4811 flags.go:64] FLAG: 
--sync-frequency="1m0s" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860680 4811 flags.go:64] FLAG: --system-cgroups="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860683 4811 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860689 4811 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860693 4811 flags.go:64] FLAG: --tls-cert-file="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860696 4811 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860702 4811 flags.go:64] FLAG: --tls-min-version="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860706 4811 flags.go:64] FLAG: --tls-private-key-file="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860710 4811 flags.go:64] FLAG: --topology-manager-policy="none" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860713 4811 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860717 4811 flags.go:64] FLAG: --topology-manager-scope="container" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860721 4811 flags.go:64] FLAG: --v="2" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860726 4811 flags.go:64] FLAG: --version="false" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860730 4811 flags.go:64] FLAG: --vmodule="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860735 4811 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.860739 4811 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.860838 4811 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.860844 4811 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.860848 4811 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.860853 4811 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.860857 4811 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.860861 4811 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.860866 4811 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.860869 4811 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.860873 4811 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.860877 4811 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.860880 4811 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.860884 4811 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.860887 4811 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.860891 4811 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.860895 4811 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.860899 4811 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.860902 4811 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.860905 4811 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.860908 4811 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.860911 4811 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.860914 4811 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.860919 4811 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.860923 4811 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.860926 4811 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.860930 4811 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.860933 4811 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.860936 4811 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.860940 4811 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.860943 4811 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.860946 4811 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.860950 4811 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 
09:05:55.860953 4811 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.860957 4811 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.860960 4811 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.860964 4811 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.860968 4811 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.860971 4811 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.860975 4811 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.860978 4811 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.860982 4811 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.860985 4811 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.860989 4811 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.860992 4811 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.860995 4811 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.860999 4811 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.861005 4811 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.861008 4811 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.861012 4811 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.861016 4811 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.861020 4811 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.861024 4811 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.861028 4811 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.861032 4811 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.861035 4811 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.861039 4811 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.861042 4811 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.861046 4811 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.861049 4811 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.861053 4811 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.861056 4811 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.861060 4811 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.861063 4811 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.861066 4811 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.861070 4811 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.861073 4811 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.861076 4811 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.861080 4811 feature_gate.go:330] unrecognized feature gate: Example Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.861083 4811 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.861086 4811 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.861090 4811 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.861094 4811 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.861099 4811 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true 
VolumeAttributesClass:false]} Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.869237 4811 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.869268 4811 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869339 4811 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869350 4811 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869355 4811 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869359 4811 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869362 4811 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869366 4811 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869370 4811 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869373 4811 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869376 4811 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869380 4811 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869383 4811 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869386 4811 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869389 4811 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869392 4811 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869395 4811 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869398 4811 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869401 4811 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869406 4811 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869412 4811 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869416 4811 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869419 4811 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869423 4811 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869426 4811 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869430 4811 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869433 4811 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869437 4811 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869441 4811 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869445 4811 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869449 4811 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869453 4811 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869456 4811 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869460 4811 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869463 4811 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869466 4811 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869469 4811 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869472 4811 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869476 4811 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869480 4811 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869484 4811 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869496 4811 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869500 4811 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869503 4811 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869506 4811 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869510 4811 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869513 4811 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869516 4811 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869519 4811 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869523 4811 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869526 4811 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869529 4811 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869532 4811 feature_gate.go:330] unrecognized feature gate: Example Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869535 4811 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869538 4811 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869541 4811 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869545 4811 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869548 4811 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869552 4811 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869557 4811 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869561 4811 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869566 4811 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869569 4811 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869573 4811 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869576 4811 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869580 4811 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869583 4811 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869586 4811 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869589 4811 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869592 4811 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869595 4811 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869599 4811 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869602 4811 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.869608 4811 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869750 4811 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869756 4811 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869760 4811 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869765 4811 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869769 4811 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869772 4811 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869776 4811 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869780 4811 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869783 4811 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869787 4811 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869790 4811 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869793 4811 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869796 4811 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869799 4811 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869802 4811 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869805 4811 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869809 4811 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869813 4811 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869818 4811 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869822 4811 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869826 4811 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869830 4811 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869833 4811 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869837 4811 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869840 4811 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869844 4811 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869847 4811 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869850 4811 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869854 4811 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869857 4811 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869860 4811 feature_gate.go:330] unrecognized feature gate: Example Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869863 4811 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869867 4811 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869870 4811 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869874 4811 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869877 4811 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869880 4811 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869883 4811 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869886 4811 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869890 4811 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869894 4811 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869898 4811 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869901 4811 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869905 4811 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869909 4811 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869912 4811 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869916 4811 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869920 4811 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869924 4811 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869927 4811 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869931 4811 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869934 4811 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869938 4811 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869941 4811 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869945 4811 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869949 4811 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869952 4811 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869955 4811 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869958 4811 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869962 4811 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869965 4811 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869968 4811 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869972 4811 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869975 4811 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869978 4811 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869981 4811 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869984 4811 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869987 4811 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869991 4811 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869994 4811 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.869997 4811 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.870003 4811 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.870445 4811 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.872912 4811 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.872988 4811 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.873804 4811 server.go:997] "Starting client certificate rotation"
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.873829 4811 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.873962 4811 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-03 07:45:44.083509043 +0000 UTC
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.874013 4811 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.886471 4811 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 22 09:05:55 crc kubenswrapper[4811]: E0122 09:05:55.887850 4811 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.26.94:6443: connect: connection refused" logger="UnhandledError"
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.889507 4811 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.899706 4811 log.go:25] "Validated CRI v1 runtime API"
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.919473 4811 log.go:25] "Validated CRI v1 image API"
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.920906 4811 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.924198 4811 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-22-09-01-24-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.924236 4811 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:49 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/containers/storage/overlay-containers/75d81934760b26101869fbd8e4b5954c62b019c1cc3e5a0c9f82ed8de46b3b22/userdata/shm:{mountpoint:/var/lib/containers/storage/overlay-containers/75d81934760b26101869fbd8e4b5954c62b019c1cc3e5a0c9f82ed8de46b3b22/userdata/shm major:0 minor:42 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:50 fsType:tmpfs blockSize:0} overlay_0-43:{mountpoint:/var/lib/containers/storage/overlay/94b752e0a51c0134b00ddef6dc7a933a9d7c1d9bdc88a18dae4192a0d557d623/merged major:0 minor:43 fsType:overlay blockSize:0}]
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.935677 4811 manager.go:217] Machine: {Timestamp:2026-01-22 09:05:55.93454064 +0000 UTC m=+0.256727763 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2445404 MemoryCapacity:25199480832 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:463fdb35-dd0c-4804-a85e-31cd33c59ce4 BootID:0066ca33-f035-4af3-9028-0da78d54d55e Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599742464 Type:vfs Inodes:1048576 HasInodes:true} {Device:overlay_0-43 DeviceMajor:0 DeviceMinor:43 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:49 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:50 Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/var/lib/containers/storage/overlay-containers/75d81934760b26101869fbd8e4b5954c62b019c1cc3e5a0c9f82ed8de46b3b22/userdata/shm DeviceMajor:0 DeviceMinor:42 Capacity:65536000 Type:vfs Inodes:3076108 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:5d:0f:d7 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:enp3s0 MacAddress:fa:16:3e:5d:0f:d7 Speed:-1 Mtu:1500} {Name:enp7s0 MacAddress:fa:16:3e:8b:4a:44 Speed:-1 Mtu:1440} {Name:enp7s0.20 MacAddress:52:54:00:9e:8a:ec Speed:-1 Mtu:1436} {Name:enp7s0.21 MacAddress:52:54:00:4a:f5:4b Speed:-1 Mtu:1436} {Name:enp7s0.22 MacAddress:52:54:00:98:c0:1b Speed:-1 Mtu:1436} {Name:enp7s0.23 MacAddress:52:54:00:3c:79:a8 Speed:-1 Mtu:1436} {Name:eth10 MacAddress:ce:3a:60:87:92:47 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:be:41:e5:47:17:4a Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199480832 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:65536 Type:Data Level:1} {Id:0 Size:65536 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:65536 Type:Data Level:1} {Id:1 Size:65536 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:65536 Type:Data Level:1} {Id:2 Size:65536 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:65536 Type:Data Level:1} {Id:3 Size:65536 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:65536 Type:Data Level:1} {Id:4 Size:65536 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:65536 Type:Data Level:1} {Id:5 Size:65536 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:65536 Type:Data Level:1} {Id:6 Size:65536 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:65536 Type:Data Level:1} {Id:7 Size:65536 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.935836 4811 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.935929 4811 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.936179 4811 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.936332 4811 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.936358 4811 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.936544 4811 topology_manager.go:138] "Creating topology manager with none policy"
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.936555 4811 container_manager_linux.go:303] "Creating device plugin manager"
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.936987 4811 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.937018 4811 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.937113 4811 state_mem.go:36] "Initialized new in-memory state store"
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.937384 4811 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.939410 4811 kubelet.go:418] "Attempting to sync node with API server"
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.939432 4811 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.939452 4811 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.939469 4811 kubelet.go:324] "Adding apiserver pod source"
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.939480 4811 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.942885 4811 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.943557 4811 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.26.94:6443: connect: connection refused
Jan 22 09:05:55 crc kubenswrapper[4811]: E0122 09:05:55.943636 4811 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.26.94:6443: connect: connection refused" logger="UnhandledError"
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.943609 4811 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 192.168.26.94:6443: connect: connection refused
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.943656 4811 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 22 09:05:55 crc kubenswrapper[4811]: E0122 09:05:55.943673 4811 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 192.168.26.94:6443: connect: connection refused" logger="UnhandledError"
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.946887 4811 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.947739 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.947762 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.947770 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.947778 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.947792 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.947801 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.947808 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.947818 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.947829 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.947837 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.947855 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.947862 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.948234 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.949287 4811 server.go:1280] "Started kubelet"
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.949532 4811 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 192.168.26.94:6443: connect: connection refused
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.950295 4811 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.950325 4811 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 22 09:05:55 crc systemd[1]: Started Kubernetes Kubelet.
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.950824 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.950838 4811 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.950848 4811 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.951144 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 16:15:20.436062966 +0000 UTC
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.951770 4811 server.go:460] "Adding debug handlers to kubelet server"
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.952107 4811 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.953107 4811 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 22 09:05:55 crc kubenswrapper[4811]: E0122 09:05:55.952250 4811 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.953088 4811 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.952807 4811 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.26.94:6443: connect: connection refused
Jan 22 09:05:55 crc kubenswrapper[4811]: E0122 09:05:55.953479 4811 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.26.94:6443: connect: connection refused" logger="UnhandledError"
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.953036 4811 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.953601 4811 factory.go:55] Registering systemd factory
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.953661 4811 factory.go:221] Registration of the systemd container factory successfully
Jan 22 09:05:55 crc kubenswrapper[4811]: E0122 09:05:55.954053 4811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.26.94:6443: connect: connection refused" interval="200ms"
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.954299 4811 factory.go:153] Registering CRI-O factory
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.954325 4811 factory.go:221] Registration of the crio container factory successfully
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.954360 4811 factory.go:103] Registering Raw factory
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.954382 4811 manager.go:1196] Started watching for new ooms in manager
Jan 22 09:05:55 crc kubenswrapper[4811]: E0122 09:05:55.952216 4811 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 192.168.26.94:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188d024c702d0bd3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:05:55.949243347 +0000 UTC m=+0.271430470,LastTimestamp:2026-01-22 09:05:55.949243347 +0000 UTC m=+0.271430470,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.955179 4811 manager.go:319] Starting recovery of all containers
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.959853 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.959886 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.959919 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.959951 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.959970 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.959982 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.959994 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960005 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960058 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960069 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960078 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960098 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960107 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960117 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960137 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960146 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960156 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960165 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960174 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960192 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960201 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960212 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960224 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960233 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960241 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960253 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960264 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960283 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960292 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960301 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960310 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960319 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960329 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960337 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960345 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960363 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960382 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960390 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960420 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960429 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960451 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960459 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960467 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960524 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960555 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960564 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960573 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960581 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960590 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960599 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960608 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960640 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960664 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960673 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960682 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960691 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960716 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960724 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960733 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960771 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960779 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960788 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960796 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960804 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960812 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960820 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960871 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960889 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960908 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960926 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960935 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960942 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960960 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960968 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.960995 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961014 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961043 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961051 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961058 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961065 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961074 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961102 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961110 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961127 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961136 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961153 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961161 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961168 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961177 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961183 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961190 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961213 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961227 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961239 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961275 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961284 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961292 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961299 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961306 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961333 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961428 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961467 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961477 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961484 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961580 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961593 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961601 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961620 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961642 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961650 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961679 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961713 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961721 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961734 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961742 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961774 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961809 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961826 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961836 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961843 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961852 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961859 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961884 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961900 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961908 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961918 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961925 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961932 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961942 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961968 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.961975 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.962017 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.962040 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.962059 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.962065 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.962072 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.962089 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.962095 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.962102 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.962123 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.962130 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486"
volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.962147 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.962153 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.962179 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.962186 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.962193 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.962220 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.962236 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.962242 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.962259 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.962285 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.966860 4811 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.966909 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.966929 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.966948 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.966991 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967008 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967026 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967040 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967054 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967069 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967086 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967100 4811 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967113 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967135 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967151 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967163 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967185 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967198 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967212 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967225 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967237 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967251 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967262 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967276 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967292 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967302 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967313 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967323 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967336 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967348 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967357 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967376 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967386 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967398 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967411 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967421 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967430 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967445 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967456 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967470 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967483 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967502 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967515 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967528 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967541 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967552 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967561 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967572 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967584 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967601 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967609 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967618 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967645 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967654 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967665 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967674 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" 
volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967682 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967694 4811 reconstruct.go:97] "Volume reconstruction finished" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.967702 4811 reconciler.go:26] "Reconciler: start to sync state" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.981140 4811 manager.go:324] Recovery completed Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.989071 4811 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.990764 4811 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.990811 4811 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.990839 4811 kubelet.go:2335] "Starting kubelet main sync loop" Jan 22 09:05:55 crc kubenswrapper[4811]: E0122 09:05:55.990887 4811 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 22 09:05:55 crc kubenswrapper[4811]: W0122 09:05:55.991647 4811 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.26.94:6443: connect: connection refused Jan 22 09:05:55 crc kubenswrapper[4811]: E0122 09:05:55.991696 4811 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.26.94:6443: connect: connection refused" logger="UnhandledError" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.992978 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.994236 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.994280 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.994297 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.996170 4811 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.996189 4811 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 22 09:05:55 crc kubenswrapper[4811]: I0122 09:05:55.996205 4811 state_mem.go:36] "Initialized new in-memory state store" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.002639 4811 policy_none.go:49] "None policy: Start" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.003912 4811 memory_manager.go:170] "Starting 
memorymanager" policy="None" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.003956 4811 state_mem.go:35] "Initializing new in-memory state store" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.045411 4811 manager.go:334] "Starting Device Plugin manager" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.045659 4811 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.045672 4811 server.go:79] "Starting device plugin registration server" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.045988 4811 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.046006 4811 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.046173 4811 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.046275 4811 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.046286 4811 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 22 09:05:56 crc kubenswrapper[4811]: E0122 09:05:56.052600 4811 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.091821 4811 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.091907 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.093211 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.093259 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.093278 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.093440 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.093652 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.093690 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.094351 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.094378 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.094432 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.094431 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.094595 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.094609 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.094976 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.095085 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.095136 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.096031 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.096060 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.096070 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.096238 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.096337 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.096411 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.096781 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.096803 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.096845 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.097079 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.097103 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.097113 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.097285 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.098007 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.098229 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.098171 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.098304 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.098326 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.098530 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.098552 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.098564 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.098783 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.098825 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.100095 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.100146 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.100150 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.100156 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.100217 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.100248 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.146244 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.147173 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.147213 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.147225 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.147254 4811 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 09:05:56 crc kubenswrapper[4811]: E0122 09:05:56.147897 4811 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.26.94:6443: connect: connection refused" node="crc" Jan 22 09:05:56 crc kubenswrapper[4811]: E0122 09:05:56.155387 4811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.26.94:6443: connect: connection refused" interval="400ms" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.169478 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.169519 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.169569 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.169589 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.169607 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.169653 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.169673 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.169692 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.169727 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.169752 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.169848 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.169904 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.169955 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.169993 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.170026 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.271153 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.271194 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.271218 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.271244 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.271272 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.271293 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.271314 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.271334 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.271338 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.271409 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.271355 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.271350 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.271449 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.271600 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.271612 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.271557 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" 
(UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.271578 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.271577 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.271544 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.271552 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.271693 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.271756 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.271775 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.271797 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.271802 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.271818 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.271849 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.271892 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.271910 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.271934 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.348434 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.349920 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.349950 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.349959 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.349983 4811 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 09:05:56 crc kubenswrapper[4811]: E0122 09:05:56.350281 4811 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.26.94:6443: connect: connection refused" node="crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.422664 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.426578 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.434420 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: W0122 09:05:56.447840 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-47ac9936fe4139bcf0a613207f756257f718d30a49b57b4bb151b33fa59fdce3 WatchSource:0}: Error finding container 47ac9936fe4139bcf0a613207f756257f718d30a49b57b4bb151b33fa59fdce3: Status 404 returned error can't find the container with id 47ac9936fe4139bcf0a613207f756257f718d30a49b57b4bb151b33fa59fdce3 Jan 22 09:05:56 crc kubenswrapper[4811]: W0122 09:05:56.449558 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-ad82174971331ad81d898bec2224126bae636af78f80c817f0ee21f0acf74c9f WatchSource:0}: Error finding container ad82174971331ad81d898bec2224126bae636af78f80c817f0ee21f0acf74c9f: Status 404 returned error can't find the container with id ad82174971331ad81d898bec2224126bae636af78f80c817f0ee21f0acf74c9f Jan 22 09:05:56 crc kubenswrapper[4811]: W0122 09:05:56.450438 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-a27f1a6375a03f5f61de0c5eb8ff6e550425f50e6b60537be0fde3e3bac7d038 WatchSource:0}: Error finding container a27f1a6375a03f5f61de0c5eb8ff6e550425f50e6b60537be0fde3e3bac7d038: Status 404 returned error can't find the container with id a27f1a6375a03f5f61de0c5eb8ff6e550425f50e6b60537be0fde3e3bac7d038 Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.461268 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.466299 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:05:56 crc kubenswrapper[4811]: W0122 09:05:56.477700 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-da01dd7fd28cd967d821a0e7bb970b31da8971d6db2db92ecf8f87d2508de75a WatchSource:0}: Error finding container da01dd7fd28cd967d821a0e7bb970b31da8971d6db2db92ecf8f87d2508de75a: Status 404 returned error can't find the container with id da01dd7fd28cd967d821a0e7bb970b31da8971d6db2db92ecf8f87d2508de75a Jan 22 09:05:56 crc kubenswrapper[4811]: E0122 09:05:56.556618 4811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.26.94:6443: connect: connection refused" interval="800ms" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.751082 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.752465 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.752524 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.752536 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.752570 4811 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 09:05:56 crc kubenswrapper[4811]: E0122 09:05:56.753024 4811 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.26.94:6443: connect: connection refused" node="crc" Jan 22 09:05:56 crc kubenswrapper[4811]: W0122 09:05:56.778083 4811 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 192.168.26.94:6443: connect: connection refused Jan 22 09:05:56 crc kubenswrapper[4811]: E0122 09:05:56.778179 4811 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 192.168.26.94:6443: connect: connection refused" logger="UnhandledError" Jan 22 09:05:56 crc kubenswrapper[4811]: W0122 09:05:56.805852 4811 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.26.94:6443: connect: connection refused Jan 22 09:05:56 crc kubenswrapper[4811]: E0122 09:05:56.805922 4811 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.26.94:6443: connect: connection refused" logger="UnhandledError" Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.950571 
Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.950571 4811 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 192.168.26.94:6443: connect: connection refused
Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.951514 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 23:37:54.189824534 +0000 UTC
Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.995798 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4419625f66ae957b2d9b146fc8b085a4a95c3f47860d6b4847f8fe71a6c4c86a"}
Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.995917 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7a816ff5ba49f0c6d19bbf9f47f0866814e08605c80b338b49964d40079fea27"}
Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.998042 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"1b13581fc07c90a3dba27e04a1079d8a678635bf14c0d5463cd7b337f0f40a4e"}
Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.998015 4811 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="1b13581fc07c90a3dba27e04a1079d8a678635bf14c0d5463cd7b337f0f40a4e" exitCode=0
Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.998176 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.998171 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"da01dd7fd28cd967d821a0e7bb970b31da8971d6db2db92ecf8f87d2508de75a"}
Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.999375 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.999418 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:05:56 crc kubenswrapper[4811]: I0122 09:05:56.999433 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:05:57 crc kubenswrapper[4811]: I0122 09:05:57.000195 4811 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37" exitCode=0
Jan 22 09:05:57 crc kubenswrapper[4811]: I0122 09:05:57.000240 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37"}
Jan 22 09:05:57 crc kubenswrapper[4811]: I0122 09:05:57.000273 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a27f1a6375a03f5f61de0c5eb8ff6e550425f50e6b60537be0fde3e3bac7d038"}
Jan 22 09:05:57 crc kubenswrapper[4811]: I0122 09:05:57.000420 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:05:57 crc kubenswrapper[4811]: I0122 09:05:57.001401 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:05:57 crc kubenswrapper[4811]: I0122 09:05:57.001439 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:05:57 crc kubenswrapper[4811]: I0122 09:05:57.001449 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:05:57 crc kubenswrapper[4811]: I0122 09:05:57.002587 4811 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa" exitCode=0
Jan 22 09:05:57 crc kubenswrapper[4811]: I0122 09:05:57.002668 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa"}
Jan 22 09:05:57 crc kubenswrapper[4811]: I0122 09:05:57.002726 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ad82174971331ad81d898bec2224126bae636af78f80c817f0ee21f0acf74c9f"}
Jan 22 09:05:57 crc kubenswrapper[4811]: I0122 09:05:57.002891 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:05:57 crc kubenswrapper[4811]: I0122 09:05:57.004277 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:05:57 crc kubenswrapper[4811]: I0122 09:05:57.004944 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"c2e1da6db1e9abd0c7f5dafb9d1cabf171e05168586b5bfcee84df0a7408e847"}
Jan 22 09:05:57 crc kubenswrapper[4811]: I0122 09:05:57.004969 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:05:57 crc kubenswrapper[4811]: I0122 09:05:57.005010 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:05:57 crc kubenswrapper[4811]: I0122 09:05:57.005051 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:05:57 crc kubenswrapper[4811]: I0122 09:05:57.004921 4811 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="c2e1da6db1e9abd0c7f5dafb9d1cabf171e05168586b5bfcee84df0a7408e847" exitCode=0
Jan 22 09:05:57 crc kubenswrapper[4811]: I0122 09:05:57.005382 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"47ac9936fe4139bcf0a613207f756257f718d30a49b57b4bb151b33fa59fdce3"}
Jan 22 09:05:57 crc kubenswrapper[4811]: I0122 09:05:57.004374 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:05:57 crc kubenswrapper[4811]: I0122 09:05:57.006178 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:05:57 crc kubenswrapper[4811]: I0122 09:05:57.006212 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:05:57 crc kubenswrapper[4811]: I0122 09:05:57.006228 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:05:57 crc kubenswrapper[4811]: I0122 09:05:57.006451 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:05:57 crc kubenswrapper[4811]: I0122 09:05:57.006476 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:05:57 crc kubenswrapper[4811]: I0122 09:05:57.006486 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:05:57 crc kubenswrapper[4811]: W0122 09:05:57.106463 4811 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.26.94:6443: connect: connection refused
Jan 22 09:05:57 crc kubenswrapper[4811]: E0122 09:05:57.106555 4811 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.26.94:6443: connect: connection refused" logger="UnhandledError"
Jan 22 09:05:57 crc kubenswrapper[4811]: W0122 09:05:57.256164 4811 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.26.94:6443: connect: connection refused
Jan 22 09:05:57 crc kubenswrapper[4811]: E0122 09:05:57.256256 4811 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.26.94:6443: connect: connection refused" logger="UnhandledError"
Jan 22 09:05:57 crc kubenswrapper[4811]: E0122 09:05:57.357569 4811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.26.94:6443: connect: connection refused" interval="1.6s"
Jan 22 09:05:57 crc kubenswrapper[4811]: I0122 09:05:57.553678 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:05:57 crc kubenswrapper[4811]: I0122 09:05:57.554711 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:05:57 crc kubenswrapper[4811]: I0122 09:05:57.554743 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:05:57 crc kubenswrapper[4811]: I0122 09:05:57.554753 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:05:57 crc kubenswrapper[4811]: I0122 09:05:57.554783 4811 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 22 09:05:57 crc kubenswrapper[4811]: E0122 09:05:57.555160 4811 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.26.94:6443: connect: connection refused" node="crc"
Jan 22 09:05:57 crc kubenswrapper[4811]: E0122 09:05:57.644795 4811 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 192.168.26.94:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188d024c702d0bd3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:05:55.949243347 +0000 UTC m=+0.271430470,LastTimestamp:2026-01-22 09:05:55.949243347 +0000 UTC m=+0.271430470,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 09:05:57 crc kubenswrapper[4811]: I0122 09:05:57.952276 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 23:52:07.572344254 +0000 UTC
Jan 22 09:05:58 crc kubenswrapper[4811]: I0122 09:05:58.010981 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"45fa5d37dcd1dcae1a7488bb855e4bd7c7dc39a1199fe80b82f83604be089f2e"}
Jan 22 09:05:58 crc kubenswrapper[4811]: I0122 09:05:58.011029 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7830976d23e073a099c3c6af1fa2392d9504ad9d41ddf2ee63c16632c22f833a"}
Jan 22 09:05:58 crc kubenswrapper[4811]: I0122 09:05:58.011039 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"af3bf6cdac4751814ecd71637d9e80cbdda1c0e99714275fe77cfa858b3823b6"}
Jan 22 09:05:58 crc kubenswrapper[4811]: I0122 09:05:58.011043 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:05:58 crc kubenswrapper[4811]: I0122 09:05:58.011802 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:05:58 crc kubenswrapper[4811]: I0122 09:05:58.011838 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:05:58 crc kubenswrapper[4811]: I0122 09:05:58.011851 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:05:58 crc kubenswrapper[4811]: I0122 09:05:58.014437 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"72d5bfb65ec0c94b865b39227fa43cb243e05b615b8b6c8b2ce289357eb5488b"}
Jan 22 09:05:58 crc kubenswrapper[4811]: I0122 09:05:58.014480 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"78e58b4688b2704d8a02a27ef452d900419711bd51a2d64b9c05de7d3a02ffbe"}
Jan 22 09:05:58 crc kubenswrapper[4811]: I0122 09:05:58.014493 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d1a6925ee11245a252465e1a76bf8246c142164097c9c35f3467ab3d1650bc32"}
Jan 22 09:05:58 crc kubenswrapper[4811]: I0122 09:05:58.014659 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:05:58 crc kubenswrapper[4811]: I0122 09:05:58.015472 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:05:58 crc kubenswrapper[4811]: I0122 09:05:58.015524 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:05:58 crc kubenswrapper[4811]: I0122 09:05:58.015537 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:05:58 crc kubenswrapper[4811]: I0122 09:05:58.017568 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0"}
Jan 22 09:05:58 crc kubenswrapper[4811]: I0122 09:05:58.017606 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36"}
Jan 22 09:05:58 crc kubenswrapper[4811]: I0122 09:05:58.017618 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff"}
Jan 22 09:05:58 crc kubenswrapper[4811]: I0122 09:05:58.017645 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304"}
Jan 22 09:05:58 crc kubenswrapper[4811]: I0122 09:05:58.017656 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964"}
Jan 22 09:05:58 crc kubenswrapper[4811]: I0122 09:05:58.017743 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:05:58 crc kubenswrapper[4811]: I0122 09:05:58.018528 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:05:58 crc kubenswrapper[4811]: I0122 09:05:58.018554 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
node="crc" event="NodeHasSufficientPID" Jan 22 09:05:58 crc kubenswrapper[4811]: I0122 09:05:58.019179 4811 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd" exitCode=0 Jan 22 09:05:58 crc kubenswrapper[4811]: I0122 09:05:58.019246 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd"} Jan 22 09:05:58 crc kubenswrapper[4811]: I0122 09:05:58.019496 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:05:58 crc kubenswrapper[4811]: I0122 09:05:58.020262 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:05:58 crc kubenswrapper[4811]: I0122 09:05:58.020345 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:05:58 crc kubenswrapper[4811]: I0122 09:05:58.020398 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:05:58 crc kubenswrapper[4811]: I0122 09:05:58.021164 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"95195b2b1e3673feadcfaadaaba692abe8e0e9a6b2c8fb776c38187616e59c7b"} Jan 22 09:05:58 crc kubenswrapper[4811]: I0122 09:05:58.021291 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:05:58 crc kubenswrapper[4811]: I0122 09:05:58.022149 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:05:58 crc kubenswrapper[4811]: I0122 09:05:58.022220 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:05:58 crc kubenswrapper[4811]: I0122 09:05:58.022281 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:05:58 crc kubenswrapper[4811]: I0122 09:05:58.073495 4811 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 22 09:05:58 crc kubenswrapper[4811]: I0122 09:05:58.333748 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:05:58 crc kubenswrapper[4811]: I0122 09:05:58.339525 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:05:58 crc kubenswrapper[4811]: I0122 09:05:58.953004 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 02:37:59.143652442 +0000 UTC Jan 22 09:05:59 crc kubenswrapper[4811]: I0122 09:05:59.026577 4811 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d" exitCode=0 Jan 22 09:05:59 crc kubenswrapper[4811]: I0122 09:05:59.026739 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:05:59 crc kubenswrapper[4811]: 
I0122 09:05:59.027125 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d"} Jan 22 09:05:59 crc kubenswrapper[4811]: I0122 09:05:59.027162 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:05:59 crc kubenswrapper[4811]: I0122 09:05:59.027240 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:05:59 crc kubenswrapper[4811]: I0122 09:05:59.027500 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:05:59 crc kubenswrapper[4811]: I0122 09:05:59.027513 4811 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 09:05:59 crc kubenswrapper[4811]: I0122 09:05:59.027662 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:05:59 crc kubenswrapper[4811]: I0122 09:05:59.027697 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:05:59 crc kubenswrapper[4811]: I0122 09:05:59.027917 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:05:59 crc kubenswrapper[4811]: I0122 09:05:59.027960 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:05:59 crc kubenswrapper[4811]: I0122 09:05:59.027917 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:05:59 crc kubenswrapper[4811]: I0122 09:05:59.027973 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:05:59 crc kubenswrapper[4811]: I0122 09:05:59.027994 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:05:59 crc kubenswrapper[4811]: I0122 09:05:59.028006 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:05:59 crc kubenswrapper[4811]: I0122 09:05:59.028183 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:05:59 crc kubenswrapper[4811]: I0122 09:05:59.028197 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:05:59 crc kubenswrapper[4811]: I0122 09:05:59.028205 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:05:59 crc kubenswrapper[4811]: I0122 09:05:59.028591 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:05:59 crc kubenswrapper[4811]: I0122 09:05:59.028658 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:05:59 crc kubenswrapper[4811]: I0122 09:05:59.028674 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:05:59 crc kubenswrapper[4811]: I0122 09:05:59.028797 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:05:59 crc kubenswrapper[4811]: I0122 09:05:59.028819 4811 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:05:59 crc kubenswrapper[4811]: I0122 09:05:59.028829 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:05:59 crc kubenswrapper[4811]: I0122 09:05:59.155326 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:05:59 crc kubenswrapper[4811]: I0122 09:05:59.156357 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:05:59 crc kubenswrapper[4811]: I0122 09:05:59.156391 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:05:59 crc kubenswrapper[4811]: I0122 09:05:59.156404 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:05:59 crc kubenswrapper[4811]: I0122 09:05:59.156459 4811 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 09:05:59 crc kubenswrapper[4811]: I0122 09:05:59.193266 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:05:59 crc kubenswrapper[4811]: I0122 09:05:59.495678 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 09:05:59 crc kubenswrapper[4811]: I0122 09:05:59.805537 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:05:59 crc kubenswrapper[4811]: I0122 09:05:59.953362 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 20:42:38.769541481 +0000 UTC Jan 22 09:06:00 crc kubenswrapper[4811]: I0122 09:06:00.033671 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2b14bb77b0cf24bacdb8f5edae5d02ccea019b12feb315e6d6c4887feb1a9354"} Jan 22 09:06:00 crc kubenswrapper[4811]: I0122 09:06:00.033718 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"590bb419e3791ec3a267b9a5aa4022ee74ba5e302b047e85211563e135c4cf31"} Jan 22 09:06:00 crc kubenswrapper[4811]: I0122 09:06:00.033731 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"64382d1748f1eaeb9f2f09d243b35c56f9fd70b9a9765ddb64cb0c4866ed493b"} Jan 22 09:06:00 crc kubenswrapper[4811]: I0122 09:06:00.033739 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:06:00 crc kubenswrapper[4811]: I0122 09:06:00.033752 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:06:00 crc kubenswrapper[4811]: I0122 09:06:00.033741 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"684f8ca14b2eabcbf9280085d495af9a6f1fa76fb187f15b8fa5659178efdc93"} Jan 22 09:06:00 crc kubenswrapper[4811]: I0122 09:06:00.033798 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c0d10794bea5b8ea94b9247775e0c424fa9d481267c663d4bcae16eb4e7eb729"} Jan 22 09:06:00 crc kubenswrapper[4811]: I0122 09:06:00.033752 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:06:00 crc kubenswrapper[4811]: I0122 09:06:00.033855 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:06:00 crc kubenswrapper[4811]: I0122 09:06:00.034905 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:00 crc kubenswrapper[4811]: I0122 09:06:00.034923 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:00 crc kubenswrapper[4811]: I0122 09:06:00.034933 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:00 crc kubenswrapper[4811]: I0122 09:06:00.034934 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:00 crc kubenswrapper[4811]: I0122 09:06:00.034951 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:00 crc kubenswrapper[4811]: I0122 09:06:00.034959 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:00 crc kubenswrapper[4811]: I0122 09:06:00.034905 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:00 crc kubenswrapper[4811]: I0122 09:06:00.034989 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:00 crc kubenswrapper[4811]: I0122 09:06:00.034997 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:00 crc kubenswrapper[4811]: I0122 09:06:00.036363 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:00 crc kubenswrapper[4811]: I0122 09:06:00.036417 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:00 crc kubenswrapper[4811]: I0122 09:06:00.036440 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:00 crc kubenswrapper[4811]: I0122 09:06:00.305168 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:06:00 crc kubenswrapper[4811]: I0122 09:06:00.593533 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 22 09:06:00 crc kubenswrapper[4811]: I0122 09:06:00.954481 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 23:30:51.909055652 +0000 UTC Jan 22 09:06:01 crc kubenswrapper[4811]: I0122 09:06:01.035576 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:06:01 crc kubenswrapper[4811]: I0122 09:06:01.035965 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:06:01 crc 
kubenswrapper[4811]: I0122 09:06:01.035965 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:06:01 crc kubenswrapper[4811]: I0122 09:06:01.036518 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:01 crc kubenswrapper[4811]: I0122 09:06:01.036606 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:01 crc kubenswrapper[4811]: I0122 09:06:01.036648 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:01 crc kubenswrapper[4811]: I0122 09:06:01.036706 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:01 crc kubenswrapper[4811]: I0122 09:06:01.036733 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:01 crc kubenswrapper[4811]: I0122 09:06:01.036744 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:01 crc kubenswrapper[4811]: I0122 09:06:01.036835 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:01 crc kubenswrapper[4811]: I0122 09:06:01.036865 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:01 crc kubenswrapper[4811]: I0122 09:06:01.036874 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:01 crc kubenswrapper[4811]: I0122 09:06:01.954587 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 20:08:26.806084507 +0000 UTC Jan 22 09:06:02 crc kubenswrapper[4811]: I0122 09:06:02.038190 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:06:02 crc kubenswrapper[4811]: I0122 09:06:02.038190 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:06:02 crc kubenswrapper[4811]: I0122 09:06:02.039142 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:02 crc kubenswrapper[4811]: I0122 09:06:02.039240 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:02 crc kubenswrapper[4811]: I0122 09:06:02.039306 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:02 crc kubenswrapper[4811]: I0122 09:06:02.039261 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:02 crc kubenswrapper[4811]: I0122 09:06:02.039403 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:02 crc kubenswrapper[4811]: I0122 09:06:02.039413 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:02 crc kubenswrapper[4811]: I0122 09:06:02.356375 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:06:02 crc kubenswrapper[4811]: I0122 09:06:02.356496 4811 
Jan 22 09:06:02 crc kubenswrapper[4811]: I0122 09:06:02.356496 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:06:02 crc kubenswrapper[4811]: I0122 09:06:02.357362 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:06:02 crc kubenswrapper[4811]: I0122 09:06:02.357385 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:06:02 crc kubenswrapper[4811]: I0122 09:06:02.357393 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:06:02 crc kubenswrapper[4811]: I0122 09:06:02.955482 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 06:43:02.933668503 +0000 UTC
Jan 22 09:06:03 crc kubenswrapper[4811]: I0122 09:06:03.306238 4811 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 22 09:06:03 crc kubenswrapper[4811]: I0122 09:06:03.306328 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 22 09:06:03 crc kubenswrapper[4811]: I0122 09:06:03.956477 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 04:24:42.309631957 +0000 UTC
Jan 22 09:06:04 crc kubenswrapper[4811]: I0122 09:06:04.956845 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 23:25:12.698200563 +0000 UTC
Jan 22 09:06:05 crc kubenswrapper[4811]: I0122 09:06:05.957529 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 14:42:49.718976211 +0000 UTC
Jan 22 09:06:06 crc kubenswrapper[4811]: E0122 09:06:06.052745 4811 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 22 09:06:06 crc kubenswrapper[4811]: I0122 09:06:06.957760 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 03:57:25.303008387 +0000 UTC
Jan 22 09:06:07 crc kubenswrapper[4811]: I0122 09:06:07.298929 4811 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 22 09:06:07 crc kubenswrapper[4811]: I0122 09:06:07.298988 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 22 09:06:07 crc kubenswrapper[4811]: I0122 09:06:07.920802 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 09:06:07 crc kubenswrapper[4811]: I0122 09:06:07.921215 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:06:07 crc kubenswrapper[4811]: I0122 09:06:07.922359 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:06:07 crc kubenswrapper[4811]: I0122 09:06:07.922406 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:06:07 crc kubenswrapper[4811]: I0122 09:06:07.922420 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:06:07 crc kubenswrapper[4811]: I0122 09:06:07.925225 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 09:06:07 crc kubenswrapper[4811]: I0122 09:06:07.950713 4811 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Jan 22 09:06:07 crc kubenswrapper[4811]: I0122 09:06:07.958018 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 04:06:53.884989382 +0000 UTC
Jan 22 09:06:07 crc kubenswrapper[4811]: I0122 09:06:07.996777 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Jan 22 09:06:07 crc kubenswrapper[4811]: I0122 09:06:07.997107 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:06:07 crc kubenswrapper[4811]: I0122 09:06:07.998603 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:06:07 crc kubenswrapper[4811]: I0122 09:06:07.998656 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:06:07 crc kubenswrapper[4811]: I0122 09:06:07.998670 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:06:08 crc kubenswrapper[4811]: I0122 09:06:08.052805 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:06:08 crc kubenswrapper[4811]: I0122 09:06:08.053799 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:06:08 crc kubenswrapper[4811]: I0122 09:06:08.053859 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:06:08 crc kubenswrapper[4811]: I0122 09:06:08.053874 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:06:08 crc kubenswrapper[4811]: I0122 09:06:08.077735 4811 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 22 09:06:08 crc kubenswrapper[4811]: I0122 09:06:08.077796 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 22 09:06:08 crc kubenswrapper[4811]: I0122 09:06:08.081477 4811 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 22 09:06:08 crc kubenswrapper[4811]: I0122 09:06:08.081616 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 22 09:06:08 crc kubenswrapper[4811]: I0122 09:06:08.958482 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 09:25:41.596275387 +0000 UTC
Jan 22 09:06:09 crc kubenswrapper[4811]: I0122 09:06:09.810802 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 09:06:09 crc kubenswrapper[4811]: I0122 09:06:09.810961 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:06:09 crc kubenswrapper[4811]: I0122 09:06:09.811197 4811 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 22 09:06:09 crc kubenswrapper[4811]: I0122 09:06:09.811254 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 22 09:06:09 crc kubenswrapper[4811]: I0122 09:06:09.812245 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:06:09 crc kubenswrapper[4811]: I0122 09:06:09.812278 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:06:09 crc kubenswrapper[4811]: I0122 09:06:09.812288 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:06:09 crc kubenswrapper[4811]: I0122 09:06:09.814558 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 09:06:09 crc kubenswrapper[4811]: I0122 09:06:09.959946 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 15:34:36.551126534 +0000 UTC
Jan 22 09:06:10 crc kubenswrapper[4811]: I0122 09:06:10.058013 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 22 09:06:10 crc kubenswrapper[4811]: I0122 09:06:10.058331 4811 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 22 09:06:10 crc kubenswrapper[4811]: I0122 09:06:10.058429 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 22 09:06:10 crc kubenswrapper[4811]: I0122 09:06:10.058800 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:06:10 crc kubenswrapper[4811]: I0122 09:06:10.058887 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:06:10 crc kubenswrapper[4811]: I0122 09:06:10.058942 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:06:10 crc kubenswrapper[4811]: I0122 09:06:10.961056 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 17:37:59.262383842 +0000 UTC
Jan 22 09:06:11 crc kubenswrapper[4811]: I0122 09:06:11.962081 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 06:39:21.16612633 +0000 UTC
Jan 22 09:06:12 crc kubenswrapper[4811]: I0122 09:06:12.963131 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 01:51:28.232741403 +0000 UTC
Jan 22 09:06:13 crc kubenswrapper[4811]: E0122 09:06:13.067230 4811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s"
Jan 22 09:06:13 crc kubenswrapper[4811]: I0122 09:06:13.069045 4811 trace.go:236] Trace[324702279]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 09:05:59.396) (total time: 13672ms):
Jan 22 09:06:13 crc kubenswrapper[4811]: Trace[324702279]: ---"Objects listed" error: 13672ms (09:06:13.068)
Jan 22 09:06:13 crc kubenswrapper[4811]: Trace[324702279]: [13.672982613s] [13.672982613s] END
Jan 22 09:06:13 crc kubenswrapper[4811]: I0122 09:06:13.069063 4811 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 22 09:06:13 crc kubenswrapper[4811]: I0122 09:06:13.069400 4811 trace.go:236] Trace[689430946]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 09:05:59.416) (total time: 13652ms):
Jan 22 09:06:13 crc kubenswrapper[4811]: Trace[689430946]: ---"Objects listed" error: 13652ms (09:06:13.069)
Jan 22 09:06:13 crc kubenswrapper[4811]: Trace[689430946]: [13.652903912s] [13.652903912s] END
Jan 22 09:06:13 crc kubenswrapper[4811]: I0122 09:06:13.069417 4811 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 22 09:06:13 crc kubenswrapper[4811]: I0122 09:06:13.070573 4811 trace.go:236] Trace[2011975090]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 09:05:59.945) (total time: 13125ms):
Jan 22 09:06:13 crc kubenswrapper[4811]: Trace[2011975090]: ---"Objects listed" error:<nil> 13125ms (09:06:13.070)
Jan 22 09:06:13 crc kubenswrapper[4811]: Trace[2011975090]: [13.125326042s] [13.125326042s] END
Jan 22 09:06:13 crc kubenswrapper[4811]: I0122 09:06:13.070603 4811 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 22 09:06:13 crc kubenswrapper[4811]: I0122 09:06:13.070681 4811 trace.go:236] Trace[1712888053]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 09:05:59.994) (total time: 13075ms):
Jan 22 09:06:13 crc kubenswrapper[4811]: Trace[1712888053]: ---"Objects listed" error:<nil> 13075ms (09:06:13.070)
Jan 22 09:06:13 crc kubenswrapper[4811]: Trace[1712888053]: [13.075795595s] [13.075795595s] END
Jan 22 09:06:13 crc kubenswrapper[4811]: I0122 09:06:13.070702 4811 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 22 09:06:13 crc kubenswrapper[4811]: I0122 09:06:13.074882 4811 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 22 09:06:13 crc kubenswrapper[4811]: E0122 09:06:13.075006 4811 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Jan 22 09:06:13 crc kubenswrapper[4811]: I0122 09:06:13.092294 4811 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 22 09:06:13 crc kubenswrapper[4811]: I0122 09:06:13.305295 4811 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 22 09:06:13 crc kubenswrapper[4811]: I0122 09:06:13.305345 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 22 09:06:13 crc kubenswrapper[4811]: I0122 09:06:13.949851 4811 apiserver.go:52] "Watching apiserver"
Jan 22 09:06:13 crc kubenswrapper[4811]: I0122 09:06:13.952437 4811 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 22 09:06:13 crc kubenswrapper[4811]: I0122 09:06:13.952812 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"]
Jan 22 09:06:13 crc kubenswrapper[4811]: I0122 09:06:13.953176 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 22 09:06:13 crc kubenswrapper[4811]: I0122 09:06:13.953246 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 09:06:13 crc kubenswrapper[4811]: I0122 09:06:13.953324 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 22 09:06:13 crc kubenswrapper[4811]: I0122 09:06:13.953377 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 09:06:13 crc kubenswrapper[4811]: I0122 09:06:13.953741 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 09:06:13 crc kubenswrapper[4811]: E0122 09:06:13.953805 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 09:06:13 crc kubenswrapper[4811]: I0122 09:06:13.953896 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 22 09:06:13 crc kubenswrapper[4811]: E0122 09:06:13.953914 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:06:13 crc kubenswrapper[4811]: I0122 09:06:13.955074 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 22 09:06:13 crc kubenswrapper[4811]: I0122 09:06:13.955409 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 22 09:06:13 crc kubenswrapper[4811]: I0122 09:06:13.955483 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 22 09:06:13 crc kubenswrapper[4811]: I0122 09:06:13.955490 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 22 09:06:13 crc kubenswrapper[4811]: I0122 09:06:13.955712 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 22 09:06:13 crc kubenswrapper[4811]: I0122 09:06:13.956318 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 22 09:06:13 crc kubenswrapper[4811]: I0122 09:06:13.956413 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 22 09:06:13 crc kubenswrapper[4811]: I0122 09:06:13.956882 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 22 09:06:13 crc kubenswrapper[4811]: I0122 09:06:13.957328 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 22 09:06:13 crc kubenswrapper[4811]: I0122 09:06:13.963862 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 00:22:45.528638215 +0000 UTC Jan 22 09:06:13 crc kubenswrapper[4811]: I0122 09:06:13.984888 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:06:13 crc kubenswrapper[4811]: I0122 09:06:13.996783 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 09:06:13 crc kubenswrapper[4811]: I0122 09:06:13.996834 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:06:13 crc kubenswrapper[4811]: I0122 09:06:13.996858 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 09:06:13 crc kubenswrapper[4811]: I0122 09:06:13.996877 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 09:06:13 crc kubenswrapper[4811]: I0122 09:06:13.996895 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 09:06:13 crc kubenswrapper[4811]: I0122 09:06:13.996911 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:06:13 crc kubenswrapper[4811]: I0122 09:06:13.996929 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 09:06:13 crc kubenswrapper[4811]: I0122 09:06:13.996948 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 09:06:13 crc kubenswrapper[4811]: I0122 09:06:13.996971 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 09:06:13 crc kubenswrapper[4811]: I0122 09:06:13.996990 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 09:06:13 crc kubenswrapper[4811]: I0122 09:06:13.997007 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 09:06:13 crc kubenswrapper[4811]: I0122 09:06:13.997023 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 
09:06:13 crc kubenswrapper[4811]: I0122 09:06:13.997968 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 09:06:13 crc kubenswrapper[4811]: I0122 09:06:13.998162 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 09:06:13 crc kubenswrapper[4811]: I0122 09:06:13.998324 4811 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 22 09:06:13 crc kubenswrapper[4811]: I0122 09:06:13.998482 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.003753 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.005229 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.010254 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.012120 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 09:06:14 crc kubenswrapper[4811]: E0122 09:06:14.012710 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 09:06:14 crc kubenswrapper[4811]: E0122 09:06:14.012768 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 09:06:14 crc kubenswrapper[4811]: E0122 09:06:14.012788 4811 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:06:14 crc kubenswrapper[4811]: E0122 09:06:14.012871 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 09:06:14.512852922 +0000 UTC m=+18.835040045 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:06:14 crc kubenswrapper[4811]: E0122 09:06:14.013175 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 09:06:14 crc kubenswrapper[4811]: E0122 09:06:14.013199 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 09:06:14 crc kubenswrapper[4811]: E0122 09:06:14.013209 4811 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:06:14 crc kubenswrapper[4811]: E0122 09:06:14.013258 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 09:06:14.51324916 +0000 UTC m=+18.835436282 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.014360 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.018219 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.021965 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.030345 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.038615 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.038615 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.054112 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.054843 4811 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.066489 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.067779 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.069112 4811 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0" exitCode=255
Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.069208 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0"}
Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.079474 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.084146 4811 scope.go:117] "RemoveContainer" containerID="a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0"
Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.085909 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.091675 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.097635 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.097675 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.097704 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.097722 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.097742 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.097758 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.097801 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.097835 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.097889 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.097907 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.097926 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.097949 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.097965 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.097980 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.097997 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098015 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098031 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098049 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098028 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098079 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098097 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098113 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098129 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098145 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098179 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098195 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098212 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098228 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098241 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098256 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098271 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098286 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098302 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098318 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098336 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098355 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098372 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098389 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098405 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098421 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098437 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098458 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098474 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098493 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098511 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098528 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098544 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098560 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098576 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098594 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098611 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098640 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098657 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098673 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098701 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098716 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098732 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098749 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098768 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098788 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098803 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098821 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098838 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098855 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: 
\"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098870 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098885 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098899 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098915 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098929 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098948 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098964 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098979 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.098996 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.099013 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.099030 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.099048 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.099054 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.099063 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.099121 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.099145 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.099163 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.099167 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.099181 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.099251 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.099293 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.099313 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.099333 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.099372 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.099393 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.099432 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.099456 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.099478 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.099525 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.099545 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.099562 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.099580 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.099612 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.099659 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.099677 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.099740 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.099757 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.099775 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.099792 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.099832 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.099848 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.099887 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.099907 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.099923 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.099939 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.099969 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.099987 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.100013 4811 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.100046 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.100065 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.100084 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.100102 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.100134 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.100153 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.100169 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.100470 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.100492 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.100586 4811 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.100976 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.101091 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.101124 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.101143 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: 
\"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.101577 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.101779 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.101808 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.102221 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.102249 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.102265 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.102359 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.102377 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.102393 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.102500 4811 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.102521 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.102538 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.102567 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.102583 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.102597 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.102612 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.102712 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.102733 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.102769 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 22 09:06:14 crc 
kubenswrapper[4811]: I0122 09:06:14.102788 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.102805 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.102822 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.102877 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.102895 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.102913 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.102928 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.102945 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.102964 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.102982 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.102999 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.103017 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.103036 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.103054 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.103069 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.103087 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.103102 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.103117 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.103212 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.103251 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.103271 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.103289 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.103307 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.103340 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.103361 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.103378 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.104612 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.104789 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.106695 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.106797 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.106874 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.106946 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.107041 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.107121 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.107188 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.107253 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.107321 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.111282 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.111434 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.111522 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.111605 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.114493 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.114528 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.114555 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.114576 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.114595 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.114613 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.114653 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.114673 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.114699 4811 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.114715 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.114733 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.114749 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.114766 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.114809 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.114833 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.114886 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.115594 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.115678 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: 
\"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.115698 4811 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.115709 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.115719 4811 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.099266 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.099399 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.099725 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.099844 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.099877 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.100004 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.100012 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.100057 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.100193 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.100260 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.100392 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.100449 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.100482 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.100572 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.100724 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.100746 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.100860 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.100981 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.100999 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.101072 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.117135 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.101139 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.101334 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.101377 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.101392 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.101401 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.101486 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.101492 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.103350 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.103413 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.103508 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.103557 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.103758 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.103940 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.104010 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.104162 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.104303 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.104335 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.104527 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.104751 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.105206 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.105553 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.105562 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.105798 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.106557 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.106599 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.106562 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.106484 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.106533 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.106772 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.106789 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.107286 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.107440 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.107764 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.108038 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.108054 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.108058 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.108239 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.108355 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.108561 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.108731 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.108829 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.112048 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.112125 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.112351 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.112400 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.112401 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.112740 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.112861 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.113022 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.113174 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.113401 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.113408 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.113572 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.113680 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.113732 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.113739 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.113974 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.114392 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.114579 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.114861 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.115127 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). 
InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: E0122 09:06:14.115805 4811 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.116483 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.116804 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.117430 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.119889 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.120086 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.120277 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.120493 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.121238 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.121464 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: E0122 09:06:14.121476 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 09:06:14.621450661 +0000 UTC m=+18.943637784 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.122401 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.122486 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.122497 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.122599 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.122640 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.122728 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.122847 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.122859 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.123231 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.123282 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.123385 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.123413 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.123559 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.123569 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.125541 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.125760 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.125811 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.125892 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.126098 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.126195 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.126433 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.123836 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.124072 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.124146 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.124282 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.124326 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.124405 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.124575 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.124618 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.124566 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.124703 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.124747 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.124886 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.125001 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.125094 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.125121 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.125154 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.125254 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.125275 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.127475 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.128054 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.128126 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.127970 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.128291 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.128702 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.128728 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.129084 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.129117 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.129439 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.129643 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.130040 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.130183 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.130236 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: E0122 09:06:14.130278 4811 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.130325 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: E0122 09:06:14.130356 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 09:06:14.630336297 +0000 UTC m=+18.952523420 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.128872 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.130457 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.130614 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.130638 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.130699 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.130728 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.130828 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
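The nestedpendingoperations entry above shows the kubelet refusing to retry the failed MountVolume.SetUp for half a second (durationBeforeRetry 500ms). A minimal sketch of that per-operation delay; only the initial 500ms figure comes from the log, and doubling toward a cap is an assumption made for illustration:

package main

import (
    "fmt"
    "time"
)

// Sketch of the retry delay implied by "No retries permitted until ...
// (durationBeforeRetry 500ms)". The growth policy here is assumed.
func nextDelay(prev time.Duration) time.Duration {
    const initial = 500 * time.Millisecond
    const maxDelay = 2 * time.Minute
    if prev == 0 {
        return initial
    }
    if next := 2 * prev; next < maxDelay {
        return next
    }
    return maxDelay
}

func main() {
    var d time.Duration
    for i := 0; i < 5; i++ {
        d = nextDelay(d)
        fmt.Println(d) // 500ms 1s 2s 4s 8s
    }
}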
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.130915 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 09:06:14 crc kubenswrapper[4811]: E0122 09:06:14.131362 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:06:14.631344638 +0000 UTC m=+18.953531762 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.131492 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.131616 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.131700 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.131840 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.131852 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.132366 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.132090 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.133334 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.133366 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.133850 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.133985 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.134016 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.134017 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.134029 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.134198 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.134220 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.134257 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.134718 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.134729 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.134743 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.134690 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.137826 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.138553 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.139016 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.139122 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.139221 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.139263 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.140883 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.141707 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.141750 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.141758 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.141884 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.148228 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.149824 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.149827 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.150102 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.150960 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
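Every one of these status-patch failures ends the same way: the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743/pod is not yet listening, so the API server's call into it is refused and the whole patch is rejected. A quick probe sketch (hypothetical troubleshooting helper) for checking whether the port has come up:

package main

import (
    "fmt"
    "net"
    "time"
)

// Check whether the webhook endpoint the kubelet keeps failing to
// reach ("dial tcp 127.0.0.1:9743: connect: connection refused") is
// accepting TCP connections yet.
func main() {
    conn, err := net.DialTimeout("tcp", "127.0.0.1:9743", 2*time.Second)
    if err != nil {
        fmt.Println("webhook not reachable:", err)
        return
    }
    conn.Close()
    fmt.Println("webhook port is accepting connections")
}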
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.154760 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.155176 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.164058 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.169525 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.177435 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.183335 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.216892 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.216915 4811 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.216926 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.216937 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.216948 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.216959 4811 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.216970 4811 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.216982 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node 
\"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.216993 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217005 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217017 4811 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217027 4811 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217037 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217047 4811 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217056 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217064 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217075 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217084 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217092 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217101 4811 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217112 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: 
\"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217121 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217129 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217139 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217147 4811 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217157 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217165 4811 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217173 4811 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217181 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217190 4811 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217199 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217208 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217218 4811 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217228 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217238 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217248 4811 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217258 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217267 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217274 4811 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217283 4811 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217295 4811 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217305 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217316 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217328 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217338 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217350 4811 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217361 4811 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217372 4811 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217385 4811 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217394 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217404 4811 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217413 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217424 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217434 4811 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217444 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217453 4811 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217462 4811 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217472 4811 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217482 4811 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217491 4811 reconciler_common.go:293] "Volume 
detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217500 4811 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217512 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217521 4811 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217530 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217539 4811 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217549 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217557 4811 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217566 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217574 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217586 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217595 4811 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217603 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: 
I0122 09:06:14.217612 4811 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217622 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217646 4811 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217694 4811 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217705 4811 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217716 4811 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217724 4811 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217732 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217742 4811 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217750 4811 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217758 4811 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217767 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217774 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217784 4811 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217792 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217800 4811 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217810 4811 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217820 4811 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217828 4811 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217837 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217847 4811 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217856 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217864 4811 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217873 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217880 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217888 4811 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217896 4811 
reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217905 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217913 4811 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217922 4811 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217930 4811 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217938 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217946 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217956 4811 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217965 4811 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217974 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217984 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.217994 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218001 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218009 4811 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218018 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218025 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218040 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218049 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218058 4811 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218066 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218075 4811 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218083 4811 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218092 4811 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218100 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218109 4811 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218117 4811 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218125 4811 reconciler_common.go:293] 
"Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218133 4811 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218142 4811 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218152 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218165 4811 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218175 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218185 4811 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218195 4811 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218203 4811 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218212 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218221 4811 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218229 4811 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218238 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218246 4811 
reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218257 4811 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218266 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218274 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218282 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218290 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218299 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218307 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218316 4811 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218325 4811 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218334 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218341 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218350 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218357 4811 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218366 4811 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218374 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218383 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218392 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218400 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218409 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218418 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218427 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218435 4811 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218444 4811 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218453 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218461 4811 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218469 4811 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218478 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218487 4811 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218495 4811 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218504 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218512 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218521 4811 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218530 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218538 4811 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218547 4811 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218556 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218564 4811 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218573 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218580 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218590 4811 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218599 4811 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218607 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218616 4811 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218638 4811 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218647 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218656 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218665 4811 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218674 4811 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218682 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218703 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218713 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.218722 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.265292 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.271214 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 09:06:14 crc kubenswrapper[4811]: W0122 09:06:14.275290 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-b7f3f5d3a781088ca64b9832aced966b5293be45e587b7c8b5b623313a9e62eb WatchSource:0}: Error finding container b7f3f5d3a781088ca64b9832aced966b5293be45e587b7c8b5b623313a9e62eb: Status 404 returned error can't find the container with id b7f3f5d3a781088ca64b9832aced966b5293be45e587b7c8b5b623313a9e62eb Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.277454 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 09:06:14 crc kubenswrapper[4811]: W0122 09:06:14.290163 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-dbefc545c07cd1a4abe4216ac261eb9d29ce6691489a70463644d89ae10effe8 WatchSource:0}: Error finding container dbefc545c07cd1a4abe4216ac261eb9d29ce6691489a70463644d89ae10effe8: Status 404 returned error can't find the container with id dbefc545c07cd1a4abe4216ac261eb9d29ce6691489a70463644d89ae10effe8 Jan 22 09:06:14 crc kubenswrapper[4811]: W0122 09:06:14.291676 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-a69a7b88ee470eac9bc666c1914fee2e13c1af74dcb5e08a3bcbf8f6cc395434 WatchSource:0}: Error finding container a69a7b88ee470eac9bc666c1914fee2e13c1af74dcb5e08a3bcbf8f6cc395434: Status 404 returned error can't find the container with id a69a7b88ee470eac9bc666c1914fee2e13c1af74dcb5e08a3bcbf8f6cc395434 Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.520829 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.520887 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:06:14 crc kubenswrapper[4811]: E0122 09:06:14.521110 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 09:06:14 crc kubenswrapper[4811]: E0122 09:06:14.521135 4811 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 09:06:14 crc kubenswrapper[4811]: E0122 09:06:14.521150 4811 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:06:14 crc kubenswrapper[4811]: E0122 09:06:14.521169 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 09:06:14 crc kubenswrapper[4811]: E0122 09:06:14.521201 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 09:06:14 crc kubenswrapper[4811]: E0122 09:06:14.521216 4811 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:06:14 crc kubenswrapper[4811]: E0122 09:06:14.521255 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 09:06:15.52122207 +0000 UTC m=+19.843409192 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:06:14 crc kubenswrapper[4811]: E0122 09:06:14.521283 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 09:06:15.521273597 +0000 UTC m=+19.843460730 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.721710 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.721842 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:06:14 crc kubenswrapper[4811]: E0122 09:06:14.721941 4811 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 09:06:14 crc kubenswrapper[4811]: E0122 09:06:14.721947 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:06:15.721904941 +0000 UTC m=+20.044092065 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:06:14 crc kubenswrapper[4811]: E0122 09:06:14.722024 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 09:06:15.722002075 +0000 UTC m=+20.044189197 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.722069 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:06:14 crc kubenswrapper[4811]: E0122 09:06:14.722278 4811 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 09:06:14 crc kubenswrapper[4811]: E0122 09:06:14.722337 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 09:06:15.722327158 +0000 UTC m=+20.044514281 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 09:06:14 crc kubenswrapper[4811]: I0122 09:06:14.964655 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 18:05:57.537546911 +0000 UTC Jan 22 09:06:15 crc kubenswrapper[4811]: I0122 09:06:15.073152 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"a69a7b88ee470eac9bc666c1914fee2e13c1af74dcb5e08a3bcbf8f6cc395434"} Jan 22 09:06:15 crc kubenswrapper[4811]: I0122 09:06:15.074990 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"bae2e22607dec646b15563dc62f66655e7a5c2b7ad809406c907e5f687f80105"} Jan 22 09:06:15 crc kubenswrapper[4811]: I0122 09:06:15.075029 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"75e290e203fab8f291146855a960477ceb9c33d965d15b4722c668dc2b3fded3"} Jan 22 09:06:15 crc kubenswrapper[4811]: I0122 09:06:15.075054 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"dbefc545c07cd1a4abe4216ac261eb9d29ce6691489a70463644d89ae10effe8"} Jan 22 09:06:15 crc kubenswrapper[4811]: I0122 09:06:15.076980 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"6a14bfcc7d6d2ab5e1ad0c45999b2e73332e750ed65d9299e188505dafb26d0f"} Jan 22 09:06:15 crc kubenswrapper[4811]: I0122 09:06:15.077118 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"b7f3f5d3a781088ca64b9832aced966b5293be45e587b7c8b5b623313a9e62eb"} Jan 22 09:06:15 crc kubenswrapper[4811]: I0122 09:06:15.078985 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 22 09:06:15 crc kubenswrapper[4811]: I0122 09:06:15.080456 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d"} Jan 22 09:06:15 crc kubenswrapper[4811]: I0122 09:06:15.080794 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:06:15 crc kubenswrapper[4811]: I0122 09:06:15.103158 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:15Z is after 2025-08-24T17:21:41Z"
Jan 22 09:06:15 crc kubenswrapper[4811]: I0122 09:06:15.115677 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae2e22607dec646b15563dc62f66655e7a5c2b7ad809406c907e5f687f80105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e290e203fab8f291146855a960477ceb9c33d965d15b4722c668dc2b3fded3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:15Z is after 2025-08-24T17:21:41Z"
Jan 22 09:06:15 crc kubenswrapper[4811]: I0122 09:06:15.130811 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:15Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:15 crc kubenswrapper[4811]: I0122 09:06:15.143261 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:15Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:15 crc kubenswrapper[4811]: I0122 09:06:15.151451 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:15Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:15 crc kubenswrapper[4811]: I0122 09:06:15.160899 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:15Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:15 crc kubenswrapper[4811]: I0122 09:06:15.170375 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b44c40-e6d4-4902-98e9-a259269d8bf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22
T09:06:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 09:06:13.075349 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 09:06:13.075574 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:06:13.077171 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744166093/tls.crt::/tmp/serving-cert-1744166093/tls.key\\\\\\\"\\\\nI0122 09:06:13.324400 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:06:13.330700 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:06:13.330724 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:06:13.330758 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:06:13.330771 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:06:13.337838 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:06:13.337866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:06:13.337879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:06:13.337883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:06:13.337886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:06:13.338151 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:06:13.339610 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:15Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:15 crc kubenswrapper[4811]: I0122 09:06:15.180798 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:15Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:15 crc kubenswrapper[4811]: I0122 09:06:15.192750 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:15Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:15 crc kubenswrapper[4811]: I0122 09:06:15.202270 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:15Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:15 crc kubenswrapper[4811]: I0122 09:06:15.213889 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b44c40-e6d4-4902-98e9-a259269d8bf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 09:06:13.075349 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 09:06:13.075574 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:06:13.077171 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744166093/tls.crt::/tmp/serving-cert-1744166093/tls.key\\\\\\\"\\\\nI0122 09:06:13.324400 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:06:13.330700 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:06:13.330724 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:06:13.330758 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:06:13.330771 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:06:13.337838 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:06:13.337866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:06:13.337879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:06:13.337883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:06:13.337886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:06:13.338151 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:06:13.339610 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:15Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:15 crc kubenswrapper[4811]: I0122 09:06:15.224011 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a14bfcc7d6d2ab5e1ad0c45999b2e73332e750ed65d9299e188505dafb26d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:15Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:15 crc kubenswrapper[4811]: I0122 09:06:15.235322 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae2e22607dec646b15563dc62f66655e7a5c2b7ad809406c907e5f687f80105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e290e203fab8f291146855a960477ceb9c33d965d15b4722c668dc2b3fded3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:15Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:15 crc kubenswrapper[4811]: I0122 09:06:15.244286 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:15Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:15 crc kubenswrapper[4811]: I0122 09:06:15.529230 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:06:15 crc kubenswrapper[4811]: I0122 09:06:15.529610 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:06:15 crc kubenswrapper[4811]: E0122 09:06:15.529474 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 09:06:15 crc kubenswrapper[4811]: E0122 09:06:15.529886 4811 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 09:06:15 crc kubenswrapper[4811]: E0122 09:06:15.529973 4811 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:06:15 crc kubenswrapper[4811]: E0122 09:06:15.529724 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 09:06:15 crc kubenswrapper[4811]: E0122 09:06:15.530099 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 09:06:15 crc kubenswrapper[4811]: E0122 09:06:15.530118 4811 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:06:15 crc kubenswrapper[4811]: E0122 09:06:15.530213 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 09:06:17.530192137 +0000 UTC m=+21.852379270 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:06:15 crc kubenswrapper[4811]: E0122 09:06:15.530304 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 09:06:17.53029476 +0000 UTC m=+21.852481883 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:06:15 crc kubenswrapper[4811]: I0122 09:06:15.731512 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:06:15 crc kubenswrapper[4811]: I0122 09:06:15.731593 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:06:15 crc kubenswrapper[4811]: I0122 09:06:15.731614 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:06:15 crc kubenswrapper[4811]: E0122 09:06:15.731727 4811 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 09:06:15 crc kubenswrapper[4811]: E0122 09:06:15.731779 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 09:06:17.7317667 +0000 UTC m=+22.053953823 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 09:06:15 crc kubenswrapper[4811]: E0122 09:06:15.731833 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:06:17.731820752 +0000 UTC m=+22.054007876 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:06:15 crc kubenswrapper[4811]: E0122 09:06:15.731863 4811 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 09:06:15 crc kubenswrapper[4811]: E0122 09:06:15.731881 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 09:06:17.731876417 +0000 UTC m=+22.054063540 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 09:06:15 crc kubenswrapper[4811]: I0122 09:06:15.965668 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 11:59:01.031383612 +0000 UTC Jan 22 09:06:15 crc kubenswrapper[4811]: I0122 09:06:15.991614 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:06:15 crc kubenswrapper[4811]: E0122 09:06:15.991747 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:06:15 crc kubenswrapper[4811]: I0122 09:06:15.991796 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:06:15 crc kubenswrapper[4811]: E0122 09:06:15.991878 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:06:15 crc kubenswrapper[4811]: I0122 09:06:15.991933 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:06:15 crc kubenswrapper[4811]: E0122 09:06:15.992048 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:06:15 crc kubenswrapper[4811]: I0122 09:06:15.996191 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 22 09:06:15 crc kubenswrapper[4811]: I0122 09:06:15.996779 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 22 09:06:15 crc kubenswrapper[4811]: I0122 09:06:15.997784 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 22 09:06:15 crc kubenswrapper[4811]: I0122 09:06:15.998322 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 22 09:06:15 crc kubenswrapper[4811]: I0122 09:06:15.999190 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 22 09:06:15 crc kubenswrapper[4811]: I0122 09:06:15.999616 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.000159 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.001013 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.001394 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a14bfcc7d6d2ab5e1ad0c45999b2e73332e750ed65d9299e188505dafb26d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.001576 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.002394 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.002880 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.003801 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.004218 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.004696 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.005458 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.005926 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.006732 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.007087 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.007555 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.008529 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.008961 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.009801 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.010173 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.010817 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae2e22607dec646b15563dc62f66655e7a5c2b7ad809406c907e5f687f80105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e290e203fab8f291146855a960477ceb9c33d965d15b4722c668dc2b3fded3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.011071 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.011426 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.011976 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.012946 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.013381 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.014331 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.015847 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.016263 4811 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.016355 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.017924 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.018750 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.019100 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.019515 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.020541 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.021446 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.021942 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.022859 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.023400 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.024303 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.024848 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 22 09:06:16 crc 
kubenswrapper[4811]: I0122 09:06:16.025718 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.026232 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.026967 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.027426 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.028224 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.028350 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.028863 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.029604 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.030025 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.030769 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.031208 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.031703 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.032419 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.036283 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.044548 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.060298 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b44c40-e6d4-4902-98e9-a259269d8bf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 09:06:13.075349 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 09:06:13.075574 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:06:13.077171 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744166093/tls.crt::/tmp/serving-cert-1744166093/tls.key\\\\\\\"\\\\nI0122 09:06:13.324400 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:06:13.330700 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:06:13.330724 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:06:13.330758 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:06:13.330771 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:06:13.337838 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:06:13.337866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:06:13.337879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:06:13.337883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:06:13.337886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:06:13.338151 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:06:13.339610 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.085498 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"7f776b1bc70646779cf35092ded02c00aa3b990c873de09d21e1f0d76258b0b0"} Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.096224 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.104528 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f776b1bc70646779cf35092ded02c00aa3b990c873de09d21e1f0d76258b0b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.114104 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b44c40-e6d4-4902-98e9-a259269d8bf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 09:06:13.075349 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 09:06:13.075574 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:06:13.077171 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744166093/tls.crt::/tmp/serving-cert-1744166093/tls.key\\\\\\\"\\\\nI0122 09:06:13.324400 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:06:13.330700 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:06:13.330724 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:06:13.330758 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:06:13.330771 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:06:13.337838 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:06:13.337866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:06:13.337879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:06:13.337883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:06:13.337886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:06:13.338151 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:06:13.339610 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.122264 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.131190 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a14bfcc7d6d2ab5e1ad0c45999b2e73332e750ed65d9299e188505dafb26d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.140406 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae2e22607dec646b15563dc62f66655e7a5c2b7ad809406c907e5f687f80105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e290e203fab8f291146855a960477ceb9c33d965d15b4722c668dc2b3fded3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.147961 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.275432 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.276737 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.276771 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.276780 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.276817 4811 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.281555 4811 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.281747 4811 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.282452 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.282486 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.282496 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.282504 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.282512 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:16Z","lastTransitionTime":"2026-01-22T09:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:16 crc kubenswrapper[4811]: E0122 09:06:16.294684 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0066ca33-f035-4af3-9028-0da78d54d55e\\\",\\\"systemUUID\\\":\\\"463fdb35-dd0c-4804-a85e-31cd33c59ce4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.298753 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.298863 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.298925 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.298999 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.299053 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:16Z","lastTransitionTime":"2026-01-22T09:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:16 crc kubenswrapper[4811]: E0122 09:06:16.306968 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0066ca33-f035-4af3-9028-0da78d54d55e\\\",\\\"systemUUID\\\":\\\"463fdb35-dd0c-4804-a85e-31cd33c59ce4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.309210 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.309239 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.309248 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.309258 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.309266 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:16Z","lastTransitionTime":"2026-01-22T09:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:16 crc kubenswrapper[4811]: E0122 09:06:16.319238 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0066ca33-f035-4af3-9028-0da78d54d55e\\\",\\\"systemUUID\\\":\\\"463fdb35-dd0c-4804-a85e-31cd33c59ce4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.322247 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.322272 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.322280 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.322290 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.322298 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:16Z","lastTransitionTime":"2026-01-22T09:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:16 crc kubenswrapper[4811]: E0122 09:06:16.330017 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0066ca33-f035-4af3-9028-0da78d54d55e\\\",\\\"systemUUID\\\":\\\"463fdb35-dd0c-4804-a85e-31cd33c59ce4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.332213 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.332236 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.332244 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.332254 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.332261 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:16Z","lastTransitionTime":"2026-01-22T09:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:16 crc kubenswrapper[4811]: E0122 09:06:16.341200 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0066ca33-f035-4af3-9028-0da78d54d55e\\\",\\\"systemUUID\\\":\\\"463fdb35-dd0c-4804-a85e-31cd33c59ce4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:16 crc kubenswrapper[4811]: E0122 09:06:16.341306 4811 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.342148 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.342170 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.342178 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.342186 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.342194 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:16Z","lastTransitionTime":"2026-01-22T09:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.443154 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.443279 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.443348 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.443410 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.443466 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:16Z","lastTransitionTime":"2026-01-22T09:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.544885 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.545161 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.545238 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.545311 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.545377 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:16Z","lastTransitionTime":"2026-01-22T09:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.647444 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.647574 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.647680 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.647771 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.647825 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:16Z","lastTransitionTime":"2026-01-22T09:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.749147 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.749319 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.749407 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.749492 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.749566 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:16Z","lastTransitionTime":"2026-01-22T09:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.851900 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.851928 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.851936 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.851948 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.851957 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:16Z","lastTransitionTime":"2026-01-22T09:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.953839 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.953884 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.953893 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.953904 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.953912 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:16Z","lastTransitionTime":"2026-01-22T09:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:16 crc kubenswrapper[4811]: I0122 09:06:16.966161 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 23:06:39.068997008 +0000 UTC Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.055657 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.055725 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.055740 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.055758 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.055772 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:17Z","lastTransitionTime":"2026-01-22T09:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.157381 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.157420 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.157429 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.157442 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.157450 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:17Z","lastTransitionTime":"2026-01-22T09:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.259308 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.259330 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.259338 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.259348 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.259357 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:17Z","lastTransitionTime":"2026-01-22T09:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.361198 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.361220 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.361227 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.361235 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.361242 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:17Z","lastTransitionTime":"2026-01-22T09:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.462607 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.462654 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.462664 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.462674 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.462682 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:17Z","lastTransitionTime":"2026-01-22T09:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.541526 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.541567 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:06:17 crc kubenswrapper[4811]: E0122 09:06:17.541712 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 09:06:17 crc kubenswrapper[4811]: E0122 09:06:17.541731 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 09:06:17 crc kubenswrapper[4811]: E0122 09:06:17.541741 4811 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:06:17 crc kubenswrapper[4811]: E0122 09:06:17.541780 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 09:06:21.5417693 +0000 UTC m=+25.863956424 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:06:17 crc kubenswrapper[4811]: E0122 09:06:17.542030 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 09:06:17 crc kubenswrapper[4811]: E0122 09:06:17.542114 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 09:06:17 crc kubenswrapper[4811]: E0122 09:06:17.542183 4811 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:06:17 crc kubenswrapper[4811]: E0122 09:06:17.542299 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 09:06:21.542283802 +0000 UTC m=+25.864470924 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.563985 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.564022 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.564031 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.564043 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.564051 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:17Z","lastTransitionTime":"2026-01-22T09:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.665900 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.665949 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.665959 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.665970 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.665979 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:17Z","lastTransitionTime":"2026-01-22T09:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.678285 4811 csr.go:261] certificate signing request csr-cg7dh is approved, waiting to be issued Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.701931 4811 csr.go:257] certificate signing request csr-cg7dh is issued Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.745128 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:06:17 crc kubenswrapper[4811]: E0122 09:06:17.745303 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:06:21.745281027 +0000 UTC m=+26.067468150 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.745495 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:06:17 crc kubenswrapper[4811]: E0122 09:06:17.745572 4811 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 09:06:17 crc kubenswrapper[4811]: E0122 09:06:17.745643 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 09:06:21.745617031 +0000 UTC m=+26.067804144 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.745578 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:06:17 crc kubenswrapper[4811]: E0122 09:06:17.745821 4811 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 09:06:17 crc kubenswrapper[4811]: E0122 09:06:17.745918 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 09:06:21.745905495 +0000 UTC m=+26.068092619 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.746064 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-xhhs7"] Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.746300 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-xhhs7" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.749801 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.749865 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.749892 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.764664 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a14bfcc7d6d2ab5e1ad0c45999b2e73332e750ed65d9299e188505dafb26d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:17Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.767558 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.767580 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.767588 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.767600 4811 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.767609 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:17Z","lastTransitionTime":"2026-01-22T09:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.780509 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae2e22607dec646b15563dc62f66655e7a5c2b7ad809406c907e5f687f80105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e290e203fab8f291146855a960477ceb9c33d965d15b4722c668dc2b3fded3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:17Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.794490 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:17Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.807366 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b44c40-e6d4-4902-98e9-a259269d8bf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 09:06:13.075349 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 09:06:13.075574 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:06:13.077171 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744166093/tls.crt::/tmp/serving-cert-1744166093/tls.key\\\\\\\"\\\\nI0122 09:06:13.324400 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:06:13.330700 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:06:13.330724 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:06:13.330758 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:06:13.330771 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:06:13.337838 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:06:13.337866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:06:13.337879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:06:13.337883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:06:13.337886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:06:13.338151 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:06:13.339610 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:17Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.820116 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:17Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.846224 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcfzl\" (UniqueName: \"kubernetes.io/projected/f42dc1a6-d0c4-43e4-b9d9-b40c1f910400-kube-api-access-jcfzl\") pod \"node-resolver-xhhs7\" (UID: \"f42dc1a6-d0c4-43e4-b9d9-b40c1f910400\") " pod="openshift-dns/node-resolver-xhhs7" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.846255 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f42dc1a6-d0c4-43e4-b9d9-b40c1f910400-hosts-file\") pod \"node-resolver-xhhs7\" (UID: \"f42dc1a6-d0c4-43e4-b9d9-b40c1f910400\") " pod="openshift-dns/node-resolver-xhhs7" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.847286 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:17Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.863515 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f776b1bc70646779cf35092ded02c00aa3b990c873de09d21e1f0d76258b0b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:17Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.869212 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.869246 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.869255 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.869269 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.869278 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:17Z","lastTransitionTime":"2026-01-22T09:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.875923 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xhhs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42dc1a6-d0c4-43e4-b9d9-b40c1f910400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcfzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xhhs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:17Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.946866 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcfzl\" (UniqueName: \"kubernetes.io/projected/f42dc1a6-d0c4-43e4-b9d9-b40c1f910400-kube-api-access-jcfzl\") pod \"node-resolver-xhhs7\" (UID: \"f42dc1a6-d0c4-43e4-b9d9-b40c1f910400\") " pod="openshift-dns/node-resolver-xhhs7" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.946907 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f42dc1a6-d0c4-43e4-b9d9-b40c1f910400-hosts-file\") pod \"node-resolver-xhhs7\" (UID: \"f42dc1a6-d0c4-43e4-b9d9-b40c1f910400\") " pod="openshift-dns/node-resolver-xhhs7" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.946970 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f42dc1a6-d0c4-43e4-b9d9-b40c1f910400-hosts-file\") pod \"node-resolver-xhhs7\" (UID: \"f42dc1a6-d0c4-43e4-b9d9-b40c1f910400\") " pod="openshift-dns/node-resolver-xhhs7" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.968891 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 09:33:24.255285491 +0000 UTC Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.968968 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcfzl\" (UniqueName: \"kubernetes.io/projected/f42dc1a6-d0c4-43e4-b9d9-b40c1f910400-kube-api-access-jcfzl\") pod \"node-resolver-xhhs7\" (UID: \"f42dc1a6-d0c4-43e4-b9d9-b40c1f910400\") " pod="openshift-dns/node-resolver-xhhs7" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.971272 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.971303 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 
09:06:17.971312 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.971324 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.971332 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:17Z","lastTransitionTime":"2026-01-22T09:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.991756 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.991787 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:06:17 crc kubenswrapper[4811]: E0122 09:06:17.991867 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:06:17 crc kubenswrapper[4811]: I0122 09:06:17.991772 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:06:17 crc kubenswrapper[4811]: E0122 09:06:17.992006 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:06:17 crc kubenswrapper[4811]: E0122 09:06:17.992125 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.016925 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.028880 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:18Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.029494 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.040242 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.043136 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f776b1bc70646779cf35092ded02c00aa3b990c873de09d21e1f0d76258b0b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:18Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.054428 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xhhs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42dc1a6-d0c4-43e4-b9d9-b40c1f910400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcfzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xhhs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:18Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.056612 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-xhhs7" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.073702 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.073739 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.073748 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.073764 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.073772 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:18Z","lastTransitionTime":"2026-01-22T09:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.077987 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b44c40-e6d4-4902-98e9-a259269d8bf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 09:06:13.075349 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 09:06:13.075574 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:06:13.077171 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744166093/tls.crt::/tmp/serving-cert-1744166093/tls.key\\\\\\\"\\\\nI0122 09:06:13.324400 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:06:13.330700 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:06:13.330724 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:06:13.330758 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:06:13.330771 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:06:13.337838 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:06:13.337866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:06:13.337879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:06:13.337883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:06:13.337886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:06:13.338151 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:06:13.339610 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:18Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.088009 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:18Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.090883 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-xhhs7" event={"ID":"f42dc1a6-d0c4-43e4-b9d9-b40c1f910400","Type":"ContainerStarted","Data":"bc2808accdfbc460bc2ca0a0ad705c8206c18ad317e29908fd5620d25dbadf8b"} Jan 22 09:06:18 crc kubenswrapper[4811]: E0122 09:06:18.097580 4811 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.101201 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a14bfcc7d6d2ab5e1ad0c45999b2e73332e750ed65d9299e188505dafb26d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:18Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.109886 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae2e22607dec646b15563dc62f66655e7a5c2b7ad809406c907e5f687f80105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e290e203fab8f291146855a960477ceb9c33d965d15b4722c668dc2b3fded3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:18Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.119319 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:18Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.129606 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:18Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.142308 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:18Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.156799 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f776b1bc70646779cf35092ded02c00aa3b990c873de09d21e1f0d76258b0b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:18Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.164435 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xhhs7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42dc1a6-d0c4-43e4-b9d9-b40c1f910400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcfzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xhhs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:18Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.176737 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.176937 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.176947 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.176958 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.176968 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:18Z","lastTransitionTime":"2026-01-22T09:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.178945 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b44c40-e6d4-4902-98e9-a259269d8bf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserv
er-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 09:06:13.075349 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 09:06:13.075574 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:06:13.077171 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744166093/tls.crt::/tmp/serving-cert-1744166093/tls.key\\\\\\\"\\\\nI0122 09:06:13.324400 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:06:13.330700 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:06:13.330724 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:06:13.330758 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:06:13.330771 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:06:13.337838 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:06:13.337866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:06:13.337879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:06:13.337883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:06:13.337886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:06:13.338151 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:06:13.339610 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:18Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.196029 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1d28a-4ee8-45bb-9a86-3687d8db2105\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684f8ca14b2eabcbf9280085d495af9a6f1fa76fb187f15b8fa5659178efdc93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64382d1748f1eaeb9f2f09d243b35c56f9fd70b9a9765ddb64cb0c4866ed493b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://590bb419e3791ec3a267b9a5aa4022ee74ba5e302b047e85211563e135c4cf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b14bb77b0cf24bacdb8f5edae5d02ccea019b1
2feb315e6d6c4887feb1a9354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d10794bea5b8ea94b9247775e0c424fa9d481267c663d4bcae16eb4e7eb729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:18Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.213001 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a14bfcc7d6d2ab5e1ad0c45999b2e73332e750ed65d9299e188505dafb26d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:18Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.227409 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae2e22607dec646b15563dc62f66655e7a5c2b7ad809406c907e5f687f80105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e290e203fab8f291146855a960477ceb9c33d965d15b4722c668dc2b3fded3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T09:06:18Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.239008 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:18Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.278895 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.278922 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.278931 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.278942 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.278951 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:18Z","lastTransitionTime":"2026-01-22T09:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.380532 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.380560 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.380568 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.380580 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.380588 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:18Z","lastTransitionTime":"2026-01-22T09:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.482007 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.482036 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.482045 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.482056 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.482065 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:18Z","lastTransitionTime":"2026-01-22T09:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.583666 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.583716 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.583727 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.583739 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.583749 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:18Z","lastTransitionTime":"2026-01-22T09:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.634992 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-9g4j8"] Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.635404 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-9g4j8" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.636558 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-274vf"] Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.637429 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.637567 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.637726 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.637932 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.638064 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.638409 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-txvcq"] Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.638556 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.638587 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-kfqgt"] Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.638662 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.639303 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.642862 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.642881 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.643256 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.643281 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.643293 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.643388 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.645124 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.645277 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.645391 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.645492 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.645574 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.645550 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.648114 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.648888 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.666891 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1d28a-4ee8-45bb-9a86-3687d8db2105\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684f8ca14b2eabcbf9280085d495af9a6f1fa76fb187f15b8fa5659178efdc93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64382d1748f1eaeb9f2f09d243b35c56f9fd70b9a9765ddb64cb0c4866ed493b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://590bb419e3791ec3a267b9a5aa4022ee74ba5e302b047e85211563e135c4cf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b14bb77b0cf24bacdb8f5edae5d02ccea019b1
2feb315e6d6c4887feb1a9354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d10794bea5b8ea94b9247775e0c424fa9d481267c663d4bcae16eb4e7eb729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:18Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.676893 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a14bfcc7d6d2ab5e1ad0c45999b2e73332e750ed65d9299e188505dafb26d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:18Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.685397 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae2e22607dec646b15563dc62f66655e7a5c2b7ad809406c907e5f687f80105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e290e203fab8f291146855a960477ceb9c33d965d15b4722c668dc2b3fded3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T09:06:18Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.686136 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.686221 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.686294 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.686359 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.686411 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:18Z","lastTransitionTime":"2026-01-22T09:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.693163 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:18Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.701677 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9g4j8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9g4j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:18Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.702658 4811 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-22 09:01:17 +0000 UTC, rotation deadline is 2026-12-08 19:02:51.542733408 +0000 UTC Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.702702 4811 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7689h56m32.840032991s for next certificate 
rotation Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.711108 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b44c40-e6d4-4902-98e9-a259269d8bf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 09:06:13.075349 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 09:06:13.075574 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:06:13.077171 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744166093/tls.crt::/tmp/serving-cert-1744166093/tls.key\\\\\\\"\\\\nI0122 09:06:13.324400 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:06:13.330700 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:06:13.330724 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:06:13.330758 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:06:13.330771 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:06:13.337838 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:06:13.337866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:06:13.337879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:06:13.337883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:06:13.337886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:06:13.338151 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:06:13.339610 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:18Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.723271 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:18Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.731237 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:18Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.738900 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f776b1bc70646779cf35092ded02c00aa3b990c873de09d21e1f0d76258b0b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:18Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.745347 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xhhs7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42dc1a6-d0c4-43e4-b9d9-b40c1f910400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcfzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xhhs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:18Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.752856 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f2555861-d1bb-4f21-be4a-165ed9212932-multus-conf-dir\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.752884 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-env-overrides\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.752898 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f2555861-d1bb-4f21-be4a-165ed9212932-cni-binary-copy\") pod \"multus-kfqgt\" (UID: 
\"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.752912 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f2555861-d1bb-4f21-be4a-165ed9212932-host-var-lib-cni-bin\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.752926 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f2555861-d1bb-4f21-be4a-165ed9212932-hostroot\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.752956 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f2555861-d1bb-4f21-be4a-165ed9212932-host-run-multus-certs\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.753008 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f2555861-d1bb-4f21-be4a-165ed9212932-cnibin\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.753023 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-log-socket\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.753063 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f2555861-d1bb-4f21-be4a-165ed9212932-system-cni-dir\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.753077 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f2555861-d1bb-4f21-be4a-165ed9212932-os-release\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.753091 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f2555861-d1bb-4f21-be4a-165ed9212932-host-run-netns\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.753106 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt7cm\" (UniqueName: \"kubernetes.io/projected/f2555861-d1bb-4f21-be4a-165ed9212932-kube-api-access-jt7cm\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 
09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.753121 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-systemd-units\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.753136 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-run-systemd\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.753152 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-host-cni-netd\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.753167 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3d23c9c9-89ca-4db5-99dc-1e5b9f80be38-cni-binary-copy\") pod \"multus-additional-cni-plugins-9g4j8\" (UID: \"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\") " pod="openshift-multus/multus-additional-cni-plugins-9g4j8" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.753181 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f2555861-d1bb-4f21-be4a-165ed9212932-host-var-lib-cni-multus\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.753195 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-host-kubelet\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.753217 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-node-log\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.753232 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-ovn-node-metrics-cert\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.753247 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3d23c9c9-89ca-4db5-99dc-1e5b9f80be38-os-release\") pod \"multus-additional-cni-plugins-9g4j8\" (UID: 
\"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\") " pod="openshift-multus/multus-additional-cni-plugins-9g4j8" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.753272 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-host-run-ovn-kubernetes\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.753288 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-ovnkube-config\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.753319 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-etc-openvswitch\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.753336 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3d23c9c9-89ca-4db5-99dc-1e5b9f80be38-cnibin\") pod \"multus-additional-cni-plugins-9g4j8\" (UID: \"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\") " pod="openshift-multus/multus-additional-cni-plugins-9g4j8" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.753349 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f2555861-d1bb-4f21-be4a-165ed9212932-multus-daemon-config\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.753386 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/84068a6b-e189-419b-87f5-f31428f6eafe-rootfs\") pod \"machine-config-daemon-txvcq\" (UID: \"84068a6b-e189-419b-87f5-f31428f6eafe\") " pod="openshift-machine-config-operator/machine-config-daemon-txvcq" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.753425 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thzhp\" (UniqueName: \"kubernetes.io/projected/84068a6b-e189-419b-87f5-f31428f6eafe-kube-api-access-thzhp\") pod \"machine-config-daemon-txvcq\" (UID: \"84068a6b-e189-419b-87f5-f31428f6eafe\") " pod="openshift-machine-config-operator/machine-config-daemon-txvcq" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.753447 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3d23c9c9-89ca-4db5-99dc-1e5b9f80be38-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9g4j8\" (UID: \"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\") " pod="openshift-multus/multus-additional-cni-plugins-9g4j8" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.753463 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f2555861-d1bb-4f21-be4a-165ed9212932-multus-cni-dir\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.753478 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-host-slash\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.753504 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3d23c9c9-89ca-4db5-99dc-1e5b9f80be38-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9g4j8\" (UID: \"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\") " pod="openshift-multus/multus-additional-cni-plugins-9g4j8" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.753524 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f2555861-d1bb-4f21-be4a-165ed9212932-host-run-k8s-cni-cncf-io\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.753541 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/84068a6b-e189-419b-87f5-f31428f6eafe-mcd-auth-proxy-config\") pod \"machine-config-daemon-txvcq\" (UID: \"84068a6b-e189-419b-87f5-f31428f6eafe\") " pod="openshift-machine-config-operator/machine-config-daemon-txvcq" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.753555 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-run-openvswitch\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.753570 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-run-ovn\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.753583 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-ovnkube-script-lib\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.753617 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/84068a6b-e189-419b-87f5-f31428f6eafe-proxy-tls\") pod \"machine-config-daemon-txvcq\" (UID: \"84068a6b-e189-419b-87f5-f31428f6eafe\") " 
pod="openshift-machine-config-operator/machine-config-daemon-txvcq" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.753675 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-host-run-netns\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.753690 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3d23c9c9-89ca-4db5-99dc-1e5b9f80be38-system-cni-dir\") pod \"multus-additional-cni-plugins-9g4j8\" (UID: \"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\") " pod="openshift-multus/multus-additional-cni-plugins-9g4j8" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.753745 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwl4p\" (UniqueName: \"kubernetes.io/projected/3d23c9c9-89ca-4db5-99dc-1e5b9f80be38-kube-api-access-rwl4p\") pod \"multus-additional-cni-plugins-9g4j8\" (UID: \"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\") " pod="openshift-multus/multus-additional-cni-plugins-9g4j8" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.753769 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-var-lib-openvswitch\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.753783 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-host-cni-bin\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.753798 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.753813 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwxdh\" (UniqueName: \"kubernetes.io/projected/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-kube-api-access-vwxdh\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.753828 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f2555861-d1bb-4f21-be4a-165ed9212932-multus-socket-dir-parent\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.753841 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f2555861-d1bb-4f21-be4a-165ed9212932-host-var-lib-kubelet\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.753854 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f2555861-d1bb-4f21-be4a-165ed9212932-etc-kubernetes\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.754604 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b44c40-e6d4-4902-98e9-a259269d8bf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:
57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 09:06:13.075349 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 09:06:13.075574 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:06:13.077171 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744166093/tls.crt::/tmp/serving-cert-1744166093/tls.key\\\\\\\"\\\\nI0122 09:06:13.324400 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:06:13.330700 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:06:13.330724 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:06:13.330758 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:06:13.330771 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:06:13.337838 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:06:13.337866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:06:13.337879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:06:13.337883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:06:13.337886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:06:13.338151 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:06:13.339610 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:18Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.762192 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:18Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.769827 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:18Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.775974 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xhhs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42dc1a6-d0c4-43e4-b9d9-b40c1f910400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcfzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xhhs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:18Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.784710 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9g4j8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9g4j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:18Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.788125 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.788153 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.788171 4811 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.788182 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.788192 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:18Z","lastTransitionTime":"2026-01-22T09:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.792813 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84068a6b-e189-419b-87f5-f31428f6eafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-txvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:18Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.801604 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a14bfcc7d6d2ab5e1ad0c45999b2e73332e750ed65d9299e188505dafb26d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:18Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.809669 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae2e22607dec646b15563dc62f66655e7a5c2b7ad809406c907e5f687f80105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e290e203fab8f291146855a960477ceb9c33d965d15b4722c668dc2b3fded3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:18Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.822127 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-274vf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:18Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.829733 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f776b1bc70646779cf35092ded02c00aa3b990c873de09d21e1f0d76258b0b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:18Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.843314 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1d28a-4ee8-45bb-9a86-3687d8db2105\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684f8ca14b2eabcbf9280085d495af9a6f1fa76fb187f15b8fa5659178efdc93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64382d1748f1eaeb9f2f09d243b35c56f9fd70b9a9765ddb64cb0c4866ed493b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://590bb419e3791ec3a267b9a5aa4022ee74ba5e302b047e85211563e135c4cf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b14bb77b0cf24bacdb8f5edae5d02ccea019b1
2feb315e6d6c4887feb1a9354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d10794bea5b8ea94b9247775e0c424fa9d481267c663d4bcae16eb4e7eb729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:18Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.851384 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:18Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.855104 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-host-run-netns\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.855207 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3d23c9c9-89ca-4db5-99dc-1e5b9f80be38-system-cni-dir\") pod \"multus-additional-cni-plugins-9g4j8\" (UID: \"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\") " pod="openshift-multus/multus-additional-cni-plugins-9g4j8" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.855267 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3d23c9c9-89ca-4db5-99dc-1e5b9f80be38-system-cni-dir\") pod \"multus-additional-cni-plugins-9g4j8\" (UID: \"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\") " pod="openshift-multus/multus-additional-cni-plugins-9g4j8" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.855345 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwl4p\" (UniqueName: \"kubernetes.io/projected/3d23c9c9-89ca-4db5-99dc-1e5b9f80be38-kube-api-access-rwl4p\") pod \"multus-additional-cni-plugins-9g4j8\" (UID: \"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\") " pod="openshift-multus/multus-additional-cni-plugins-9g4j8" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.855433 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-var-lib-openvswitch\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.855503 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-var-lib-openvswitch\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.855505 4811 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-host-cni-bin\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.855548 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.855565 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwxdh\" (UniqueName: \"kubernetes.io/projected/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-kube-api-access-vwxdh\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.855583 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f2555861-d1bb-4f21-be4a-165ed9212932-multus-socket-dir-parent\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.855597 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f2555861-d1bb-4f21-be4a-165ed9212932-host-var-lib-kubelet\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.855614 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f2555861-d1bb-4f21-be4a-165ed9212932-etc-kubernetes\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.855595 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.855656 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f2555861-d1bb-4f21-be4a-165ed9212932-multus-conf-dir\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.855674 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-env-overrides\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.855688 4811 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f2555861-d1bb-4f21-be4a-165ed9212932-cni-binary-copy\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.855700 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f2555861-d1bb-4f21-be4a-165ed9212932-etc-kubernetes\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.855726 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f2555861-d1bb-4f21-be4a-165ed9212932-host-var-lib-cni-bin\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.855728 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f2555861-d1bb-4f21-be4a-165ed9212932-multus-socket-dir-parent\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.855709 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f2555861-d1bb-4f21-be4a-165ed9212932-host-var-lib-cni-bin\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.855222 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-host-run-netns\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.855767 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f2555861-d1bb-4f21-be4a-165ed9212932-hostroot\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.855769 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f2555861-d1bb-4f21-be4a-165ed9212932-multus-conf-dir\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.855790 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f2555861-d1bb-4f21-be4a-165ed9212932-host-run-multus-certs\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.855809 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f2555861-d1bb-4f21-be4a-165ed9212932-cnibin\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " 
pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.855811 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f2555861-d1bb-4f21-be4a-165ed9212932-hostroot\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.855824 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-log-socket\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.855842 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f2555861-d1bb-4f21-be4a-165ed9212932-system-cni-dir\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.855841 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f2555861-d1bb-4f21-be4a-165ed9212932-host-run-multus-certs\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.855855 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f2555861-d1bb-4f21-be4a-165ed9212932-os-release\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.855890 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f2555861-d1bb-4f21-be4a-165ed9212932-host-run-netns\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.855906 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt7cm\" (UniqueName: \"kubernetes.io/projected/f2555861-d1bb-4f21-be4a-165ed9212932-kube-api-access-jt7cm\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.855916 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f2555861-d1bb-4f21-be4a-165ed9212932-cnibin\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.855923 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-systemd-units\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.855939 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-log-socket\") pod 
\"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.855943 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-run-systemd\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.855957 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-host-cni-netd\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.855973 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3d23c9c9-89ca-4db5-99dc-1e5b9f80be38-cni-binary-copy\") pod \"multus-additional-cni-plugins-9g4j8\" (UID: \"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\") " pod="openshift-multus/multus-additional-cni-plugins-9g4j8" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.855987 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f2555861-d1bb-4f21-be4a-165ed9212932-host-var-lib-cni-multus\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.855998 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f2555861-d1bb-4f21-be4a-165ed9212932-system-cni-dir\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.856003 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-host-kubelet\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.856018 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-node-log\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.856019 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-systemd-units\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.856034 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-ovn-node-metrics-cert\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" 
Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.856041 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f2555861-d1bb-4f21-be4a-165ed9212932-host-run-netns\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.855891 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f2555861-d1bb-4f21-be4a-165ed9212932-os-release\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.856048 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3d23c9c9-89ca-4db5-99dc-1e5b9f80be38-os-release\") pod \"multus-additional-cni-plugins-9g4j8\" (UID: \"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\") " pod="openshift-multus/multus-additional-cni-plugins-9g4j8" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.856074 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-host-run-ovn-kubernetes\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.856088 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3d23c9c9-89ca-4db5-99dc-1e5b9f80be38-os-release\") pod \"multus-additional-cni-plugins-9g4j8\" (UID: \"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\") " pod="openshift-multus/multus-additional-cni-plugins-9g4j8" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.856090 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-ovnkube-config\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.856119 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-etc-openvswitch\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.856136 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3d23c9c9-89ca-4db5-99dc-1e5b9f80be38-cnibin\") pod \"multus-additional-cni-plugins-9g4j8\" (UID: \"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\") " pod="openshift-multus/multus-additional-cni-plugins-9g4j8" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.856153 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f2555861-d1bb-4f21-be4a-165ed9212932-multus-daemon-config\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.856167 4811 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/84068a6b-e189-419b-87f5-f31428f6eafe-rootfs\") pod \"machine-config-daemon-txvcq\" (UID: \"84068a6b-e189-419b-87f5-f31428f6eafe\") " pod="openshift-machine-config-operator/machine-config-daemon-txvcq" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.856181 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thzhp\" (UniqueName: \"kubernetes.io/projected/84068a6b-e189-419b-87f5-f31428f6eafe-kube-api-access-thzhp\") pod \"machine-config-daemon-txvcq\" (UID: \"84068a6b-e189-419b-87f5-f31428f6eafe\") " pod="openshift-machine-config-operator/machine-config-daemon-txvcq" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.856198 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3d23c9c9-89ca-4db5-99dc-1e5b9f80be38-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9g4j8\" (UID: \"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\") " pod="openshift-multus/multus-additional-cni-plugins-9g4j8" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.856210 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-env-overrides\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.856214 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f2555861-d1bb-4f21-be4a-165ed9212932-multus-cni-dir\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.856250 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-host-slash\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.856308 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3d23c9c9-89ca-4db5-99dc-1e5b9f80be38-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9g4j8\" (UID: \"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\") " pod="openshift-multus/multus-additional-cni-plugins-9g4j8" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.856326 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-run-systemd\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.856327 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f2555861-d1bb-4f21-be4a-165ed9212932-host-run-k8s-cni-cncf-io\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.856346 4811 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f2555861-d1bb-4f21-be4a-165ed9212932-host-run-k8s-cni-cncf-io\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.856352 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/84068a6b-e189-419b-87f5-f31428f6eafe-mcd-auth-proxy-config\") pod \"machine-config-daemon-txvcq\" (UID: \"84068a6b-e189-419b-87f5-f31428f6eafe\") " pod="openshift-machine-config-operator/machine-config-daemon-txvcq" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.856367 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-host-slash\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.856370 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-run-openvswitch\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.856386 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-run-ovn\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.856400 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-ovnkube-script-lib\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.856412 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/84068a6b-e189-419b-87f5-f31428f6eafe-proxy-tls\") pod \"machine-config-daemon-txvcq\" (UID: \"84068a6b-e189-419b-87f5-f31428f6eafe\") " pod="openshift-machine-config-operator/machine-config-daemon-txvcq" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.856310 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f2555861-d1bb-4f21-be4a-165ed9212932-multus-cni-dir\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.856525 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-host-run-ovn-kubernetes\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.856553 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/3d23c9c9-89ca-4db5-99dc-1e5b9f80be38-cnibin\") pod \"multus-additional-cni-plugins-9g4j8\" (UID: \"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\") " pod="openshift-multus/multus-additional-cni-plugins-9g4j8" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.856564 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-ovnkube-config\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.856573 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-host-cni-netd\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.856369 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f2555861-d1bb-4f21-be4a-165ed9212932-cni-binary-copy\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.856818 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3d23c9c9-89ca-4db5-99dc-1e5b9f80be38-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9g4j8\" (UID: \"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\") " pod="openshift-multus/multus-additional-cni-plugins-9g4j8" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.856849 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f2555861-d1bb-4f21-be4a-165ed9212932-host-var-lib-cni-multus\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.856889 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f2555861-d1bb-4f21-be4a-165ed9212932-host-var-lib-kubelet\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.856968 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-etc-openvswitch\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.856987 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-host-kubelet\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.857070 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3d23c9c9-89ca-4db5-99dc-1e5b9f80be38-cni-binary-copy\") pod \"multus-additional-cni-plugins-9g4j8\" (UID: 
\"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\") " pod="openshift-multus/multus-additional-cni-plugins-9g4j8" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.857103 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-run-ovn\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.857118 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-run-openvswitch\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.857201 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/84068a6b-e189-419b-87f5-f31428f6eafe-rootfs\") pod \"machine-config-daemon-txvcq\" (UID: \"84068a6b-e189-419b-87f5-f31428f6eafe\") " pod="openshift-machine-config-operator/machine-config-daemon-txvcq" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.857214 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/84068a6b-e189-419b-87f5-f31428f6eafe-mcd-auth-proxy-config\") pod \"machine-config-daemon-txvcq\" (UID: \"84068a6b-e189-419b-87f5-f31428f6eafe\") " pod="openshift-machine-config-operator/machine-config-daemon-txvcq" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.857292 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f2555861-d1bb-4f21-be4a-165ed9212932-multus-daemon-config\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.857330 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-node-log\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.857516 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-host-cni-bin\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.857674 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3d23c9c9-89ca-4db5-99dc-1e5b9f80be38-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9g4j8\" (UID: \"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\") " pod="openshift-multus/multus-additional-cni-plugins-9g4j8" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.857683 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-ovnkube-script-lib\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 
22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.859536 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/84068a6b-e189-419b-87f5-f31428f6eafe-proxy-tls\") pod \"machine-config-daemon-txvcq\" (UID: \"84068a6b-e189-419b-87f5-f31428f6eafe\") " pod="openshift-machine-config-operator/machine-config-daemon-txvcq" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.860976 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-ovn-node-metrics-cert\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.868524 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfqgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2555861-d1bb-4f21-be4a-165ed9212932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jt7cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfqgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:18Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.870895 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwxdh\" (UniqueName: \"kubernetes.io/projected/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-kube-api-access-vwxdh\") pod \"ovnkube-node-274vf\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.872005 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt7cm\" (UniqueName: \"kubernetes.io/projected/f2555861-d1bb-4f21-be4a-165ed9212932-kube-api-access-jt7cm\") pod \"multus-kfqgt\" (UID: \"f2555861-d1bb-4f21-be4a-165ed9212932\") " pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 
09:06:18.872648 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwl4p\" (UniqueName: \"kubernetes.io/projected/3d23c9c9-89ca-4db5-99dc-1e5b9f80be38-kube-api-access-rwl4p\") pod \"multus-additional-cni-plugins-9g4j8\" (UID: \"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\") " pod="openshift-multus/multus-additional-cni-plugins-9g4j8" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.874101 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thzhp\" (UniqueName: \"kubernetes.io/projected/84068a6b-e189-419b-87f5-f31428f6eafe-kube-api-access-thzhp\") pod \"machine-config-daemon-txvcq\" (UID: \"84068a6b-e189-419b-87f5-f31428f6eafe\") " pod="openshift-machine-config-operator/machine-config-daemon-txvcq" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.890721 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.890756 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.890766 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.890778 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.890787 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:18Z","lastTransitionTime":"2026-01-22T09:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.946729 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-9g4j8" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.952465 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.960303 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" Jan 22 09:06:18 crc kubenswrapper[4811]: W0122 09:06:18.961121 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d23c9c9_89ca_4db5_99dc_1e5b9f80be38.slice/crio-5d4f6c2d54b22c4646c4b93b5586f1cea3566c37eaddc281f373918296932747 WatchSource:0}: Error finding container 5d4f6c2d54b22c4646c4b93b5586f1cea3566c37eaddc281f373918296932747: Status 404 returned error can't find the container with id 5d4f6c2d54b22c4646c4b93b5586f1cea3566c37eaddc281f373918296932747 Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.974415 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 00:27:46.41984947 +0000 UTC Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.974590 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-kfqgt" Jan 22 09:06:18 crc kubenswrapper[4811]: W0122 09:06:18.978911 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1cd0f0db_de53_47c0_9b45_2ce8b37392a3.slice/crio-9a8029475e24f2282a3c68e7ce56df83c50df1ab181a0cd417f5c4db3884084f WatchSource:0}: Error finding container 9a8029475e24f2282a3c68e7ce56df83c50df1ab181a0cd417f5c4db3884084f: Status 404 returned error can't find the container with id 9a8029475e24f2282a3c68e7ce56df83c50df1ab181a0cd417f5c4db3884084f Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.993239 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.993266 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.993275 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.993285 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:18 crc kubenswrapper[4811]: I0122 09:06:18.993293 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:18Z","lastTransitionTime":"2026-01-22T09:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:19 crc kubenswrapper[4811]: W0122 09:06:19.006018 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2555861_d1bb_4f21_be4a_165ed9212932.slice/crio-43693a5f2f63904368967d6025803e2e78747a45d5e0c501cb4b1b3c08d2b815 WatchSource:0}: Error finding container 43693a5f2f63904368967d6025803e2e78747a45d5e0c501cb4b1b3c08d2b815: Status 404 returned error can't find the container with id 43693a5f2f63904368967d6025803e2e78747a45d5e0c501cb4b1b3c08d2b815 Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.095196 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.095231 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.095240 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.095252 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.095707 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:19Z","lastTransitionTime":"2026-01-22T09:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.095980 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-xhhs7" event={"ID":"f42dc1a6-d0c4-43e4-b9d9-b40c1f910400","Type":"ContainerStarted","Data":"a64d7f1d53f9a3d9616b76ea12902134c5e0a3a30c9a61113675074caae70d58"} Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.100649 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9g4j8" event={"ID":"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38","Type":"ContainerStarted","Data":"5d4f6c2d54b22c4646c4b93b5586f1cea3566c37eaddc281f373918296932747"} Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.102406 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kfqgt" event={"ID":"f2555861-d1bb-4f21-be4a-165ed9212932","Type":"ContainerStarted","Data":"6ececfc4b2e4f2a120e7a3307235d606db00733e1a5631b6e19b5312afc8e8ff"} Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.102439 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kfqgt" event={"ID":"f2555861-d1bb-4f21-be4a-165ed9212932","Type":"ContainerStarted","Data":"43693a5f2f63904368967d6025803e2e78747a45d5e0c501cb4b1b3c08d2b815"} Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.104875 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" event={"ID":"84068a6b-e189-419b-87f5-f31428f6eafe","Type":"ContainerStarted","Data":"bbb067d04c1683ffd06d01aab41bb5988fdca44b8bd16012d35c42795d0d8011"} Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.104907 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" event={"ID":"84068a6b-e189-419b-87f5-f31428f6eafe","Type":"ContainerStarted","Data":"e653dcde9bb5602616c9cce14ac6a91aa81e4dfa8e9211e4e858d25294c9f7e3"} Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.105926 4811 generic.go:334] "Generic (PLEG): container finished" podID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerID="523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a" exitCode=0 Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.106345 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" event={"ID":"1cd0f0db-de53-47c0-9b45-2ce8b37392a3","Type":"ContainerDied","Data":"523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a"} Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.106644 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" event={"ID":"1cd0f0db-de53-47c0-9b45-2ce8b37392a3","Type":"ContainerStarted","Data":"9a8029475e24f2282a3c68e7ce56df83c50df1ab181a0cd417f5c4db3884084f"} Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.109840 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-274vf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:19Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.119547 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a14bfcc7d6d2ab5e1ad0c45999b2e73332e750ed65d9299e188505dafb26d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:19Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.132351 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae2e22607dec646b15563dc62f66655e7a5c2b7ad809406c907e5f687f80105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e290e203fab8f291146855a960477ceb9c33d965d15b4722c668dc2b3fded3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:19Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.142854 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f776b1bc70646779cf35092ded02c00aa3b990c873de09d21e1f0d76258b0b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:19Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.160230 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1d28a-4ee8-45bb-9a86-3687d8db2105\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684f8ca14b2eabcbf9280085d495af9a6f1fa76fb187f15b8fa5659178efdc93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64382d1748f1eaeb9f2f09d243b35c56f9fd70b9a9765ddb64cb0c4866ed493b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://590bb419e3791ec3a267b9a5aa4022ee74ba5e302b047e85211563e135c4cf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b14bb77b0cf24bacdb8f5edae5d02ccea019b1
2feb315e6d6c4887feb1a9354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d10794bea5b8ea94b9247775e0c424fa9d481267c663d4bcae16eb4e7eb729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:19Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.174785 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:19Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.185055 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfqgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2555861-d1bb-4f21-be4a-165ed9212932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jt7cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfqgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:19Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.198437 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.198471 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.198480 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.198494 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.198502 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:19Z","lastTransitionTime":"2026-01-22T09:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.199842 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9g4j8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9g4j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:19Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.209682 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84068a6b-e189-419b-87f5-f31428f6eafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-txvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:19Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.219837 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b44c40-e6d4-4902-98e9-a259269d8bf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 09:06:13.075349 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 09:06:13.075574 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:06:13.077171 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744166093/tls.crt::/tmp/serving-cert-1744166093/tls.key\\\\\\\"\\\\nI0122 09:06:13.324400 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:06:13.330700 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:06:13.330724 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:06:13.330758 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:06:13.330771 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:06:13.337838 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:06:13.337866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:06:13.337879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:06:13.337883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:06:13.337886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:06:13.338151 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:06:13.339610 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:19Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.229969 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:19Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.246305 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:19Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.253840 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xhhs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42dc1a6-d0c4-43e4-b9d9-b40c1f910400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64d7f1d53f9a3d9616b76ea12902134c5e0a3a30c9a61113675074caae70d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcfzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xhhs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T09:06:19Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.264081 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9g4j8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\
\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9g4j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:19Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.272919 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84068a6b-e189-419b-87f5-f31428f6eafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-txvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:19Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.289034 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b44c40-e6d4-4902-98e9-a259269d8bf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 09:06:13.075349 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 09:06:13.075574 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:06:13.077171 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744166093/tls.crt::/tmp/serving-cert-1744166093/tls.key\\\\\\\"\\\\nI0122 09:06:13.324400 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:06:13.330700 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:06:13.330724 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:06:13.330758 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:06:13.330771 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:06:13.337838 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:06:13.337866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:06:13.337879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:06:13.337883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:06:13.337886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:06:13.338151 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:06:13.339610 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:19Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.300037 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.300066 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.300074 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.300086 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.300095 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:19Z","lastTransitionTime":"2026-01-22T09:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.300604 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:19Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.310504 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:19Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.317907 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xhhs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42dc1a6-d0c4-43e4-b9d9-b40c1f910400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64d7f1d53f9a3d9616b76ea12902134c5e0a3a30c9a61113675074caae70d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcfzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xhhs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T09:06:19Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.331881 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e
9b2155d4017253d68998cb85d3e7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-274vf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:19Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.340982 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a14bfcc7d6d2ab5e1ad0c45999b2e73332e750ed65d9299e188505dafb26d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:19Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.350287 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae2e22607dec646b15563dc62f66655e7a5c2b7ad809406c907e5f687f80105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e290e203fab8f291146855a960477ceb9c33d965d15b4722c668dc2b3fded3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T09:06:19Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.359047 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f776b1bc70646779cf35092ded02c00aa3b990c873de09d21e1f0d76258b0b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:19Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.374582 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1d28a-4ee8-45bb-9a86-3687d8db2105\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684f8ca14b2eabcbf9280085d495af9a6f1fa76fb187f15b8fa5659178efdc93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64382d1748f1eaeb9f2f09d243b35c56f9fd70b9a9765ddb64cb0c4866ed493b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://590bb419e3791ec3a267b9a5aa4022ee74ba5e302b047e85211563e135c4cf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b14bb77b0cf24bacdb8f5edae5d02ccea019b1
2feb315e6d6c4887feb1a9354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d10794bea5b8ea94b9247775e0c424fa9d481267c663d4bcae16eb4e7eb729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:19Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.392238 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:19Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.401354 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.401386 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.401396 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.401410 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.401419 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:19Z","lastTransitionTime":"2026-01-22T09:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.402519 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfqgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2555861-d1bb-4f21-be4a-165ed9212932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ececfc4b2e4f2a120e7a3307235d606db00733e1a5631b6e19b5312afc8e8ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jt7cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfqgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:19Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.503165 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.503202 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.503210 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.503223 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.503232 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:19Z","lastTransitionTime":"2026-01-22T09:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.604909 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.604945 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.604954 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.604967 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.604975 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:19Z","lastTransitionTime":"2026-01-22T09:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.706269 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.706308 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.706316 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.706329 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.706337 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:19Z","lastTransitionTime":"2026-01-22T09:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.807932 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.807969 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.807980 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.807993 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.808003 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:19Z","lastTransitionTime":"2026-01-22T09:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.910420 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.910587 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.910595 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.910607 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.910616 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:19Z","lastTransitionTime":"2026-01-22T09:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.975288 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 14:28:36.571500268 +0000 UTC Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.991788 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.991807 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:06:19 crc kubenswrapper[4811]: I0122 09:06:19.991822 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:06:19 crc kubenswrapper[4811]: E0122 09:06:19.991903 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:06:19 crc kubenswrapper[4811]: E0122 09:06:19.991994 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:06:19 crc kubenswrapper[4811]: E0122 09:06:19.992053 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.012409 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.012440 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.012449 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.012459 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.012467 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:20Z","lastTransitionTime":"2026-01-22T09:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.110936 4811 generic.go:334] "Generic (PLEG): container finished" podID="3d23c9c9-89ca-4db5-99dc-1e5b9f80be38" containerID="e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb" exitCode=0 Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.111038 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9g4j8" event={"ID":"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38","Type":"ContainerDied","Data":"e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb"} Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.114093 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" event={"ID":"84068a6b-e189-419b-87f5-f31428f6eafe","Type":"ContainerStarted","Data":"cde8418c9f22553519ec0a9ba1ddcdd553f64aafb44fa002e924499701236bca"} Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.116340 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.116371 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.116380 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.116392 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.116400 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:20Z","lastTransitionTime":"2026-01-22T09:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.120449 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" event={"ID":"1cd0f0db-de53-47c0-9b45-2ce8b37392a3","Type":"ContainerStarted","Data":"c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451"} Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.120481 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" event={"ID":"1cd0f0db-de53-47c0-9b45-2ce8b37392a3","Type":"ContainerStarted","Data":"8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799"} Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.120492 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" event={"ID":"1cd0f0db-de53-47c0-9b45-2ce8b37392a3","Type":"ContainerStarted","Data":"e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84"} Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.120510 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" event={"ID":"1cd0f0db-de53-47c0-9b45-2ce8b37392a3","Type":"ContainerStarted","Data":"b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad"} Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.120517 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" event={"ID":"1cd0f0db-de53-47c0-9b45-2ce8b37392a3","Type":"ContainerStarted","Data":"9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597"} Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.120525 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" event={"ID":"1cd0f0db-de53-47c0-9b45-2ce8b37392a3","Type":"ContainerStarted","Data":"55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2"} Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.126751 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f776b1bc70646779cf35092ded02c00aa3b990c873de09d21e1f0d76258b0b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:20Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.141447 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1d28a-4ee8-45bb-9a86-3687d8db2105\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684f8ca14b2eabcbf9280085d495af9a6f1fa76fb187f15b8fa5659178efdc93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64382d1748f1eaeb9f2f09d243b35c56f9fd70b9a9765ddb64cb0c4866ed493b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://590bb419e3791ec3a267b9a5aa4022ee74ba5e302b047e85211563e135c4cf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b14bb77b0cf24bacdb8f5edae5d02ccea019b1
2feb315e6d6c4887feb1a9354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d10794bea5b8ea94b9247775e0c424fa9d481267c663d4bcae16eb4e7eb729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:20Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.151123 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:20Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.162363 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfqgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2555861-d1bb-4f21-be4a-165ed9212932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ececfc4b2e4f2a120e7a3307235d606db00733e1a5631b6e19b5312afc8e8ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jt7cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfqgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:20Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.171386 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xhhs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42dc1a6-d0c4-43e4-b9d9-b40c1f910400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64d7f1d53f9a3d9616b76ea12902134c5e0a3a30c9a61113675074caae70d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcfzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xhhs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:20Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.181427 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9g4j8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9g4j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:20Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:20 crc 
kubenswrapper[4811]: I0122 09:06:20.189314 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84068a6b-e189-419b-87f5-f31428f6eafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-txvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:20Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.198561 4811 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b44c40-e6d4-4902-98e9-a259269d8bf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 09:06:13.075349 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 09:06:13.075574 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:06:13.077171 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744166093/tls.crt::/tmp/serving-cert-1744166093/tls.key\\\\\\\"\\\\nI0122 09:06:13.324400 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:06:13.330700 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:06:13.330724 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:06:13.330758 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:06:13.330771 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:06:13.337838 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:06:13.337866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:06:13.337879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:06:13.337883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:06:13.337886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:06:13.338151 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:06:13.339610 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:20Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.205966 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:20Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.215154 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:20Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.217955 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.217983 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.217992 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.218005 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.218013 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:20Z","lastTransitionTime":"2026-01-22T09:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.223068 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae2e22607dec646b15563dc62f66655e7a5c2b7ad809406c907e5f687f80105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e290e203fab8f291146855a960477ceb9c33d965d15b4722c668dc2b3fded3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:20Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.238448 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-274vf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:20Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.250015 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a14bfcc7d6d2ab5e1ad0c45999b2e73332e750ed65d9299e188505dafb26d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T09:06:20Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.264032 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a14bfcc7d6d2ab5e1ad0c45999b2e73332e750ed65d9299e188505dafb26d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:20Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.273491 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae2e22607dec646b15563dc62f66655e7a5c2b7ad809406c907e5f687f80105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e290e203fab8f291146855a960477ceb9c33d965d15b4722c668dc2b3fded3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:20Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.286047 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-274vf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:20Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.293942 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f776b1bc70646779cf35092ded02c00aa3b990c873de09d21e1f0d76258b0b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:20Z is after 
2025-08-24T17:21:41Z" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.301718 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:20Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.308979 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.311061 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfqgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2555861-d1bb-4f21-be4a-165ed9212932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ececfc4b2e4f2a120e7a3307235d606db00733e1a5631b6e19b5312afc8e8ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jt7cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfqgt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:20Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.311638 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.315116 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.320181 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.320216 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.320225 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.320239 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.320247 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:20Z","lastTransitionTime":"2026-01-22T09:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.325980 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1d28a-4ee8-45bb-9a86-3687d8db2105\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684f8ca14b2eabcbf9280085d495af9a6f1fa76fb187f15b8fa5659178efdc93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64382d1748f1eaeb9f2f09d243b35c56f9fd70b9a9765ddb64cb0c4866ed493b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://590bb419e3791ec3a267b9a5aa4022ee74ba5e302b047e85211563e135c4cf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b14bb77b0cf24bacdb8f5edae5d02ccea019b12feb315e6d6c4887feb1a9354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d10794bea5b8ea94b9247775e0c424fa9d481267c663d4bcae16eb4e7eb729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:20Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.335284 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:20Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.343336 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:20Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.351072 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xhhs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42dc1a6-d0c4-43e4-b9d9-b40c1f910400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64d7f1d53f9a3d9616b76ea12902134c5e0a3a30c9a61113675074caae70d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcfzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xhhs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T09:06:20Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.360410 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9g4j8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9g4j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:20Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.367367 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84068a6b-e189-419b-87f5-f31428f6eafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde8418c9f22553519ec0a9ba1ddcdd553f64aafb44fa002e924499701236bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-
api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbb067d04c1683ffd06d01aab41bb5988fdca44b8bd16012d35c42795d0d8011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-txvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:20Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.375869 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b44c40-e6d4-4902-98e9-a259269d8bf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 09:06:13.075349 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 09:06:13.075574 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:06:13.077171 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744166093/tls.crt::/tmp/serving-cert-1744166093/tls.key\\\\\\\"\\\\nI0122 09:06:13.324400 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:06:13.330700 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:06:13.330724 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:06:13.330758 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:06:13.330771 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:06:13.337838 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:06:13.337866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:06:13.337879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:06:13.337883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:06:13.337886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:06:13.338151 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:06:13.339610 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:20Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.400537 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1d28a-4ee8-45bb-9a86-3687d8db2105\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684f8ca14b2eabcbf9280085d495af9a6f1fa76fb187f15b8fa5659178efdc93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64382d1748f1eaeb9f2f09d243b35c56f9fd70b9a9765ddb64cb0c4866ed493b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://590bb419e3791ec3a267b9a5aa4022ee74ba5e302b047e85211563e135c4cf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b14bb77b0cf24bacdb8f5edae5d02ccea019b1
2feb315e6d6c4887feb1a9354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d10794bea5b8ea94b9247775e0c424fa9d481267c663d4bcae16eb4e7eb729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:20Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.422018 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.422146 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.422206 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.422274 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.422360 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:20Z","lastTransitionTime":"2026-01-22T09:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.437656 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:20Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.474245 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfqgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2555861-d1bb-4f21-be4a-165ed9212932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ececfc4b2e4f2a120e7a3307235d606db00733e1a5631b6e19b5312afc8e8ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jt7cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfqgt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:20Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.514200 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b44c40-e6d4-4902-98e9-a259269d8bf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 09:06:13.075349 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 09:06:13.075574 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:06:13.077171 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744166093/tls.crt::/tmp/serving-cert-1744166093/tls.key\\\\\\\"\\\\nI0122 09:06:13.324400 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:06:13.330700 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:06:13.330724 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:06:13.330758 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:06:13.330771 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:06:13.337838 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:06:13.337866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:06:13.337879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:06:13.337883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:06:13.337886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:06:13.338151 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:06:13.339610 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:20Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.524526 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.524562 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.524573 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.524585 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.524594 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:20Z","lastTransitionTime":"2026-01-22T09:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.553572 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:20Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.615923 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:20Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.626463 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.626496 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.626506 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.626520 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.626528 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:20Z","lastTransitionTime":"2026-01-22T09:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.643895 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xhhs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42dc1a6-d0c4-43e4-b9d9-b40c1f910400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64d7f1d53f9a3d9616b76ea12902134c5e0a3a30c9a61113675074caae70d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcfzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xhhs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:20Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.678987 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9g4j8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"
tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9g4j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:20Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.714518 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84068a6b-e189-419b-87f5-f31428f6eafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde8418c9f22553519ec0a9ba1ddcdd553f64aafb44fa002e924499701236bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbb067d04c1683ffd06d01aab41bb5988fdca44b8bd16012d35c42795d0d8011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-txvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:20Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.728209 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.728242 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.728252 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.728263 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.728274 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:20Z","lastTransitionTime":"2026-01-22T09:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.754276 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5dbe63-08fa-4157-9c52-003944e50d7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3bf6cdac4751814ecd71637d9e80cbdda1c0e99714275fe77cfa858b3823b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4419625f66ae957b2d9b146fc8b085a4a95c3f47860d6b4847f8fe71a6c4c86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7830976d23e073a099c3c6af1fa2392d9504ad9d41ddf2ee63c16632c22f833a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45fa5d37dcd1dcae1a7488bb855e4bd7c7dc39a1199fe80b82f83604be089f2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:20Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.796566 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a14bfcc7d6d2ab5e1ad0c45999b2e73332e750ed65d9299e188505dafb26d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:20Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.797585 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-n2kj4"] Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.797908 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-n2kj4" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.828384 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.829839 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.829866 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.829875 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.829888 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.829896 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:20Z","lastTransitionTime":"2026-01-22T09:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.848385 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.868693 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.888091 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.916089 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae2e22607dec646b15563dc62f66655e7a5c2b7ad809406c907e5f687f80105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e290e203fab8f291146855a960477ceb9c33d965d15b4722c668dc2b3fded3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:20Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.934130 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.934174 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.934188 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.934203 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.934214 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:20Z","lastTransitionTime":"2026-01-22T09:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.961409 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-274vf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:20Z 
is after 2025-08-24T17:21:41Z" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.975870 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 05:15:44.568431985 +0000 UTC Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.978224 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z47mj\" (UniqueName: \"kubernetes.io/projected/66a7f3f9-ab88-4b1a-a28a-900480fa9651-kube-api-access-z47mj\") pod \"node-ca-n2kj4\" (UID: \"66a7f3f9-ab88-4b1a-a28a-900480fa9651\") " pod="openshift-image-registry/node-ca-n2kj4" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.978254 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/66a7f3f9-ab88-4b1a-a28a-900480fa9651-serviceca\") pod \"node-ca-n2kj4\" (UID: \"66a7f3f9-ab88-4b1a-a28a-900480fa9651\") " pod="openshift-image-registry/node-ca-n2kj4" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.978293 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/66a7f3f9-ab88-4b1a-a28a-900480fa9651-host\") pod \"node-ca-n2kj4\" (UID: \"66a7f3f9-ab88-4b1a-a28a-900480fa9651\") " pod="openshift-image-registry/node-ca-n2kj4" Jan 22 09:06:20 crc kubenswrapper[4811]: I0122 09:06:20.994577 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f776b1bc70646779cf35092ded02c00aa3b990c873de09d21e1f0d76258b0b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:20Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.035613 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:21Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.036124 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.036152 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.036160 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.036171 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.036180 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:21Z","lastTransitionTime":"2026-01-22T09:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.075414 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfqgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2555861-d1bb-4f21-be4a-165ed9212932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ececfc4b2e4f2a120e7a3307235d606db00733e1a5631b6e19b5312afc8e8ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jt7cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfqgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:21Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.078659 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z47mj\" (UniqueName: \"kubernetes.io/projected/66a7f3f9-ab88-4b1a-a28a-900480fa9651-kube-api-access-z47mj\") pod \"node-ca-n2kj4\" (UID: \"66a7f3f9-ab88-4b1a-a28a-900480fa9651\") " pod="openshift-image-registry/node-ca-n2kj4" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.078690 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/66a7f3f9-ab88-4b1a-a28a-900480fa9651-serviceca\") pod \"node-ca-n2kj4\" (UID: \"66a7f3f9-ab88-4b1a-a28a-900480fa9651\") " pod="openshift-image-registry/node-ca-n2kj4" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.078742 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/66a7f3f9-ab88-4b1a-a28a-900480fa9651-host\") pod \"node-ca-n2kj4\" (UID: \"66a7f3f9-ab88-4b1a-a28a-900480fa9651\") " pod="openshift-image-registry/node-ca-n2kj4" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.078810 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/66a7f3f9-ab88-4b1a-a28a-900480fa9651-host\") pod \"node-ca-n2kj4\" (UID: \"66a7f3f9-ab88-4b1a-a28a-900480fa9651\") " pod="openshift-image-registry/node-ca-n2kj4" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.079539 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/66a7f3f9-ab88-4b1a-a28a-900480fa9651-serviceca\") pod \"node-ca-n2kj4\" (UID: \"66a7f3f9-ab88-4b1a-a28a-900480fa9651\") " pod="openshift-image-registry/node-ca-n2kj4" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.123499 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z47mj\" (UniqueName: \"kubernetes.io/projected/66a7f3f9-ab88-4b1a-a28a-900480fa9651-kube-api-access-z47mj\") pod \"node-ca-n2kj4\" (UID: \"66a7f3f9-ab88-4b1a-a28a-900480fa9651\") " pod="openshift-image-registry/node-ca-n2kj4" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.128324 4811 generic.go:334] "Generic (PLEG): container finished" podID="3d23c9c9-89ca-4db5-99dc-1e5b9f80be38" containerID="6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c" exitCode=0 Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.128590 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9g4j8" 
event={"ID":"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38","Type":"ContainerDied","Data":"6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c"} Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.137931 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.137964 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.137973 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.137986 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.137996 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:21Z","lastTransitionTime":"2026-01-22T09:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.141683 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1d28a-4ee8-45bb-9a86-3687d8db2105\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684f8ca14b2eabcbf9280085d495af9a6f1fa76fb187f15b8fa5659178efdc93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64382d1748f1eaeb9f2f09d243b35c56f9fd70b9a9765ddb64cb0c4866ed493b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://590bb419e3791ec3a267b9a5aa4022ee74ba5e302b047e85211563e135c4cf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b14bb77b0cf24bacdb8f5edae5d02ccea019b12feb315e6d6c4887feb1a9354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d10794bea5b8ea94b9247775e0c424fa9d481267c663d4bcae16eb4e7eb729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:21Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.176789 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:21Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.216807 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:21Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.239546 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.239609 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.239619 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.239657 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.239666 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:21Z","lastTransitionTime":"2026-01-22T09:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.253414 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xhhs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42dc1a6-d0c4-43e4-b9d9-b40c1f910400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64d7f1d53f9a3d9616b76ea12902134c5e0a3a30c9a61113675074caae70d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcfzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xhhs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:21Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.295227 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9g4j8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9g4j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:21Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.334186 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84068a6b-e189-419b-87f5-f31428f6eafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde8418c9f22553519ec0a9ba1ddcdd553f64aafb44fa002e924499701236bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbb067d04c1683ffd06d01aab41bb5988fdca44b8bd16012d35c42795d0d8011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-txvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:21Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.341527 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.341560 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.341570 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.341586 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.341598 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:21Z","lastTransitionTime":"2026-01-22T09:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.378359 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b44c40-e6d4-4902-98e9-a259269d8bf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 09:06:13.075349 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 09:06:13.075574 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:06:13.077171 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744166093/tls.crt::/tmp/serving-cert-1744166093/tls.key\\\\\\\"\\\\nI0122 09:06:13.324400 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:06:13.330700 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:06:13.330724 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:06:13.330758 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:06:13.330771 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:06:13.337838 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:06:13.337866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:06:13.337879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:06:13.337883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:06:13.337886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:06:13.338151 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:06:13.339610 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:21Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.410070 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-n2kj4" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.417123 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5dbe63-08fa-4157-9c52-003944e50d7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3bf6cdac4751814ecd71637d9e80cbdda1c0e99714275fe77cfa858b3823b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4419625f66ae957b2d9b146fc8b085a4a95c3f47860d6b4847f8fe71a6c4c86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7830976d23e073a099c3c6af1fa2392d9504ad9d41ddf2ee63c16632c22f833a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\
"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45fa5d37dcd1dcae1a7488bb855e4bd7c7dc39a1199fe80b82f83604be089f2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:21Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:21 crc kubenswrapper[4811]: W0122 09:06:21.420380 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a7f3f9_ab88_4b1a_a28a_900480fa9651.slice/crio-3eada178e76edddd50078a528afc1855807d4374c23352a3aa2a0ac8459b78a3 WatchSource:0}: Error finding container 3eada178e76edddd50078a528afc1855807d4374c23352a3aa2a0ac8459b78a3: Status 404 returned error can't find the container with id 3eada178e76edddd50078a528afc1855807d4374c23352a3aa2a0ac8459b78a3 Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.443614 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.443884 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.443894 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.443909 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.443918 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:21Z","lastTransitionTime":"2026-01-22T09:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.461967 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a14bfcc7d6d2ab5e1ad0c45999b2e73332e750ed65d9299e188505dafb26d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:21Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.494784 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae2e22607dec646b15563dc62f66655e7a5c2b7ad809406c907e5f687f80105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e290e203fab8f291146855a960477ceb9c33d965d15b4722c668dc2b3fded3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:21Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.542117 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-274vf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:21Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.545358 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.545389 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.545398 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.545410 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.545418 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:21Z","lastTransitionTime":"2026-01-22T09:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.577832 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n2kj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66a7f3f9-ab88-4b1a-a28a-900480fa9651\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47mj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n2kj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:21Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.582292 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.582325 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:06:21 crc kubenswrapper[4811]: E0122 09:06:21.582417 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 09:06:21 crc kubenswrapper[4811]: E0122 09:06:21.582436 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 09:06:21 crc kubenswrapper[4811]: E0122 09:06:21.582446 4811 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:06:21 crc kubenswrapper[4811]: E0122 09:06:21.582458 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 09:06:21 crc kubenswrapper[4811]: E0122 09:06:21.582483 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 09:06:21 crc kubenswrapper[4811]: E0122 09:06:21.582486 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 09:06:29.582476881 +0000 UTC m=+33.904664003 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:06:21 crc kubenswrapper[4811]: E0122 09:06:21.582495 4811 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:06:21 crc kubenswrapper[4811]: E0122 09:06:21.582531 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 09:06:29.58251925 +0000 UTC m=+33.904706373 (durationBeforeRetry 8s). 
Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.614754 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f776b1bc70646779cf35092ded02c00aa3b990c873de09d21e1f0d76258b0b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:21Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.646877 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.646914 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.646922 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.646936 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.646945 4811 setters.go:603] "Node became not ready" node="crc"
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:21Z","lastTransitionTime":"2026-01-22T09:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.657440 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9g4j8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9g4j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:21Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.696734 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84068a6b-e189-419b-87f5-f31428f6eafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde8418c9f22553519ec0a9ba1ddcdd553f64aafb44fa002e924499701236bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbb067d04c1683ffd06d01aab41bb5988fdca44b8bd16012d35c42795d0d8011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\"
:\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-txvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:21Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.736103 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b44c40-e6d4-4902-98e9-a259269d8bf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 09:06:13.075349 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 09:06:13.075574 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:06:13.077171 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744166093/tls.crt::/tmp/serving-cert-1744166093/tls.key\\\\\\\"\\\\nI0122 09:06:13.324400 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:06:13.330700 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:06:13.330724 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:06:13.330758 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:06:13.330771 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:06:13.337838 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:06:13.337866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:06:13.337879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:06:13.337883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:06:13.337886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:06:13.338151 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:06:13.339610 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:21Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.752753 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.752795 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.752805 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.752819 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.752828 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:21Z","lastTransitionTime":"2026-01-22T09:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.774672 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:21Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.783893 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.783971 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.783999 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:06:21 crc kubenswrapper[4811]: E0122 09:06:21.784026 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:06:29.784001079 +0000 UTC m=+34.106188212 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:06:21 crc kubenswrapper[4811]: E0122 09:06:21.784090 4811 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 09:06:21 crc kubenswrapper[4811]: E0122 09:06:21.784113 4811 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 09:06:21 crc kubenswrapper[4811]: E0122 09:06:21.784140 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 09:06:29.78412916 +0000 UTC m=+34.106316283 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 09:06:21 crc kubenswrapper[4811]: E0122 09:06:21.784169 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 09:06:29.784154047 +0000 UTC m=+34.106341170 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.814482 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:21Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.853803 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xhhs7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42dc1a6-d0c4-43e4-b9d9-b40c1f910400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64d7f1d53f9a3d9616b76ea12902134c5e0a3a30c9a61113675074caae70d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcfzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xhhs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:21Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.854816 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.854842 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.854852 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.854863 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.854871 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:21Z","lastTransitionTime":"2026-01-22T09:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.899455 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"
ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"c
ri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-274vf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:21Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.933746 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n2kj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66a7f3f9-ab88-4b1a-a28a-900480fa9651\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47mj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n2kj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:21Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.957287 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.957321 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.957331 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.957346 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.957355 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:21Z","lastTransitionTime":"2026-01-22T09:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.973590 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5dbe63-08fa-4157-9c52-003944e50d7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3bf6cdac4751814ecd71637d9e80cbdda1c0e99714275fe77cfa858b3823b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4419625f66ae957b2d9b146fc8b085a4a95c3f47860d6b4847f8fe71a6c4c86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7830976d23e073a099c3c6af1fa2392d9504ad9d41ddf2ee63c16632c22f833a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45fa5d37dcd1dcae1a7488bb855e4bd7c7dc39a1199fe80b82f83604be089f2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:21Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.976646 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 00:35:47.052795502 +0000 UTC Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.991983 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:06:21 crc kubenswrapper[4811]: E0122 09:06:21.992173 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.992042 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:06:21 crc kubenswrapper[4811]: E0122 09:06:21.992352 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:06:21 crc kubenswrapper[4811]: I0122 09:06:21.992006 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:06:21 crc kubenswrapper[4811]: E0122 09:06:21.992527 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.016343 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a14bfcc7d6d2ab5e1ad0c45999b2e73332e750ed65d9299e188505dafb26d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:22Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.055238 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae2e22607dec646b15563dc62f66655e7a5c2b7ad809406c907e5f687f80105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e290e203fab8f291146855a960477ceb9c33d965d15b4722c668dc2b3fded3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:22Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.059515 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.059552 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.059564 4811 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.059578 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.059588 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:22Z","lastTransitionTime":"2026-01-22T09:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.093946 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f776b1bc70646779cf35092ded02c00aa3b990c873de09d21e1f0d76258b0b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:22Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.135245 4811 generic.go:334] "Generic (PLEG): container finished" podID="3d23c9c9-89ca-4db5-99dc-1e5b9f80be38" containerID="b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc" exitCode=0 Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.135308 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9g4j8" 
event={"ID":"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38","Type":"ContainerDied","Data":"b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc"} Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.140514 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" event={"ID":"1cd0f0db-de53-47c0-9b45-2ce8b37392a3","Type":"ContainerStarted","Data":"4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5"} Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.142375 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-n2kj4" event={"ID":"66a7f3f9-ab88-4b1a-a28a-900480fa9651","Type":"ContainerStarted","Data":"a3a88defded9a5435941e2608c8a29baee840bc0ce48454a3731f344299347cf"} Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.142439 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-n2kj4" event={"ID":"66a7f3f9-ab88-4b1a-a28a-900480fa9651","Type":"ContainerStarted","Data":"3eada178e76edddd50078a528afc1855807d4374c23352a3aa2a0ac8459b78a3"} Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.149507 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1d28a-4ee8-45bb-9a86-3687d8db2105\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684f8ca14b2eabcbf9280085d495af9a6f1fa76fb187f15b8fa5659178efdc93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64382d1748f1eaeb9f2f09d243b35c56f9fd70b9a9765ddb64cb0c4866ed493b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metr
ics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://590bb419e3791ec3a267b9a5aa4022ee74ba5e302b047e85211563e135c4cf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b14bb77b0cf24bacdb8f5edae5d02ccea019b12feb315e6d6c4887feb1a9354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d10794bea5b8ea94b9247775e0c424fa9d481267c663d4bcae16eb4e7eb729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:22Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.161616 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.161758 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.161872 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.162139 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.162321 4811 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:22Z","lastTransitionTime":"2026-01-22T09:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.176501 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:22Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.214962 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfqgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2555861-d1bb-4f21-be4a-165ed9212932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ececfc4b2e4f2a120e7a3307235d606db00733e1a5631b6e19b5312afc8e8ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jt7cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfqgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:22Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.255074 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b44c40-e6d4-4902-98e9-a259269d8bf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 09:06:13.075349 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 09:06:13.075574 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:06:13.077171 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744166093/tls.crt::/tmp/serving-cert-1744166093/tls.key\\\\\\\"\\\\nI0122 09:06:13.324400 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:06:13.330700 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:06:13.330724 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:06:13.330758 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:06:13.330771 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:06:13.337838 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:06:13.337866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:06:13.337879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:06:13.337883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:06:13.337886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:06:13.338151 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:06:13.339610 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:22Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.266082 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.266114 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.266124 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.266138 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.266148 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:22Z","lastTransitionTime":"2026-01-22T09:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.295400 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:22Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.335053 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:22Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.368240 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.368272 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.368282 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.368299 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.368312 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:22Z","lastTransitionTime":"2026-01-22T09:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.373241 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xhhs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42dc1a6-d0c4-43e4-b9d9-b40c1f910400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64d7f1d53f9a3d9616b76ea12902134c5e0a3a30c9a61113675074caae70d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcfzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xhhs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:22Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.414838 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9g4j8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22
T09:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9g4j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:22Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.453087 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84068a6b-e189-419b-87f5-f31428f6eafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde8418c9f22553519ec0a9ba1ddcdd553f64aafb44fa002e924499701236bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbb067d04c1683ffd06d01aab41bb5988fdca44b8bd16012d35c42795d0d8011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-txvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:22Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.469900 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.469932 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.469941 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.469954 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.469963 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:22Z","lastTransitionTime":"2026-01-22T09:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.497320 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5dbe63-08fa-4157-9c52-003944e50d7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3bf6cdac4751814ecd71637d9e80cbdda1c0e99714275fe77cfa858b3823b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4419625f66ae957b2d9b146fc8b085a4a95c3f47860d6b4847f8fe71a6c4c86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7830976d23e073a099c3c6af1fa2392d9504ad9d41ddf2ee63c16632c22f833a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45fa5d37dcd1dcae1a7488bb855e4bd7c7dc39a1199fe80b82f83604be089f2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:22Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.534457 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a14bfcc7d6d2ab5e1ad0c45999b2e73332e750ed65d9299e188505dafb26d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:22Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.572299 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.572329 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.572338 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.572351 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.572359 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:22Z","lastTransitionTime":"2026-01-22T09:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.575441 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae2e22607dec646b15563dc62f66655e7a5c2b7ad809406c907e5f687f80105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e290e203fab8f291146855a960477ceb9c33d965d15b4722c668dc2b3fded3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:22Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.619187 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-274vf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:22Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.652773 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n2kj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66a7f3f9-ab88-4b1a-a28a-900480fa9651\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3a88defded9a5435941e2608c8a29baee840bc0ce48454a3731f344299347cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47mj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n2kj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:22Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.674208 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.674333 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.674531 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.674696 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.674844 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:22Z","lastTransitionTime":"2026-01-22T09:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.693780 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f776b1bc70646779cf35092ded02c00aa3b990c873de09d21e1f0d76258b0b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:22Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.738490 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1d28a-4ee8-45bb-9a86-3687d8db2105\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684f8ca14b2eabcbf9280085d495af9a6f1fa76fb187f15b8fa5659178efdc93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64382d1748f1eaeb9f2f09d243b35c56f9fd70b9a9765ddb64cb0c4866ed493b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://590bb419e3791ec3a267b9a5aa4022ee74ba5e302b047e85211563e135c4cf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b14bb77b0cf24bacdb8f5edae5d02ccea019b12feb315e6d6c4887feb1a9354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d10794bea5b8ea94b9247775e0c424fa9d481267c663d4bcae16eb4e7eb729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68
e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:22Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.773235 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:22Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.776980 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.777057 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.777112 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.777165 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.777214 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:22Z","lastTransitionTime":"2026-01-22T09:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.818176 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfqgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2555861-d1bb-4f21-be4a-165ed9212932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ececfc4b2e4f2a120e7a3307235d606db00733e1a5631b6e19b5312afc8e8ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jt7cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfqgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:22Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.878845 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.878958 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.879047 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.879122 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.879183 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:22Z","lastTransitionTime":"2026-01-22T09:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.977752 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 04:54:51.337979312 +0000 UTC Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.981262 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.981302 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.981312 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.981327 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:22 crc kubenswrapper[4811]: I0122 09:06:22.981336 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:22Z","lastTransitionTime":"2026-01-22T09:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.082951 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.082981 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.082992 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.083004 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.083012 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:23Z","lastTransitionTime":"2026-01-22T09:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.147115 4811 generic.go:334] "Generic (PLEG): container finished" podID="3d23c9c9-89ca-4db5-99dc-1e5b9f80be38" containerID="66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84" exitCode=0 Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.147160 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9g4j8" event={"ID":"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38","Type":"ContainerDied","Data":"66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84"} Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.159204 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f776b1bc70646779cf35092ded02c00aa3b990c873de09d21e1f0d76258b0b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:23Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.177917 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1d28a-4ee8-45bb-9a86-3687d8db2105\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684f8ca14b2eabcbf9280085d495af9a6f1fa76fb187f15b8fa5659178efdc93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64382d1748f1eaeb9f2f09d243b35c56f9fd70b9a9765ddb64cb0c4866ed493b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://590bb419e3791ec3a267b9a5aa4022ee74ba5e302b047e85211563e135c4cf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b14bb77b0cf24bacdb8f5edae5d02ccea019b1
2feb315e6d6c4887feb1a9354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d10794bea5b8ea94b9247775e0c424fa9d481267c663d4bcae16eb4e7eb729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:23Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.185225 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.185342 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.185401 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.185492 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.185676 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:23Z","lastTransitionTime":"2026-01-22T09:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.189403 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:23Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.200017 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfqgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2555861-d1bb-4f21-be4a-165ed9212932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ececfc4b2e4f2a120e7a3307235d606db00733e1a5631b6e19b5312afc8e8ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jt7cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfqgt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:23Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.210046 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b44c40-e6d4-4902-98e9-a259269d8bf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 09:06:13.075349 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 09:06:13.075574 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:06:13.077171 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744166093/tls.crt::/tmp/serving-cert-1744166093/tls.key\\\\\\\"\\\\nI0122 09:06:13.324400 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:06:13.330700 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:06:13.330724 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:06:13.330758 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:06:13.330771 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:06:13.337838 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:06:13.337866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:06:13.337879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:06:13.337883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:06:13.337886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:06:13.338151 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:06:13.339610 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:23Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.218454 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:23Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.227378 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:23Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.234822 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xhhs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42dc1a6-d0c4-43e4-b9d9-b40c1f910400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64d7f1d53f9a3d9616b76ea12902134c5e0a3a30c9a61113675074caae70d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcfzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xhhs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T09:06:23Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.244282 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9g4j8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\
\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\
"2026-01-22T09:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9g4j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:23Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.252138 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84068a6b-e189-419b-87f5-f31428f6eafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde8418c9f22553519ec0a9ba1ddcdd553f64aafb44fa002e924499701236bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbb067d04c1683ffd06d01aab41bb5988fdca44b8bd16012d35c42795d0d8011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-txvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:23Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.260506 4811 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5dbe63-08fa-4157-9c52-003944e50d7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3bf6cdac4751814ecd71637d9e80cbdda1c0e99714275fe77cfa858b3823b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4419625f66ae957b2d9b146fc8b085a4a95c3f47860d6b4847f8fe71a6c4c86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7830976d23e073a099c3c6af1fa2392d9504ad9d41ddf2ee63c16632c22f833a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45fa5d37dcd1dcae1a7488bb855
e4bd7c7dc39a1199fe80b82f83604be089f2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:23Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.287912 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.287937 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.287946 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.287957 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.287967 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:23Z","lastTransitionTime":"2026-01-22T09:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.293847 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a14bfcc7d6d2ab5e1ad0c45999b2e73332e750ed65d9299e188505dafb26d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:23Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.336119 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae2e22607dec646b15563dc62f66655e7a5c2b7ad809406c907e5f687f80105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e290e203fab8f291146855a960477ceb9c33d965d15b4722c668dc2b3fded3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:23Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.380529 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-274vf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:23Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.389549 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.389615 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.389660 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.389684 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.389709 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:23Z","lastTransitionTime":"2026-01-22T09:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.413023 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n2kj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66a7f3f9-ab88-4b1a-a28a-900480fa9651\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3a88defded9a5435941e2608c8a29baee840bc0ce48454a3731f344299347cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47mj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n2kj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:23Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.493232 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.493266 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.493277 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.493292 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.493304 4811 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:23Z","lastTransitionTime":"2026-01-22T09:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.596127 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.596178 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.596188 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.596208 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.596221 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:23Z","lastTransitionTime":"2026-01-22T09:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.698815 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.698854 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.698863 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.698881 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.698896 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:23Z","lastTransitionTime":"2026-01-22T09:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.801482 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.801792 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.801803 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.801818 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.801828 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:23Z","lastTransitionTime":"2026-01-22T09:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.903693 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.903738 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.903747 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.903760 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.903769 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:23Z","lastTransitionTime":"2026-01-22T09:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.978664 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 23:00:55.821070686 +0000 UTC Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.991973 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.991972 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:06:23 crc kubenswrapper[4811]: I0122 09:06:23.992031 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:06:23 crc kubenswrapper[4811]: E0122 09:06:23.992158 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:06:23 crc kubenswrapper[4811]: E0122 09:06:23.992242 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:06:23 crc kubenswrapper[4811]: E0122 09:06:23.992297 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.005753 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.005789 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.005801 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.005814 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.005824 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:24Z","lastTransitionTime":"2026-01-22T09:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.107973 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.108018 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.108028 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.108045 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.108055 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:24Z","lastTransitionTime":"2026-01-22T09:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.154747 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" event={"ID":"1cd0f0db-de53-47c0-9b45-2ce8b37392a3","Type":"ContainerStarted","Data":"c49ed8e6951254e31c1b622b0f32fc97b8a1f92bc9da38d5626a6e6148ee535b"} Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.155059 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.159903 4811 generic.go:334] "Generic (PLEG): container finished" podID="3d23c9c9-89ca-4db5-99dc-1e5b9f80be38" containerID="016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b" exitCode=0 Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.159958 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9g4j8" event={"ID":"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38","Type":"ContainerDied","Data":"016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b"} Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.169724 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae2e22607dec646b15563dc62f66655e7a5c2b7ad809406c907e5f687f80105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e290e203fab8f291146855a960477ceb9c33d965d15b4722c668dc2b3fded3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:24Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.182740 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.187228 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c49ed8e6951254e31c1b622b0f32fc97b8a1f92bc9da38d5626a6e6148ee535b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-274vf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:24Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.197058 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n2kj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66a7f3f9-ab88-4b1a-a28a-900480fa9651\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3a88defded9a5435941e2608c8a29baee840bc0ce48454a3731f344299347cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47mj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n2kj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:24Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.208477 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5dbe63-08fa-4157-9c52-003944e50d7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3bf6cdac4751814ecd71637d9e80cbdda1c0e99714275fe77cfa858b3823b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4419625f66ae957b2d9b146fc8b085a4a95c3f47860d6b4847f8fe71a6c4c86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7830976d23e073a099c3c6af1fa2392d9504ad9d41ddf2ee63c16632c22f833a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45fa5d37dcd1dcae1a7488bb855e4bd7c7dc39a1199fe80b82f83604be089f2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:24Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.210089 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.210118 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.210127 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.210145 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.210155 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:24Z","lastTransitionTime":"2026-01-22T09:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.220141 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a14bfcc7d6d2ab5e1ad0c45999b2e73332e750ed65d9299e188505dafb26d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:24Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.231209 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f776b1bc70646779cf35092ded02c00aa3b990c873de09d21e1f0d76258b0b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:24Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.246095 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1d28a-4ee8-45bb-9a86-3687d8db2105\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684f8ca14b2eabcbf9280085d495af9a6f1fa76fb187f15b8fa5659178efdc93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64382d1748f1eaeb9f2f09d243b35c56f9fd70b9a9765ddb64cb0c4866ed493b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://590bb419e3791ec3a267b9a5aa4022ee74ba5e302b047e85211563e135c4cf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b14bb77b0cf24bacdb8f5edae5d02ccea019b1
2feb315e6d6c4887feb1a9354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d10794bea5b8ea94b9247775e0c424fa9d481267c663d4bcae16eb4e7eb729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:24Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.268815 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:24Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.282122 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfqgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2555861-d1bb-4f21-be4a-165ed9212932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ececfc4b2e4f2a120e7a3307235d606db00733e1a5631b6e19b5312afc8e8ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jt7cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfqgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:24Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.297160 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xhhs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42dc1a6-d0c4-43e4-b9d9-b40c1f910400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64d7f1d53f9a3d9616b76ea12902134c5e0a3a30c9a61113675074caae70d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcfzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xhhs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:24Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.311792 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.311826 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.311836 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.311851 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.311861 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:24Z","lastTransitionTime":"2026-01-22T09:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.325491 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9g4j8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9g4j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:24Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.337660 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84068a6b-e189-419b-87f5-f31428f6eafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde8418c9f22553519ec0a9ba1ddcdd553f64aafb44fa002e924499701236bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbb067d04c1683ffd06d01aab41bb5988fdca44b8bd16012d35c42795d0d8011\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-txvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:24Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.353065 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b44c40-e6d4-4902-98e9-a259269d8bf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 09:06:13.075349 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 09:06:13.075574 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:06:13.077171 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744166093/tls.crt::/tmp/serving-cert-1744166093/tls.key\\\\\\\"\\\\nI0122 09:06:13.324400 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:06:13.330700 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:06:13.330724 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:06:13.330758 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:06:13.330771 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:06:13.337838 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:06:13.337866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:06:13.337879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:06:13.337883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:06:13.337886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:06:13.338151 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:06:13.339610 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:24Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.367937 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:24Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.378418 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:24Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.388023 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b44c40-e6d4-4902-98e9-a259269d8bf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 09:06:13.075349 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 09:06:13.075574 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:06:13.077171 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744166093/tls.crt::/tmp/serving-cert-1744166093/tls.key\\\\\\\"\\\\nI0122 09:06:13.324400 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:06:13.330700 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:06:13.330724 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:06:13.330758 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:06:13.330771 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:06:13.337838 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:06:13.337866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:06:13.337879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:06:13.337883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:06:13.337886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:06:13.338151 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:06:13.339610 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:24Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.398043 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:24Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.407389 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:24Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.413680 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.413724 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.413736 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.413750 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.413760 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:24Z","lastTransitionTime":"2026-01-22T09:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.414858 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xhhs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42dc1a6-d0c4-43e4-b9d9-b40c1f910400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64d7f1d53f9a3d9616b76ea12902134c5e0a3a30c9a61113675074caae70d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcfzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xhhs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:24Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.428390 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9g4j8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9g4j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:24Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.438224 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84068a6b-e189-419b-87f5-f31428f6eafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde8418c9f22553519ec0a9ba1ddcdd553f64aafb44fa002e924499701236bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbb067d04c1683ffd06d01aab41bb5988fdca44b8bd16012d35c42795d0d8011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-txvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:24Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.447529 4811 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5dbe63-08fa-4157-9c52-003944e50d7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3bf6cdac4751814ecd71637d9e80cbdda1c0e99714275fe77cfa858b3823b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4419625f66ae957b2d9b146fc8b085a4a95c3f47860d6b4847f8fe71a6c4c86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7830976d23e073a099c3c6af1fa2392d9504ad9d41ddf2ee63c16632c22f833a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45fa5d37dcd1dcae1a7488bb855
e4bd7c7dc39a1199fe80b82f83604be089f2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:24Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.457105 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a14bfcc7d6d2ab5e1ad0c45999b2e73332e750ed65d9299e188505dafb26d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:24Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.466418 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae2e22607dec646b15563dc62f66655e7a5c2b7ad809406c907e5f687f80105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e290e203fab8f291146855a960477ceb9c33d965d15b4722c668dc2b3fded3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:24Z is after 
2025-08-24T17:21:41Z" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.482028 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\
"cri-o://c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c49ed8e6951254e31c1b622b0f32fc97b8a1f92bc9da38d5626a6e6148ee535b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{
\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-274vf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:24Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.489610 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n2kj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66a7f3f9-ab88-4b1a-a28a-900480fa9651\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3a88defded9a5435941e2608c8a29baee840bc0ce48454a3731f344299347cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47mj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n2kj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:24Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.498333 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f776b1bc70646779cf35092ded02c00aa3b990c873de09d21e1f0d76258b0b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:24Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.515360 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.515396 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.515408 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.515422 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.515432 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:24Z","lastTransitionTime":"2026-01-22T09:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.541066 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1d28a-4ee8-45bb-9a86-3687d8db2105\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684f8ca14b2eabcbf9280085d495af9a6f1fa76fb187f15b8fa5659178efdc93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64382d1748f1eaeb9f2f09d243b35c56f9fd70b9a9765ddb64cb0c4866ed493b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://590bb419e3791ec3a267b9a5aa4022ee74ba5e302b047e85211563e135c4cf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b14bb77b0cf24bacdb8f5edae5d02ccea019b12feb315e6d6c4887feb1a9354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d10794bea5b8ea94b9247775e0c424fa9d481267c663d4bcae16eb4e7eb729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:24Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.575025 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:24Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.614949 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfqgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2555861-d1bb-4f21-be4a-165ed9212932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ececfc4b2e4f2a120e7a3307235d606db00733e1a5631b6e19b5312afc8e8ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jt7cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfqgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:24Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.617527 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.617560 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.617571 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.617586 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.617596 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:24Z","lastTransitionTime":"2026-01-22T09:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.719668 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.719712 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.719723 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.719738 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.719749 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:24Z","lastTransitionTime":"2026-01-22T09:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.822458 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.822505 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.822516 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.822544 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.822561 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:24Z","lastTransitionTime":"2026-01-22T09:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.924913 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.925167 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.925237 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.925322 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.925379 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:24Z","lastTransitionTime":"2026-01-22T09:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:24 crc kubenswrapper[4811]: I0122 09:06:24.980170 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 07:57:06.743864034 +0000 UTC Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.027418 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.027448 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.027458 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.027471 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.027481 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:25Z","lastTransitionTime":"2026-01-22T09:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.129920 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.129959 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.129967 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.129980 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.129989 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:25Z","lastTransitionTime":"2026-01-22T09:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.165121 4811 generic.go:334] "Generic (PLEG): container finished" podID="3d23c9c9-89ca-4db5-99dc-1e5b9f80be38" containerID="4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40" exitCode=0 Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.165253 4811 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.165219 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9g4j8" event={"ID":"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38","Type":"ContainerDied","Data":"4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40"} Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.165571 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.190663 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b44c40-e6d4-4902-98e9-a259269d8bf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 09:06:13.075349 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 09:06:13.075574 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:06:13.077171 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744166093/tls.crt::/tmp/serving-cert-1744166093/tls.key\\\\\\\"\\\\nI0122 09:06:13.324400 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:06:13.330700 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:06:13.330724 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:06:13.330758 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:06:13.330771 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:06:13.337838 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:06:13.337866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:06:13.337879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:06:13.337883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:06:13.337886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:06:13.338151 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:06:13.339610 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:25Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.206916 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.210499 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:25Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.222821 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:25Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.232323 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.232361 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.232371 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.232385 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.232394 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:25Z","lastTransitionTime":"2026-01-22T09:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.235213 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xhhs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42dc1a6-d0c4-43e4-b9d9-b40c1f910400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64d7f1d53f9a3d9616b76ea12902134c5e0a3a30c9a61113675074caae70d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcfzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xhhs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:25Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.253073 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9g4j8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9g4j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:25Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.261753 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84068a6b-e189-419b-87f5-f31428f6eafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde8418c9f22553519ec0a9ba1ddcdd553f64aafb44fa002e924499701236bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbb067d04c1683ffd06d01aab41bb5988fdca44b8bd16012d35c42795d0d8011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-txvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:25Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.272476 4811 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5dbe63-08fa-4157-9c52-003944e50d7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3bf6cdac4751814ecd71637d9e80cbdda1c0e99714275fe77cfa858b3823b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4419625f66ae957b2d9b146fc8b085a4a95c3f47860d6b4847f8fe71a6c4c86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7830976d23e073a099c3c6af1fa2392d9504ad9d41ddf2ee63c16632c22f833a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45fa5d37dcd1dcae1a7488bb855
e4bd7c7dc39a1199fe80b82f83604be089f2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:25Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.283186 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a14bfcc7d6d2ab5e1ad0c45999b2e73332e750ed65d9299e188505dafb26d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:25Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.292420 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae2e22607dec646b15563dc62f66655e7a5c2b7ad809406c907e5f687f80105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e290e203fab8f291146855a960477ceb9c33d965d15b4722c668dc2b3fded3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:25Z is after 
2025-08-24T17:21:41Z" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.307133 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\
"cri-o://c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c49ed8e6951254e31c1b622b0f32fc97b8a1f92bc9da38d5626a6e6148ee535b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{
\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-274vf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:25Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.316272 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n2kj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66a7f3f9-ab88-4b1a-a28a-900480fa9651\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3a88defded9a5435941e2608c8a29baee840bc0ce48454a3731f344299347cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47mj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n2kj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:25Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.326202 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f776b1bc70646779cf35092ded02c00aa3b990c873de09d21e1f0d76258b0b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:25Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.334091 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.334125 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.334134 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.334147 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.334157 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:25Z","lastTransitionTime":"2026-01-22T09:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.341906 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1d28a-4ee8-45bb-9a86-3687d8db2105\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684f8ca14b2eabcbf9280085d495af9a6f1fa76fb187f15b8fa5659178efdc93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64382d1748f1eaeb9f2f09d243b35c56f9fd70b9a9765ddb64cb0c4866ed493b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://590bb419e3791ec3a267b9a5aa4022ee74ba5e302b047e85211563e135c4cf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b14bb77b0cf24bacdb8f5edae5d02ccea019b12feb315e6d6c4887feb1a9354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d10794bea5b8ea94b9247775e0c424fa9d481267c663d4bcae16eb4e7eb729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:25Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.353408 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:25Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.364596 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfqgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2555861-d1bb-4f21-be4a-165ed9212932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ececfc4b2e4f2a120e7a3307235d606db00733e1a5631b6e19b5312afc8e8ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jt7cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfqgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:25Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.375007 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:25Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.384486 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xhhs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42dc1a6-d0c4-43e4-b9d9-b40c1f910400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64d7f1d53f9a3d9616b76ea12902134c5e0a3a30c9a61113675074caae70d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcfzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xhhs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T09:06:25Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.395432 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9g4j8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c4
b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9g4j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:25Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.409810 4811 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84068a6b-e189-419b-87f5-f31428f6eafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde8418c9f22553519ec0a9ba1ddcdd553f64aafb44fa002e924499701236bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbb067d04c1683ffd06d01aab41bb5988fdca44b8bd16012d35c42795d0d8011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-txvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:25Z is after 
2025-08-24T17:21:41Z" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.420209 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b44c40-e6d4-4902-98e9-a259269d8bf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 09:06:13.075349 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 09:06:13.075574 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:06:13.077171 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744166093/tls.crt::/tmp/serving-cert-1744166093/tls.key\\\\\\\"\\\\nI0122 09:06:13.324400 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:06:13.330700 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:06:13.330724 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:06:13.330758 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:06:13.330771 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:06:13.337838 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:06:13.337866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:06:13.337879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:06:13.337883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:06:13.337886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:06:13.338151 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:06:13.339610 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:25Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.436576 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.436614 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.436643 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.436659 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.436670 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:25Z","lastTransitionTime":"2026-01-22T09:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.454145 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:25Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.495193 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a14bfcc7d6d2ab5e1ad0c45999b2e73332e750ed65d9299e188505dafb26d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:25Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.536746 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae2e22607dec646b15563dc62f66655e7a5c2b7ad809406c907e5f687f80105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e290e203fab8f291146855a960477ceb9c33d965d15b4722c668dc2b3fded3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:25Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.538619 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.538662 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.538673 4811 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.538686 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.538697 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:25Z","lastTransitionTime":"2026-01-22T09:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.578594 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c49ed8e6951254e31c1b622b0f32fc97b8a1f92b
c9da38d5626a6e6148ee535b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-274vf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:25Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.619484 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n2kj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66a7f3f9-ab88-4b1a-a28a-900480fa9651\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3a88defded9a5435941e2608c8a29baee840bc0ce48454a3731f344299347cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47mj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n2kj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:25Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.641429 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.641467 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.641477 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.641492 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.641501 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:25Z","lastTransitionTime":"2026-01-22T09:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.656745 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5dbe63-08fa-4157-9c52-003944e50d7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3bf6cdac4751814ecd71637d9e80cbdda1c0e99714275fe77cfa858b3823b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4419625f66ae957b2d9b146fc8b085a4a95c3f47860d6b4847f8fe71a6c4c86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7830976d23e073a099c3c6af1fa2392d9504ad9d41ddf2ee63c16632c22f833a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45fa5d37dcd1dcae1a7488bb855e4bd7c7dc39a1199fe80b82f83604be089f2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:25Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.694967 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f776b1bc70646779cf35092ded02c00aa3b990c873de09d21e1f0d76258b0b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:25Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.736760 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfqgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2555861-d1bb-4f21-be4a-165ed9212932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ececfc4b2e4f2a120e7a3307235d606db00733e1a5631b6e19b5312afc8e8ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jt7cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfqgt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:25Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.743446 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.743477 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.743488 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.743503 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.743513 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:25Z","lastTransitionTime":"2026-01-22T09:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.779866 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1d28a-4ee8-45bb-9a86-3687d8db2105\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684f8ca14b2eabcbf9280085d495af9a6f1fa76fb187f15b8fa5659178efdc93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64382d1748f1eaeb9f2f09d243b35c56f9fd70b9a9765dd
b64cb0c4866ed493b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://590bb419e3791ec3a267b9a5aa4022ee74ba5e302b047e85211563e135c4cf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b14bb77b0cf24bacdb8f5edae5d02ccea019b12feb315e6d6c4887feb1a9354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d10794bea5b8ea94b9247775e0c424fa9d481267c663d4bcae16eb4e7eb729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:25Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.816579 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:25Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.845337 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.845380 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.845389 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.845404 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.845413 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:25Z","lastTransitionTime":"2026-01-22T09:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.874866 4811 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.946752 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.946791 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.946804 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.946820 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.946836 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:25Z","lastTransitionTime":"2026-01-22T09:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.980462 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 01:39:13.474970944 +0000 UTC Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.991771 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.991794 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:06:25 crc kubenswrapper[4811]: I0122 09:06:25.991844 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:06:25 crc kubenswrapper[4811]: E0122 09:06:25.991889 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:06:25 crc kubenswrapper[4811]: E0122 09:06:25.991951 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:06:25 crc kubenswrapper[4811]: E0122 09:06:25.992008 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.001520 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f776b1bc70646779cf35092ded02c00aa3b990c873de09d21e1f0d76258b0b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:25Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.011546 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfqgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2555861-d1bb-4f21-be4a-165ed9212932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ececfc4b2e4f2a120e7a3307235d606db00733e1a5631b6e19b5312afc8e8ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jt7cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfqgt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:26Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.030737 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1d28a-4ee8-45bb-9a86-3687d8db2105\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684f8ca14b2eabcbf9280085d495af9a6f1fa76fb187f15b8fa5659178efdc93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64382d1748f1eaeb9f2f09d243b35c56f9fd70b9a9765ddb64cb0c4866ed493b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://590bb419e3791ec3a267b9a5aa4022ee74ba5e302b047e85211563e135c4cf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b14bb77b0cf24bacdb8f5edae5d02ccea019b12feb315e6d6c4887feb1a9354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d10794bea5b8ea94b9247775e0c424fa9d481267c663d4bcae16eb4e7eb729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:26Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.041885 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:26Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.048952 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.049087 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.049155 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.049231 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.049306 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:26Z","lastTransitionTime":"2026-01-22T09:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.052191 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:26Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.063953 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xhhs7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42dc1a6-d0c4-43e4-b9d9-b40c1f910400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64d7f1d53f9a3d9616b76ea12902134c5e0a3a30c9a61113675074caae70d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcfzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xhhs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:26Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.095190 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9g4j8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://016
900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9g4j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:26Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.133547 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84068a6b-e189-419b-87f5-f31428f6eafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde8418c9f22553519ec0a9ba1ddcdd553f64aafb44fa002e924499701236bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbb067d04c1683ffd06d01aab41bb5988fdca44b8bd16012d35c42795d0d8011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-txvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:26Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.151994 4811 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.152048 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.152059 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.152073 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.152085 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:26Z","lastTransitionTime":"2026-01-22T09:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.171884 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9g4j8" event={"ID":"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38","Type":"ContainerStarted","Data":"279ca4748f2c8229a6647c23a414668a7612c0c863b52ef33c5b3dd026de74ee"} Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.173511 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-274vf_1cd0f0db-de53-47c0-9b45-2ce8b37392a3/ovnkube-controller/0.log" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.176244 4811 generic.go:334] "Generic (PLEG): container finished" podID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerID="c49ed8e6951254e31c1b622b0f32fc97b8a1f92bc9da38d5626a6e6148ee535b" exitCode=1 Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.176299 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" event={"ID":"1cd0f0db-de53-47c0-9b45-2ce8b37392a3","Type":"ContainerDied","Data":"c49ed8e6951254e31c1b622b0f32fc97b8a1f92bc9da38d5626a6e6148ee535b"} Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.177011 4811 scope.go:117] "RemoveContainer" containerID="c49ed8e6951254e31c1b622b0f32fc97b8a1f92bc9da38d5626a6e6148ee535b" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.177933 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b44c40-e6d4-4902-98e9-a259269d8bf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 09:06:13.075349 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 09:06:13.075574 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:06:13.077171 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744166093/tls.crt::/tmp/serving-cert-1744166093/tls.key\\\\\\\"\\\\nI0122 09:06:13.324400 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:06:13.330700 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:06:13.330724 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:06:13.330758 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:06:13.330771 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:06:13.337838 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:06:13.337866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:06:13.337879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:06:13.337883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:06:13.337886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:06:13.338151 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:06:13.339610 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:26Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.215148 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:26Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.254613 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a14bfcc7d6d2ab5e1ad0c45999b2e73332e750ed65d9299e188505dafb26d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:26Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.254743 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.254772 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.254783 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.254801 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.254813 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:26Z","lastTransitionTime":"2026-01-22T09:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.295253 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae2e22607dec646b15563dc62f66655e7a5c2b7ad809406c907e5f687f80105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e290e203fab8f291146855a960477ceb9c33d965d15b4722c668dc2b3fded3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:26Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.339011 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c49ed8e6951254e31c1b622b0f32fc97b8a1f92bc9da38d5626a6e6148ee535b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-274vf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:26Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.356934 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.356968 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.356978 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.356992 4811 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.357004 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:26Z","lastTransitionTime":"2026-01-22T09:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.374650 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n2kj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66a7f3f9-ab88-4b1a-a28a-900480fa9651\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3a88defded9a5435941e2608c8a29baee840bc0ce48454a3731f344299347cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47mj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n2kj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:26Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.417138 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5dbe63-08fa-4157-9c52-003944e50d7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3bf6cdac4751814ecd71637d9e80cbdda1c0e99714275fe77cfa858b3823b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4419625f66ae957b2d9b146fc8b085a4a95c3f47860d6b4847f8fe71a6c4c86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7830976d23e073a099c3c6af1fa2392d9504ad9d41ddf2ee63c16632c22f833a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45fa5d37dcd1dcae1a7488bb855e4bd7c7dc39a1199fe80b82f83604be089f2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:26Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.458985 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.459024 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.459035 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.459050 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.459061 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:26Z","lastTransitionTime":"2026-01-22T09:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.459164 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:26Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.499120 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfqgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2555861-d1bb-4f21-be4a-165ed9212932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ececfc4b2e4f2a120e7a3307235d606db00733e1a5631b6e19b5312afc8e8ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jt7cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfqgt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:26Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.527109 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.527148 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.527158 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.527175 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.527186 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:26Z","lastTransitionTime":"2026-01-22T09:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:26 crc kubenswrapper[4811]: E0122 09:06:26.537491 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0066ca33-f035-4af3-9028-0da78d54d55e\\\",\\\"systemUUID\\\":\\\"463fdb35-dd0c-4804-a85e-31cd33c59ce4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:26Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.539028 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1d28a-4ee8-45bb-9a86-3687d8db2105\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684f8ca14b2eabcbf9280085d495af9a6f1fa76fb187f15b8fa5659178efdc93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64382d1748f1eaeb9f2f09d243b35c56f9fd70b9a9765ddb64cb0c4866ed493b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://590bb419e3791ec3a267b9a5aa4022ee74ba5e302b047e85211563e135c4cf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b14bb77b0cf24bacdb8f5edae5d02ccea019b1
2feb315e6d6c4887feb1a9354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d10794bea5b8ea94b9247775e0c424fa9d481267c663d4bcae16eb4e7eb729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:26Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.541697 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.541744 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.541755 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.541772 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.541782 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:26Z","lastTransitionTime":"2026-01-22T09:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:26 crc kubenswrapper[4811]: E0122 09:06:26.550881 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0066ca33-f035-4af3-9028-0da78d54d55e\\\",\\\"systemUUID\\\":\\\"463fdb35-dd0c-4804-a85e-31cd33c59ce4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:26Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.553772 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.553812 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.553822 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.553837 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.553847 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:26Z","lastTransitionTime":"2026-01-22T09:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:26 crc kubenswrapper[4811]: E0122 09:06:26.563341 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0066ca33-f035-4af3-9028-0da78d54d55e\\\",\\\"systemUUID\\\":\\\"463fdb35-dd0c-4804-a85e-31cd33c59ce4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:26Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.566182 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.566213 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.566223 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.566238 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.566248 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:26Z","lastTransitionTime":"2026-01-22T09:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.577039 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:26Z is after 2025-08-24T17:21:41Z"
Jan 22 09:06:26 crc kubenswrapper[4811]: E0122 09:06:26.577426 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{… identical status payload elided …}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:26Z is after 2025-08-24T17:21:41Z"
Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.580138 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.580170 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.580181 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.580196 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.580208 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:26Z","lastTransitionTime":"2026-01-22T09:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:26 crc kubenswrapper[4811]: E0122 09:06:26.589421 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0066ca33-f035-4af3-9028-0da78d54d55e\\\",\\\"systemUUID\\\":\\\"463fdb35-dd0c-4804-a85e-31cd33c59ce4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:26Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:26 crc kubenswrapper[4811]: E0122 09:06:26.589526 4811 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.591062 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.591107 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.591127 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.591142 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.591155 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:26Z","lastTransitionTime":"2026-01-22T09:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.614968 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:26Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.653581 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xhhs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42dc1a6-d0c4-43e4-b9d9-b40c1f910400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64d7f1d53f9a3d9616b76ea12902134c5e0a3a30c9a61113675074caae70d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcfzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xhhs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T09:06:26Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.692996 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.693055 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.693067 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.693083 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.693096 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:26Z","lastTransitionTime":"2026-01-22T09:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.697964 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9g4j8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://279ca4748f2c8229a6647c23a414668a7612c0c863b52ef33c5b3dd026de74ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f
8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/e
tc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40\\\",\\\"exitC
ode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9g4j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:26Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.740373 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84068a6b-e189-419b-87f5-f31428f6eafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde8418c9f22553519ec0a9ba1ddcdd553f64aafb44fa002e924499701236bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbb067d04c1683ffd06d01aab41bb5988fdca44b8bd16012d35c42795d0d8011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e9
5ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-txvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:26Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.775250 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b44c40-e6d4-4902-98e9-a259269d8bf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 09:06:13.075349 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 09:06:13.075574 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:06:13.077171 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744166093/tls.crt::/tmp/serving-cert-1744166093/tls.key\\\\\\\"\\\\nI0122 09:06:13.324400 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:06:13.330700 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:06:13.330724 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:06:13.330758 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:06:13.330771 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:06:13.337838 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:06:13.337866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:06:13.337879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:06:13.337883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:06:13.337886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:06:13.338151 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:06:13.339610 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:26Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.794937 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.794971 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.794980 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.794992 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.795004 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:26Z","lastTransitionTime":"2026-01-22T09:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.814441 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5dbe63-08fa-4157-9c52-003944e50d7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3bf6cdac4751814ecd71637d9e80cbdda1c0e99714275fe77cfa858b3823b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4419625f66ae957b2d9b146fc8b085a4a95c3f47860d6b4847f8fe71a6c4c86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7830976d23e073a099c3c6af1fa2392d9504ad9d41ddf2ee63c16632c22f833a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45fa5d37dcd1dcae1a7488bb855e4bd7c7dc39a1199fe80b82f83604be089f2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:26Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.854577 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a14bfcc7d6d2ab5e1ad0c45999b2e73332e750ed65d9299e188505dafb26d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:26Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.894307 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae2e22607dec646b15563dc62f66655e7a5c2b7ad809406c907e5f687f80105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e290e203fab8f291146855a960477ceb9c33d965d15b4722c668dc2b3fded3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:26Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.896821 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.896851 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.896863 4811 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.896879 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.896888 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:26Z","lastTransitionTime":"2026-01-22T09:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.937204 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c49ed8e6951254e31c1b622b0f32fc97b8a1f92b
c9da38d5626a6e6148ee535b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c49ed8e6951254e31c1b622b0f32fc97b8a1f92bc9da38d5626a6e6148ee535b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:06:25Z\\\",\\\"message\\\":\\\"plate:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0122 09:06:25.627287 6029 services_controller.go:452] Built service openshift-network-console/networking-console-plugin per-node LB for network=default: []services.LB{}\\\\nI0122 09:06:25.627310 6029 services_controller.go:453] Built service openshift-network-console/networking-console-plugin template LB for network=default: []services.LB{}\\\\nI0122 09:06:25.627330 6029 services_controller.go:454] Service openshift-network-console/networking-console-plugin for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0122 09:06:25.627137 6029 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0122 09:06:25.627362 6029 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0122 09:06:25.627368 6029 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nF0122 09:06:25.627292 6029 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-274vf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:26Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.973067 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n2kj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66a7f3f9-ab88-4b1a-a28a-900480fa9651\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3a88defded9a5435941e2608c8a29baee840bc0ce48454a3731f344299347cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47mj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.16
8.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n2kj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:26Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.981228 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 00:53:06.151423119 +0000 UTC Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.998961 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.999017 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.999028 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.999042 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:26 crc kubenswrapper[4811]: I0122 09:06:26.999052 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:26Z","lastTransitionTime":"2026-01-22T09:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.013184 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f776b1bc70646779cf35092ded02c00aa3b990c873de09d21e1f0d76258b0b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:27Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.101123 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.101155 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.101165 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.101179 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.101189 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:27Z","lastTransitionTime":"2026-01-22T09:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.179961 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-274vf_1cd0f0db-de53-47c0-9b45-2ce8b37392a3/ovnkube-controller/1.log" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.180402 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-274vf_1cd0f0db-de53-47c0-9b45-2ce8b37392a3/ovnkube-controller/0.log" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.182815 4811 generic.go:334] "Generic (PLEG): container finished" podID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerID="239f1575640cc017afbde44a6bc3d241facb1b9b7588e9c3902064b0192e9e97" exitCode=1 Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.182897 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" event={"ID":"1cd0f0db-de53-47c0-9b45-2ce8b37392a3","Type":"ContainerDied","Data":"239f1575640cc017afbde44a6bc3d241facb1b9b7588e9c3902064b0192e9e97"} Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.182943 4811 scope.go:117] "RemoveContainer" containerID="c49ed8e6951254e31c1b622b0f32fc97b8a1f92bc9da38d5626a6e6148ee535b" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.183370 4811 scope.go:117] "RemoveContainer" containerID="239f1575640cc017afbde44a6bc3d241facb1b9b7588e9c3902064b0192e9e97" Jan 22 09:06:27 crc kubenswrapper[4811]: E0122 09:06:27.183490 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-274vf_openshift-ovn-kubernetes(1cd0f0db-de53-47c0-9b45-2ce8b37392a3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.193759 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n2kj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66a7f3f9-ab88-4b1a-a28a-900480fa9651\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3a88defded9a5435941e2608c8a29baee840bc0ce48454a3731f344299347cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47mj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n2kj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:27Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.203294 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.203433 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.203572 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.203764 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.203919 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:27Z","lastTransitionTime":"2026-01-22T09:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.204495 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5dbe63-08fa-4157-9c52-003944e50d7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3bf6cdac4751814ecd71637d9e80cbdda1c0e99714275fe77cfa858b3823b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4419625f66ae957b2d9b146fc8b085a4a95c3f47860d6b4847f8fe71a6c4c86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7830976d23e073a099c3c6af1fa2392d9504ad9d41ddf2ee63c16632c22f833a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45fa5d37dcd1dcae1a7488bb855e4bd7c7dc39a1199fe80b82f83604be089f2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:27Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.215860 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a14bfcc7d6d2ab5e1ad0c45999b2e73332e750ed65d9299e188505dafb26d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:27Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.223797 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae2e22607dec646b15563dc62f66655e7a5c2b7ad809406c907e5f687f80105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e290e203fab8f291146855a960477ceb9c33d965d15b4722c668dc2b3fded3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:27Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.235763 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://239f1575640cc017afbde44a6bc3d241facb1b9b7588e9c3902064b0192e9e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c49ed8e6951254e31c1b622b0f32fc97b8a1f92bc9da38d5626a6e6148ee535b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:06:25Z\\\",\\\"message\\\":\\\"plate:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0122 09:06:25.627287 6029 services_controller.go:452] Built service openshift-network-console/networking-console-plugin per-node LB for network=default: []services.LB{}\\\\nI0122 09:06:25.627310 6029 services_controller.go:453] Built service openshift-network-console/networking-console-plugin template LB for network=default: []services.LB{}\\\\nI0122 09:06:25.627330 6029 services_controller.go:454] Service openshift-network-console/networking-console-plugin for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0122 09:06:25.627137 6029 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0122 09:06:25.627362 6029 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0122 09:06:25.627368 6029 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nF0122 09:06:25.627292 6029 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://239f1575640cc017afbde44a6bc3d241facb1b9b7588e9c3902064b0192e9e97\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"message\\\":\\\"k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:06:26.804316 6184 reflector.go:311] Stopping reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:06:26.804420 6184 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 09:06:26.804469 6184 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 09:06:26.804791 6184 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:06:26.805266 6184 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:06:26.805478 6184 factory.go:656] Stopping watch factory\\\\nI0122 09:06:26.805690 6184 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:06:26.827772 6184 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0122 09:06:26.827839 6184 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0122 09:06:26.827915 6184 ovnkube.go:599] Stopped ovnkube\\\\nI0122 09:06:26.827965 6184 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0122 09:06:26.828033 6184 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-274vf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:27Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.253342 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f776b1bc70646779cf35092ded02c00aa3b990c873de09d21e1f0d76258b0b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:27Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.286995 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.298192 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1d28a-4ee8-45bb-9a86-3687d8db2105\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684f8ca14b2eabcbf9280085d495af9a6f1fa76fb187f15b8fa5659178efdc93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64382d1748f1eaeb9f2f09d243b35c56f9fd70b9a9765ddb64cb0c4866ed493b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://590bb419e3791ec3a267b9a5aa4022ee74ba5e302b047e85211563e135c4cf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b14bb77b0cf24bacdb8f5edae5d02ccea019b1
2feb315e6d6c4887feb1a9354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d10794bea5b8ea94b9247775e0c424fa9d481267c663d4bcae16eb4e7eb729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:27Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.301181 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.305898 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.305928 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.305938 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.305959 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.305971 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:27Z","lastTransitionTime":"2026-01-22T09:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.333596 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:27Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.376487 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfqgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2555861-d1bb-4f21-be4a-165ed9212932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ececfc4b2e4f2a120e7a3307235d606db00733e1a5631b6e19b5312afc8e8ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jt7cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfqgt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:27Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.408062 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.408092 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.408103 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.408115 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.408123 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:27Z","lastTransitionTime":"2026-01-22T09:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.412411 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84068a6b-e189-419b-87f5-f31428f6eafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde8418c9f22553519ec0a9ba1ddcdd553f64aafb44fa002e924499701236bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbb067d04c1683ffd06d01aab41bb5988fdca44b8bd16012d35c42795d0d8011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-txvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:27Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.453968 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b44c40-e6d4-4902-98e9-a259269d8bf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 09:06:13.075349 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 09:06:13.075574 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:06:13.077171 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744166093/tls.crt::/tmp/serving-cert-1744166093/tls.key\\\\\\\"\\\\nI0122 09:06:13.324400 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:06:13.330700 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:06:13.330724 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:06:13.330758 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:06:13.330771 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:06:13.337838 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:06:13.337866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:06:13.337879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:06:13.337883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:06:13.337886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:06:13.338151 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:06:13.339610 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:27Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.493960 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:27Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.509758 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.509807 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.509821 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.509842 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.509857 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:27Z","lastTransitionTime":"2026-01-22T09:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.533375 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:27Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.572191 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xhhs7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42dc1a6-d0c4-43e4-b9d9-b40c1f910400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64d7f1d53f9a3d9616b76ea12902134c5e0a3a30c9a61113675074caae70d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcfzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xhhs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:27Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.611933 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.611973 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.611984 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.611996 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.612007 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:27Z","lastTransitionTime":"2026-01-22T09:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.617471 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9g4j8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://279ca4748f2c8229a6647c23a414668a7612c0c863b52ef33c5b3dd026de74ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c4b6317
408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9g4j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:27Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.658538 4811 
status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1d28a-4ee8-45bb-9a86-3687d8db2105\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684f8ca14b2eabcbf9280085d495af9a6f1fa76fb187f15b8fa5659178efdc93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64382d1748f1eaeb9f2f09d243b35c56f9fd70b9a9765ddb64cb0c4866ed493b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://590bb419e3791ec3a267b9a5aa4022ee74ba5e302b047e85211563e135c4cf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b14bb77b0cf24bacdb8f5edae5d02ccea019b12feb315e6d6c4887feb1a9354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d10794bea5b8ea94b9247775e0c424fa9d481267c663d4bcae16eb4e7eb729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:
05:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:27Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.692950 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:27Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.714755 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.714787 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.714796 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.714808 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.714819 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:27Z","lastTransitionTime":"2026-01-22T09:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.733744 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfqgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2555861-d1bb-4f21-be4a-165ed9212932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ececfc4b2e4f2a120e7a3307235d606db00733e1a5631b6e19b5312afc8e8ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jt7cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfqgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:27Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.774532 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b44c40-e6d4-4902-98e9-a259269d8bf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sh
a256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 09:06:13.075349 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 09:06:13.075574 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:06:13.077171 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744166093/tls.crt::/tmp/serving-cert-1744166093/tls.key\\\\\\\"\\\\nI0122 09:06:13.324400 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:06:13.330700 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:06:13.330724 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:06:13.330758 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:06:13.330771 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:06:13.337838 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:06:13.337866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:06:13.337879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:06:13.337883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:06:13.337886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:06:13.338151 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:06:13.339610 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:27Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.814734 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:27Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.816607 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.816742 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.816818 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.816887 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.816955 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:27Z","lastTransitionTime":"2026-01-22T09:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.854144 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:27Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.892648 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xhhs7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42dc1a6-d0c4-43e4-b9d9-b40c1f910400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64d7f1d53f9a3d9616b76ea12902134c5e0a3a30c9a61113675074caae70d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcfzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xhhs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:27Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.919030 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.919140 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.919207 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.919269 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.919340 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:27Z","lastTransitionTime":"2026-01-22T09:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.934794 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9g4j8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://279ca4748f2c8229a6647c23a414668a7612c0c863b52ef33c5b3dd026de74ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c4b6317
408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9g4j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:27Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.973306 4811 
status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84068a6b-e189-419b-87f5-f31428f6eafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde8418c9f22553519ec0a9ba1ddcdd553f64aafb44fa002e924499701236bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbb067d04c1683ffd06d01aab41bb5988fdca44b8bd16012d35c42795d0d8011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-txvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T09:06:27Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.981535 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 06:21:14.508136305 +0000 UTC Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.991611 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.991697 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:06:27 crc kubenswrapper[4811]: I0122 09:06:27.991723 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:06:27 crc kubenswrapper[4811]: E0122 09:06:27.991793 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:06:27 crc kubenswrapper[4811]: E0122 09:06:27.991887 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:06:27 crc kubenswrapper[4811]: E0122 09:06:27.992049 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.021538 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.021576 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.021585 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.021599 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.021610 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:28Z","lastTransitionTime":"2026-01-22T09:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.028813 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5dbe63-08fa-4157-9c52-003944e50d7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3bf6cdac4751814ecd71637d9e80cbdda1c0e99714275fe77cfa858b3823b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4419625f66ae957b2d9b146fc8b085a4a95c3f47860d6b4847f8fe71a6c4c86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7830976d23e073a099c3c6af1fa2392d9504ad9d41ddf2ee63c16632c22f833a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45fa5d37dcd1dcae1a7488bb855e4bd7c7dc39a1199fe80b82f83604be089f2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:28Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.058960 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a14bfcc7d6d2ab5e1ad0c45999b2e73332e750ed65d9299e188505dafb26d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:28Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.095062 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae2e22607dec646b15563dc62f66655e7a5c2b7ad809406c907e5f687f80105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e290e203fab8f291146855a960477ceb9c33d965d15b4722c668dc2b3fded3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:28Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.123657 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.123692 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.123701 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.123725 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.123737 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:28Z","lastTransitionTime":"2026-01-22T09:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.140084 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://239f1575640cc017afbde44a6bc3d241facb1b9b
7588e9c3902064b0192e9e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c49ed8e6951254e31c1b622b0f32fc97b8a1f92bc9da38d5626a6e6148ee535b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:06:25Z\\\",\\\"message\\\":\\\"plate:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0122 09:06:25.627287 6029 services_controller.go:452] Built service openshift-network-console/networking-console-plugin per-node LB for network=default: []services.LB{}\\\\nI0122 09:06:25.627310 6029 services_controller.go:453] Built service openshift-network-console/networking-console-plugin template LB for network=default: []services.LB{}\\\\nI0122 09:06:25.627330 6029 services_controller.go:454] Service openshift-network-console/networking-console-plugin for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0122 09:06:25.627137 6029 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0122 09:06:25.627362 6029 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0122 09:06:25.627368 6029 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nF0122 09:06:25.627292 6029 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://239f1575640cc017afbde44a6bc3d241facb1b9b7588e9c3902064b0192e9e97\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"message\\\":\\\"k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:06:26.804316 6184 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:06:26.804420 6184 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 09:06:26.804469 6184 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 09:06:26.804791 6184 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:06:26.805266 6184 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:06:26.805478 6184 factory.go:656] Stopping watch factory\\\\nI0122 09:06:26.805690 6184 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:06:26.827772 6184 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0122 09:06:26.827839 6184 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0122 09:06:26.827915 6184 ovnkube.go:599] Stopped 
ovnkube\\\\nI0122 09:06:26.827965 6184 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0122 09:06:26.828033 6184 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-274vf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:28Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.173681 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n2kj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66a7f3f9-ab88-4b1a-a28a-900480fa9651\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3a88defded9a5435941e2608c8a29baee840bc0ce48454a3731f344299347cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47mj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n2kj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:28Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.188082 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-274vf_1cd0f0db-de53-47c0-9b45-2ce8b37392a3/ovnkube-controller/1.log" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.191704 4811 scope.go:117] "RemoveContainer" containerID="239f1575640cc017afbde44a6bc3d241facb1b9b7588e9c3902064b0192e9e97" Jan 22 09:06:28 crc kubenswrapper[4811]: E0122 09:06:28.191855 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-274vf_openshift-ovn-kubernetes(1cd0f0db-de53-47c0-9b45-2ce8b37392a3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.214417 4811 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f776b1bc70646779cf35092ded02c00aa3b990c873de09d21e1f0d76258b0b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:28Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.225922 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.225959 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.225969 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.225990 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.226004 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:28Z","lastTransitionTime":"2026-01-22T09:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.255782 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:28Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.295101 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:28Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.327734 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.327774 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.327786 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.327801 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.327814 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:28Z","lastTransitionTime":"2026-01-22T09:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.333454 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xhhs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42dc1a6-d0c4-43e4-b9d9-b40c1f910400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64d7f1d53f9a3d9616b76ea12902134c5e0a3a30c9a61113675074caae70d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcfzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xhhs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:28Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.376191 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9g4j8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://279ca4748f2c8229a6647c23a414668a7612c0c863b52ef33c5b3dd026de74ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9g4j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:28Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.413250 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84068a6b-e189-419b-87f5-f31428f6eafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde8418c9f22553519ec0a9ba1ddcdd553f64aafb44fa002e924499701236bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbb067d04c1683ffd06d01aab41bb5988fdca44b8bd16012d35c42795d0d8011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-txvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:28Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.429879 4811 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.429910 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.429921 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.429933 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.429943 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:28Z","lastTransitionTime":"2026-01-22T09:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.455933 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b44c40-e6d4-4902-98e9-a259269d8bf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 09:06:13.075349 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 09:06:13.075574 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:06:13.077171 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744166093/tls.crt::/tmp/serving-cert-1744166093/tls.key\\\\\\\"\\\\nI0122 09:06:13.324400 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:06:13.330700 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:06:13.330724 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:06:13.330758 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:06:13.330771 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:06:13.337838 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:06:13.337866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:06:13.337879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:06:13.337883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 
09:06:13.337886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:06:13.338151 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:06:13.339610 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:28Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.494913 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5dbe63-08fa-4157-9c52-003944e50d7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3bf6cdac4751814ecd71637d9e80cbdda1c0e99714275fe77cfa858b3823b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4419625f66ae957b2d9b146fc8b085a4a95c3f47860d6b4847f8fe71a6c4c86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7830976d23e073a099c3c6af1fa2392d9504ad9d41ddf2ee63c16632c22f833a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45fa5d37dcd1dcae1a7488bb855e4bd7c7dc39a1199fe80b82f83604be089f2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:28Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.532551 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.532595 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.532607 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.532620 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.532659 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:28Z","lastTransitionTime":"2026-01-22T09:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.534814 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a14bfcc7d6d2ab5e1ad0c45999b2e73332e750ed65d9299e188505dafb26d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:28Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.575260 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae2e22607dec646b15563dc62f66655e7a5c2b7ad809406c907e5f687f80105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e290e203fab8f291146855a960477ceb9c33d965d15b4722c668dc2b3fded3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:28Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.619291 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://239f1575640cc017afbde44a6bc3d241facb1b9b7588e9c3902064b0192e9e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://239f1575640cc017afbde44a6bc3d241facb1b9b7588e9c3902064b0192e9e97\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"message\\\":\\\"k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:06:26.804316 6184 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:06:26.804420 6184 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 09:06:26.804469 6184 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 09:06:26.804791 6184 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:06:26.805266 6184 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:06:26.805478 6184 factory.go:656] Stopping watch factory\\\\nI0122 09:06:26.805690 6184 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:06:26.827772 6184 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0122 09:06:26.827839 6184 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0122 09:06:26.827915 6184 ovnkube.go:599] Stopped ovnkube\\\\nI0122 09:06:26.827965 6184 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0122 09:06:26.828033 6184 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-274vf_openshift-ovn-kubernetes(1cd0f0db-de53-47c0-9b45-2ce8b37392a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-274vf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:28Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.634546 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.634577 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.634592 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.634609 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.634619 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:28Z","lastTransitionTime":"2026-01-22T09:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.652465 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n2kj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66a7f3f9-ab88-4b1a-a28a-900480fa9651\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3a88defded9a5435941e2608c8a29baee840bc0ce48454a3731f344299347cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47mj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n2kj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:28Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.696202 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f776b1bc70646779cf35092ded02c00aa3b990c873de09d21e1f0d76258b0b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:28Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.734948 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:28Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.736549 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.736585 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.736594 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.736608 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.736636 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:28Z","lastTransitionTime":"2026-01-22T09:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.774879 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfqgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2555861-d1bb-4f21-be4a-165ed9212932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ececfc4b2e4f2a120e7a3307235d606db00733e1a5631b6e19b5312afc8e8ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jt7cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfqgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:28Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.819145 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1d28a-4ee8-45bb-9a86-3687d8db2105\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684f8ca14b2eabcbf9280085d495af9a6f1fa76fb187f15b8fa5659178efdc93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64382d1748f1eaeb9f2f09d243b35c56f9fd70b9a9765ddb64cb0c4866ed493b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://590bb419e3791ec3a267b9a5aa4022ee74ba5e302b047e85211563e135c4cf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269
019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b14bb77b0cf24bacdb8f5edae5d02ccea019b12feb315e6d6c4887feb1a9354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d10794bea5b8ea94b9247775e0c424fa9d481267c663d4bcae16eb4e7eb729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"
,\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:28Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.838480 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.838522 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.838534 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.838549 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.838560 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:28Z","lastTransitionTime":"2026-01-22T09:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.940358 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.940395 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.940405 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.940419 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.940436 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:28Z","lastTransitionTime":"2026-01-22T09:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:28 crc kubenswrapper[4811]: I0122 09:06:28.982512 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 08:25:16.026479475 +0000 UTC Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.042042 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.042071 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.042080 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.042091 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.042100 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:29Z","lastTransitionTime":"2026-01-22T09:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.144106 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.144140 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.144149 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.144161 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.144171 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:29Z","lastTransitionTime":"2026-01-22T09:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.193761 4811 scope.go:117] "RemoveContainer" containerID="239f1575640cc017afbde44a6bc3d241facb1b9b7588e9c3902064b0192e9e97" Jan 22 09:06:29 crc kubenswrapper[4811]: E0122 09:06:29.193894 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-274vf_openshift-ovn-kubernetes(1cd0f0db-de53-47c0-9b45-2ce8b37392a3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.245482 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.245525 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.245535 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.245549 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.245557 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:29Z","lastTransitionTime":"2026-01-22T09:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.347224 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.347255 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.347264 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.347275 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.347288 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:29Z","lastTransitionTime":"2026-01-22T09:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.448804 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.448839 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.448848 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.448866 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.448876 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:29Z","lastTransitionTime":"2026-01-22T09:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.550962 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.550998 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.551008 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.551025 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.551035 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:29Z","lastTransitionTime":"2026-01-22T09:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.653161 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.653203 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.653215 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.653230 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.653242 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:29Z","lastTransitionTime":"2026-01-22T09:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.655472 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.655511 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:06:29 crc kubenswrapper[4811]: E0122 09:06:29.655669 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 09:06:29 crc kubenswrapper[4811]: E0122 09:06:29.655696 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 09:06:29 crc kubenswrapper[4811]: E0122 09:06:29.655672 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 09:06:29 crc kubenswrapper[4811]: E0122 09:06:29.655718 4811 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:06:29 crc kubenswrapper[4811]: E0122 09:06:29.655728 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 09:06:29 crc kubenswrapper[4811]: E0122 09:06:29.655739 4811 projected.go:194] Error preparing data for projected volume 
kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:06:29 crc kubenswrapper[4811]: E0122 09:06:29.655769 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 09:06:45.65575168 +0000 UTC m=+49.977938793 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:06:29 crc kubenswrapper[4811]: E0122 09:06:29.655784 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 09:06:45.65577858 +0000 UTC m=+49.977965704 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.755390 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.755430 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.755440 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.755455 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.755467 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:29Z","lastTransitionTime":"2026-01-22T09:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.856971 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:06:29 crc kubenswrapper[4811]: E0122 09:06:29.857098 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:06:45.857083847 +0000 UTC m=+50.179270970 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.857140 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.857240 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:06:29 crc kubenswrapper[4811]: E0122 09:06:29.857282 4811 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 09:06:29 crc kubenswrapper[4811]: E0122 09:06:29.857329 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 09:06:45.857318048 +0000 UTC m=+50.179505171 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 09:06:29 crc kubenswrapper[4811]: E0122 09:06:29.857618 4811 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 09:06:29 crc kubenswrapper[4811]: E0122 09:06:29.857727 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 09:06:45.857695921 +0000 UTC m=+50.179883044 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.857773 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.857816 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.857825 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.857839 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.857848 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:29Z","lastTransitionTime":"2026-01-22T09:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.959501 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.959525 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.959533 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.959544 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.959553 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:29Z","lastTransitionTime":"2026-01-22T09:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.983304 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 10:56:57.685332909 +0000 UTC Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.991597 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.991651 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:06:29 crc kubenswrapper[4811]: I0122 09:06:29.991686 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:06:29 crc kubenswrapper[4811]: E0122 09:06:29.991736 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:06:29 crc kubenswrapper[4811]: E0122 09:06:29.991822 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:06:29 crc kubenswrapper[4811]: E0122 09:06:29.991898 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.060763 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.060817 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.060828 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.060843 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.060853 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:30Z","lastTransitionTime":"2026-01-22T09:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.162605 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.162678 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.162700 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.162735 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.162749 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:30Z","lastTransitionTime":"2026-01-22T09:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.265155 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.265345 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.265412 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.265697 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.265775 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:30Z","lastTransitionTime":"2026-01-22T09:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.330823 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gwnx7"] Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.331452 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gwnx7" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.333139 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.333552 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.342105 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f776b1bc70646779cf35092ded02c00aa3b990c873de09d21e1f0d76258b0b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:30Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.350543 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gwnx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"153efd1a-2c09-4c49-94e1-2307bbf1e659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nklsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nklsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gwnx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:30Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.363536 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1d28a-4ee8-45bb-9a86-3687d8db2105\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684f8ca14b2eabcbf9280085d495af9a6f1fa76fb187f15b8fa5659178efdc93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64382d1748f1eaeb9f2f09d243b35c56f9fd70b9a9765ddb64cb0c4866ed493b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://590bb419e3791ec3a267b9a5aa4022ee74ba5e302b047e85211563e135c4cf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b14bb77b0cf24bacdb8f5edae5d02ccea019b1
2feb315e6d6c4887feb1a9354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d10794bea5b8ea94b9247775e0c424fa9d481267c663d4bcae16eb4e7eb729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:30Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.367469 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.367508 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.367520 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.367538 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.367549 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:30Z","lastTransitionTime":"2026-01-22T09:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.371890 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:30Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.380400 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfqgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2555861-d1bb-4f21-be4a-165ed9212932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ececfc4b2e4f2a120e7a3307235d606db00733e1a5631b6e19b5312afc8e8ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jt7cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfqgt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:30Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.389087 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b44c40-e6d4-4902-98e9-a259269d8bf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 09:06:13.075349 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 09:06:13.075574 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:06:13.077171 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744166093/tls.crt::/tmp/serving-cert-1744166093/tls.key\\\\\\\"\\\\nI0122 09:06:13.324400 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:06:13.330700 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:06:13.330724 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:06:13.330758 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:06:13.330771 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:06:13.337838 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:06:13.337866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:06:13.337879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:06:13.337883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:06:13.337886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:06:13.338151 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:06:13.339610 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:30Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.396998 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:30Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.405773 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:30Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.412514 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xhhs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42dc1a6-d0c4-43e4-b9d9-b40c1f910400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64d7f1d53f9a3d9616b76ea12902134c5e0a3a30c9a61113675074caae70d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcfzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xhhs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T09:06:30Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.422388 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9g4j8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://279ca4748f2c8229a6647c23a414668a7612c0c863b52ef33c5b3dd026de74ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbf
dab3fdc5c2386c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-
copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9g4j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:30Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.429871 4811 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84068a6b-e189-419b-87f5-f31428f6eafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde8418c9f22553519ec0a9ba1ddcdd553f64aafb44fa002e924499701236bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbb067d04c1683ffd06d01aab41bb5988fdca44b8bd16012d35c42795d0d8011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-txvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:30Z is after 2025-08-24T17:21:41Z" Jan 
22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.438758 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5dbe63-08fa-4157-9c52-003944e50d7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3bf6cdac4751814ecd71637d9e80cbdda1c0e99714275fe77cfa858b3823b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4419625f66ae957b2d9b146fc8b085a4a95c3f47860d6b4847f8fe71a6c4c86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7830976d23e073a099c3c6af1fa2392d9504ad9d41ddf2ee63c16632c22f833a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs
\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45fa5d37dcd1dcae1a7488bb855e4bd7c7dc39a1199fe80b82f83604be089f2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:30Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.447340 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a14bfcc7d6d2ab5e1ad0c45999b2e73332e750ed65d9299e188505dafb26d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:30Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.457060 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae2e22607dec646b15563dc62f66655e7a5c2b7ad809406c907e5f687f80105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e290e203fab8f291146855a960477ceb9c33d965d15b4722c668dc2b3fded3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:30Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.462858 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nklsw\" (UniqueName: \"kubernetes.io/projected/153efd1a-2c09-4c49-94e1-2307bbf1e659-kube-api-access-nklsw\") pod \"ovnkube-control-plane-749d76644c-gwnx7\" (UID: \"153efd1a-2c09-4c49-94e1-2307bbf1e659\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gwnx7" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.462915 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/153efd1a-2c09-4c49-94e1-2307bbf1e659-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-gwnx7\" (UID: \"153efd1a-2c09-4c49-94e1-2307bbf1e659\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gwnx7" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.462966 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/153efd1a-2c09-4c49-94e1-2307bbf1e659-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-gwnx7\" (UID: \"153efd1a-2c09-4c49-94e1-2307bbf1e659\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gwnx7" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.463006 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/153efd1a-2c09-4c49-94e1-2307bbf1e659-env-overrides\") pod \"ovnkube-control-plane-749d76644c-gwnx7\" (UID: \"153efd1a-2c09-4c49-94e1-2307bbf1e659\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gwnx7" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.469115 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.469218 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.469294 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.469370 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.469432 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:30Z","lastTransitionTime":"2026-01-22T09:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.471135 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://239f1575640cc017afbde44a6bc3d241facb1b9b7588e9c3902064b0192e9e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://239f1575640cc017afbde44a6bc3d241facb1b9b7588e9c3902064b0192e9e97\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"message\\\":\\\"k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:06:26.804316 6184 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:06:26.804420 6184 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 09:06:26.804469 6184 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 09:06:26.804791 6184 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:06:26.805266 6184 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:06:26.805478 6184 factory.go:656] Stopping watch factory\\\\nI0122 09:06:26.805690 6184 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:06:26.827772 6184 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0122 09:06:26.827839 6184 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0122 09:06:26.827915 6184 ovnkube.go:599] Stopped ovnkube\\\\nI0122 09:06:26.827965 6184 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0122 09:06:26.828033 6184 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-274vf_openshift-ovn-kubernetes(1cd0f0db-de53-47c0-9b45-2ce8b37392a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-274vf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:30Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.478170 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n2kj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66a7f3f9-ab88-4b1a-a28a-900480fa9651\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3a88defded9a5435941e2608c8a29baee840bc0ce48454a3731f344299347cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47mj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n2kj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:30Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.564044 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nklsw\" (UniqueName: \"kubernetes.io/projected/153efd1a-2c09-4c49-94e1-2307bbf1e659-kube-api-access-nklsw\") pod \"ovnkube-control-plane-749d76644c-gwnx7\" (UID: \"153efd1a-2c09-4c49-94e1-2307bbf1e659\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gwnx7" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.564095 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/153efd1a-2c09-4c49-94e1-2307bbf1e659-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-gwnx7\" (UID: \"153efd1a-2c09-4c49-94e1-2307bbf1e659\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gwnx7" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.564136 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/153efd1a-2c09-4c49-94e1-2307bbf1e659-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-gwnx7\" (UID: \"153efd1a-2c09-4c49-94e1-2307bbf1e659\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gwnx7" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.564165 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/153efd1a-2c09-4c49-94e1-2307bbf1e659-env-overrides\") pod \"ovnkube-control-plane-749d76644c-gwnx7\" (UID: \"153efd1a-2c09-4c49-94e1-2307bbf1e659\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gwnx7" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.564740 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/153efd1a-2c09-4c49-94e1-2307bbf1e659-env-overrides\") pod \"ovnkube-control-plane-749d76644c-gwnx7\" (UID: \"153efd1a-2c09-4c49-94e1-2307bbf1e659\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gwnx7" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.565445 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/153efd1a-2c09-4c49-94e1-2307bbf1e659-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-gwnx7\" (UID: \"153efd1a-2c09-4c49-94e1-2307bbf1e659\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gwnx7" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.571937 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.571967 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.571976 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.571994 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.572009 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:30Z","lastTransitionTime":"2026-01-22T09:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.572608 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/153efd1a-2c09-4c49-94e1-2307bbf1e659-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-gwnx7\" (UID: \"153efd1a-2c09-4c49-94e1-2307bbf1e659\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gwnx7" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.577428 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nklsw\" (UniqueName: \"kubernetes.io/projected/153efd1a-2c09-4c49-94e1-2307bbf1e659-kube-api-access-nklsw\") pod \"ovnkube-control-plane-749d76644c-gwnx7\" (UID: \"153efd1a-2c09-4c49-94e1-2307bbf1e659\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gwnx7" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.642194 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gwnx7" Jan 22 09:06:30 crc kubenswrapper[4811]: W0122 09:06:30.655685 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod153efd1a_2c09_4c49_94e1_2307bbf1e659.slice/crio-f627f9584599b8fcd5f815695989deedbd4fdd71b32bcb097a817360205e6de0 WatchSource:0}: Error finding container f627f9584599b8fcd5f815695989deedbd4fdd71b32bcb097a817360205e6de0: Status 404 returned error can't find the container with id f627f9584599b8fcd5f815695989deedbd4fdd71b32bcb097a817360205e6de0 Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.676006 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.676049 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.676061 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.676074 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.676085 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:30Z","lastTransitionTime":"2026-01-22T09:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.778388 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.778432 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.778442 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.778459 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.778471 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:30Z","lastTransitionTime":"2026-01-22T09:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.879907 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.880235 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.880246 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.880263 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.880276 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:30Z","lastTransitionTime":"2026-01-22T09:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.983038 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.983068 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.983078 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.983093 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.983102 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:30Z","lastTransitionTime":"2026-01-22T09:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:30 crc kubenswrapper[4811]: I0122 09:06:30.983408 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 02:13:38.626471744 +0000 UTC Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.084951 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.084977 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.084986 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.084996 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.085025 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:31Z","lastTransitionTime":"2026-01-22T09:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.186976 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.187012 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.187021 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.187036 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.187046 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:31Z","lastTransitionTime":"2026-01-22T09:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.200766 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gwnx7" event={"ID":"153efd1a-2c09-4c49-94e1-2307bbf1e659","Type":"ContainerStarted","Data":"ddce971b4fbeb72e1a32c0adab8e46cb8bce2ff9736cd875f63e105bf39dcc27"} Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.200813 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gwnx7" event={"ID":"153efd1a-2c09-4c49-94e1-2307bbf1e659","Type":"ContainerStarted","Data":"71a7748f45c9a14885785ca825c32a5036a887c7a54b36cadd1a89e57608111f"} Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.200836 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gwnx7" event={"ID":"153efd1a-2c09-4c49-94e1-2307bbf1e659","Type":"ContainerStarted","Data":"f627f9584599b8fcd5f815695989deedbd4fdd71b32bcb097a817360205e6de0"} Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.212321 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5dbe63-08fa-4157-9c52-003944e50d7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3bf6cdac4751814ecd71637d9e80cbdda1c0e99714275fe77cfa858b3823b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4419625f66ae957b2d9b146fc8b085a4a95c3f47860d6b4847f8fe71a6c4c86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22
T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7830976d23e073a099c3c6af1fa2392d9504ad9d41ddf2ee63c16632c22f833a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45fa5d37dcd1dcae1a7488bb855e4bd7c7dc39a1199fe80b82f83604be089f2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:31Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.222225 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a14bfcc7d6d2ab5e1ad0c45999b2e73332e750ed65d9299e188505dafb26d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:31Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.231146 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae2e22607dec646b15563dc62f66655e7a5c2b7ad809406c907e5f687f80105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e290e203fab8f291146855a960477ceb9c33d965d15b4722c668dc2b3fded3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:31Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.243852 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://239f1575640cc017afbde44a6bc3d241facb1b9b7588e9c3902064b0192e9e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://239f1575640cc017afbde44a6bc3d241facb1b9b7588e9c3902064b0192e9e97\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"message\\\":\\\"k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:06:26.804316 6184 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:06:26.804420 6184 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 09:06:26.804469 6184 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 09:06:26.804791 6184 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:06:26.805266 6184 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:06:26.805478 6184 factory.go:656] Stopping watch factory\\\\nI0122 09:06:26.805690 6184 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:06:26.827772 6184 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0122 09:06:26.827839 6184 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0122 09:06:26.827915 6184 ovnkube.go:599] Stopped ovnkube\\\\nI0122 09:06:26.827965 6184 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0122 09:06:26.828033 6184 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-274vf_openshift-ovn-kubernetes(1cd0f0db-de53-47c0-9b45-2ce8b37392a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-274vf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:31Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.251919 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n2kj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66a7f3f9-ab88-4b1a-a28a-900480fa9651\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3a88defded9a5435941e2608c8a29baee840bc0ce48454a3731f344299347cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47mj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n2kj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:31Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.259963 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f776b1bc70646779cf35092ded02c00aa3b990c873de09d21e1f0d76258b0b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:31Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.267870 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gwnx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"153efd1a-2c09-4c49-94e1-2307bbf1e659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71a7748f45c9a14885785ca825c32a5036a887c7a54b36cadd1a89e57608111f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nklsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce971b4fbeb72e1a32c0adab8e46cb8bce2ff9736cd875f63e105bf39dcc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nklsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gwnx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:31Z is after 2025-08-24T17:21:41Z" Jan 22 
09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.281775 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1d28a-4ee8-45bb-9a86-3687d8db2105\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684f8ca14b2eabcbf9280085d495af9a6f1fa76fb187f15b8fa5659178efdc93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64382d1748f1eaeb9f2f09d243b35c56f9fd70b9a9765ddb64cb0c4866ed493b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://590bb419e3791ec3a267b9a5aa4022ee74ba5e302b047e85211563e135c4cf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b14bb77b0cf24bacdb8f5edae5d02ccea019b12feb315e6d6c4887feb1a9354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d10794bea5b8ea94b9247775e0c424fa9d481267c663d4bcae16eb4e7eb729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:31Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.289020 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.289049 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.289059 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.289074 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.289084 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:31Z","lastTransitionTime":"2026-01-22T09:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.290898 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:31Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.300250 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfqgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2555861-d1bb-4f21-be4a-165ed9212932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ececfc4b2e4f2a120e7a3307235d606db00733e1a5631b6e19b5312afc8e8ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jt7cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfqgt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:31Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.310693 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b44c40-e6d4-4902-98e9-a259269d8bf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 09:06:13.075349 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 09:06:13.075574 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:06:13.077171 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744166093/tls.crt::/tmp/serving-cert-1744166093/tls.key\\\\\\\"\\\\nI0122 09:06:13.324400 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:06:13.330700 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:06:13.330724 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:06:13.330758 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:06:13.330771 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:06:13.337838 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:06:13.337866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:06:13.337879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:06:13.337883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:06:13.337886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:06:13.338151 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:06:13.339610 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:31Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.320584 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:31Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.329720 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:31Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.336455 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xhhs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42dc1a6-d0c4-43e4-b9d9-b40c1f910400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64d7f1d53f9a3d9616b76ea12902134c5e0a3a30c9a61113675074caae70d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcfzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xhhs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T09:06:31Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.346699 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9g4j8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://279ca4748f2c8229a6647c23a414668a7612c0c863b52ef33c5b3dd026de74ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbf
dab3fdc5c2386c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-
copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9g4j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:31Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.354648 4811 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84068a6b-e189-419b-87f5-f31428f6eafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde8418c9f22553519ec0a9ba1ddcdd553f64aafb44fa002e924499701236bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbb067d04c1683ffd06d01aab41bb5988fdca44b8bd16012d35c42795d0d8011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-txvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:31Z is after 2025-08-24T17:21:41Z" Jan 
22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.390936 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.391058 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.391120 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.391197 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.391253 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:31Z","lastTransitionTime":"2026-01-22T09:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.407231 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-bhj4l"] Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.407753 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:06:31 crc kubenswrapper[4811]: E0122 09:06:31.407812 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.417245 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gwnx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"153efd1a-2c09-4c49-94e1-2307bbf1e659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71a7748f45c9a14885785ca825c32a5036a887c7a54b36cadd1a89e57608111f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nklsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce971b4fbeb72e1a32c0adab8e46cb8bce2ff9736cd875f63e105bf39dcc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nklsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gwnx7\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:31Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.426376 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f776b1bc70646779cf35092ded02c00aa3b990c873de09d21e1f0d76258b0b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:31Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.436840 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfqgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2555861-d1bb-4f21-be4a-165ed9212932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ececfc4b2e4f2a120e7a3307235d606db00733e1a5631b6e19b5312afc8e8ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jt7cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfqgt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:31Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.449853 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1d28a-4ee8-45bb-9a86-3687d8db2105\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684f8ca14b2eabcbf9280085d495af9a6f1fa76fb187f15b8fa5659178efdc93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64382d1748f1eaeb9f2f09d243b35c56f9fd70b9a9765ddb64cb0c4866ed493b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://590bb419e3791ec3a267b9a5aa4022ee74ba5e302b047e85211563e135c4cf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b14bb77b0cf24bacdb8f5edae5d02ccea019b12feb315e6d6c4887feb1a9354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d10794bea5b8ea94b9247775e0c424fa9d481267c663d4bcae16eb4e7eb729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:31Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.458321 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:31Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.466283 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:31Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.472329 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de4b38a0-0c7a-4693-9f92-40fefd6bc9b4-metrics-certs\") pod \"network-metrics-daemon-bhj4l\" (UID: \"de4b38a0-0c7a-4693-9f92-40fefd6bc9b4\") " pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.472387 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mphcr\" (UniqueName: \"kubernetes.io/projected/de4b38a0-0c7a-4693-9f92-40fefd6bc9b4-kube-api-access-mphcr\") pod \"network-metrics-daemon-bhj4l\" (UID: \"de4b38a0-0c7a-4693-9f92-40fefd6bc9b4\") " pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.473136 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xhhs7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42dc1a6-d0c4-43e4-b9d9-b40c1f910400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64d7f1d53f9a3d9616b76ea12902134c5e0a3a30c9a61113675074caae70d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcfzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xhhs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:31Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.483189 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9g4j8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://279ca4748f2c8229a6647c23a414668a7612c0c863b52ef33c5b3dd026de74ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9g4j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:31Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.491018 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84068a6b-e189-419b-87f5-f31428f6eafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde8418c9f22553519ec0a9ba1ddcdd553f64aafb44fa002e924499701236bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbb067d04c1683ffd06d01aab41bb5988fdca44b8bd16012d35c42795d0d8011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-txvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:31Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.493599 4811 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.493651 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.493662 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.493676 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.493687 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:31Z","lastTransitionTime":"2026-01-22T09:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.499404 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bhj4l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4b38a0-0c7a-4693-9f92-40fefd6bc9b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mphcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mphcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bhj4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:31Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.513939 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b44c40-e6d4-4902-98e9-a259269d8bf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 09:06:13.075349 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 09:06:13.075574 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:06:13.077171 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744166093/tls.crt::/tmp/serving-cert-1744166093/tls.key\\\\\\\"\\\\nI0122 09:06:13.324400 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:06:13.330700 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:06:13.330724 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:06:13.330758 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:06:13.330771 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:06:13.337838 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:06:13.337866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:06:13.337879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:06:13.337883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:06:13.337886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:06:13.338151 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:06:13.339610 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:31Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.522741 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:31Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.532142 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a14bfcc7d6d2ab5e1ad0c45999b2e73332e750ed65d9299e188505dafb26d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:31Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.541144 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae2e22607dec646b15563dc62f66655e7a5c2b7ad809406c907e5f687f80105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e290e203fab8f291146855a960477ceb9c33d965d15b4722c668dc2b3fded3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:31Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.553772 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://239f1575640cc017afbde44a6bc3d241facb1b9b7588e9c3902064b0192e9e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://239f1575640cc017afbde44a6bc3d241facb1b9b7588e9c3902064b0192e9e97\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"message\\\":\\\"k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:06:26.804316 6184 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:06:26.804420 6184 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 09:06:26.804469 6184 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 09:06:26.804791 6184 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:06:26.805266 6184 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:06:26.805478 6184 factory.go:656] Stopping watch factory\\\\nI0122 09:06:26.805690 6184 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:06:26.827772 6184 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0122 09:06:26.827839 6184 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0122 09:06:26.827915 6184 ovnkube.go:599] Stopped ovnkube\\\\nI0122 09:06:26.827965 6184 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0122 09:06:26.828033 6184 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-274vf_openshift-ovn-kubernetes(1cd0f0db-de53-47c0-9b45-2ce8b37392a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-274vf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:31Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.560411 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n2kj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66a7f3f9-ab88-4b1a-a28a-900480fa9651\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3a88defded9a5435941e2608c8a29baee840bc0ce48454a3731f344299347cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47mj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n2kj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:31Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.568577 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5dbe63-08fa-4157-9c52-003944e50d7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3bf6cdac4751814ecd71637d9e80cbdda1c0e99714275fe77cfa858b3823b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4419625f66ae957b2d9b146fc8b085a4a95c3f47860d6b4847f8fe71a6c4c86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7830976d23e073a099c3c6af1fa2392d9504ad9d41ddf2ee63c16632c22f833a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-
manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45fa5d37dcd1dcae1a7488bb855e4bd7c7dc39a1199fe80b82f83604be089f2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:31Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.573002 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de4b38a0-0c7a-4693-9f92-40fefd6bc9b4-metrics-certs\") pod \"network-metrics-daemon-bhj4l\" (UID: \"de4b38a0-0c7a-4693-9f92-40fefd6bc9b4\") " pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.573123 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mphcr\" (UniqueName: \"kubernetes.io/projected/de4b38a0-0c7a-4693-9f92-40fefd6bc9b4-kube-api-access-mphcr\") pod \"network-metrics-daemon-bhj4l\" (UID: \"de4b38a0-0c7a-4693-9f92-40fefd6bc9b4\") " pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:06:31 crc kubenswrapper[4811]: E0122 09:06:31.573185 4811 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 09:06:31 crc kubenswrapper[4811]: E0122 09:06:31.573309 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de4b38a0-0c7a-4693-9f92-40fefd6bc9b4-metrics-certs podName:de4b38a0-0c7a-4693-9f92-40fefd6bc9b4 nodeName:}" failed. 
No retries permitted until 2026-01-22 09:06:32.073292383 +0000 UTC m=+36.395479507 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/de4b38a0-0c7a-4693-9f92-40fefd6bc9b4-metrics-certs") pod "network-metrics-daemon-bhj4l" (UID: "de4b38a0-0c7a-4693-9f92-40fefd6bc9b4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.588005 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mphcr\" (UniqueName: \"kubernetes.io/projected/de4b38a0-0c7a-4693-9f92-40fefd6bc9b4-kube-api-access-mphcr\") pod \"network-metrics-daemon-bhj4l\" (UID: \"de4b38a0-0c7a-4693-9f92-40fefd6bc9b4\") " pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.596076 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.596163 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.596221 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.596277 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.596339 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:31Z","lastTransitionTime":"2026-01-22T09:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.698730 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.698750 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.698759 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.698774 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.698783 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:31Z","lastTransitionTime":"2026-01-22T09:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.800497 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.800558 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.800567 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.800580 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.800590 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:31Z","lastTransitionTime":"2026-01-22T09:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.902428 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.902466 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.902475 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.902487 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.902498 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:31Z","lastTransitionTime":"2026-01-22T09:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.983905 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 11:24:44.432092655 +0000 UTC Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.991232 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:06:31 crc kubenswrapper[4811]: E0122 09:06:31.991332 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.991496 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:06:31 crc kubenswrapper[4811]: E0122 09:06:31.991677 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:06:31 crc kubenswrapper[4811]: I0122 09:06:31.991708 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:06:31 crc kubenswrapper[4811]: E0122 09:06:31.991919 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.004241 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.004297 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.004307 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.004318 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.004327 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:32Z","lastTransitionTime":"2026-01-22T09:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.076279 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de4b38a0-0c7a-4693-9f92-40fefd6bc9b4-metrics-certs\") pod \"network-metrics-daemon-bhj4l\" (UID: \"de4b38a0-0c7a-4693-9f92-40fefd6bc9b4\") " pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:06:32 crc kubenswrapper[4811]: E0122 09:06:32.076441 4811 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 09:06:32 crc kubenswrapper[4811]: E0122 09:06:32.076498 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de4b38a0-0c7a-4693-9f92-40fefd6bc9b4-metrics-certs podName:de4b38a0-0c7a-4693-9f92-40fefd6bc9b4 nodeName:}" failed. No retries permitted until 2026-01-22 09:06:33.07648521 +0000 UTC m=+37.398672333 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/de4b38a0-0c7a-4693-9f92-40fefd6bc9b4-metrics-certs") pod "network-metrics-daemon-bhj4l" (UID: "de4b38a0-0c7a-4693-9f92-40fefd6bc9b4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.105607 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.105664 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.105675 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.105691 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.105702 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:32Z","lastTransitionTime":"2026-01-22T09:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.207589 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.207636 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.207647 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.207661 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.207672 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:32Z","lastTransitionTime":"2026-01-22T09:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.309653 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.309683 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.309693 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.309706 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.309723 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:32Z","lastTransitionTime":"2026-01-22T09:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.411662 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.411692 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.411701 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.411726 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.411739 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:32Z","lastTransitionTime":"2026-01-22T09:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.513504 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.513551 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.513561 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.513573 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.513580 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:32Z","lastTransitionTime":"2026-01-22T09:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.615984 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.616011 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.616021 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.616034 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.616045 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:32Z","lastTransitionTime":"2026-01-22T09:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.718347 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.718471 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.718556 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.718648 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.718727 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:32Z","lastTransitionTime":"2026-01-22T09:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.820137 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.820167 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.820179 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.820191 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.820200 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:32Z","lastTransitionTime":"2026-01-22T09:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.921731 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.921828 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.921886 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.921940 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.921987 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:32Z","lastTransitionTime":"2026-01-22T09:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.984681 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 10:02:12.735771392 +0000 UTC Jan 22 09:06:32 crc kubenswrapper[4811]: I0122 09:06:32.991936 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:06:32 crc kubenswrapper[4811]: E0122 09:06:32.992046 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4" Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.023259 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.023293 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.023303 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.023316 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.023327 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:33Z","lastTransitionTime":"2026-01-22T09:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.083498 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de4b38a0-0c7a-4693-9f92-40fefd6bc9b4-metrics-certs\") pod \"network-metrics-daemon-bhj4l\" (UID: \"de4b38a0-0c7a-4693-9f92-40fefd6bc9b4\") " pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:06:33 crc kubenswrapper[4811]: E0122 09:06:33.083619 4811 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 09:06:33 crc kubenswrapper[4811]: E0122 09:06:33.083690 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de4b38a0-0c7a-4693-9f92-40fefd6bc9b4-metrics-certs podName:de4b38a0-0c7a-4693-9f92-40fefd6bc9b4 nodeName:}" failed. No retries permitted until 2026-01-22 09:06:35.083674109 +0000 UTC m=+39.405861242 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/de4b38a0-0c7a-4693-9f92-40fefd6bc9b4-metrics-certs") pod "network-metrics-daemon-bhj4l" (UID: "de4b38a0-0c7a-4693-9f92-40fefd6bc9b4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.125475 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.125931 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.125999 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.126069 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.126126 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:33Z","lastTransitionTime":"2026-01-22T09:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.228200 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.228233 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.228244 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.228260 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.228269 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:33Z","lastTransitionTime":"2026-01-22T09:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.330367 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.330425 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.330444 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.330469 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.330483 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:33Z","lastTransitionTime":"2026-01-22T09:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.432613 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.432655 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.432665 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.432679 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.432688 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:33Z","lastTransitionTime":"2026-01-22T09:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.534281 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.534329 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.534339 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.534351 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.534359 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:33Z","lastTransitionTime":"2026-01-22T09:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.635885 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.635914 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.635923 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.635950 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.635959 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:33Z","lastTransitionTime":"2026-01-22T09:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.737311 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.737358 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.737369 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.737379 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.737385 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:33Z","lastTransitionTime":"2026-01-22T09:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.839180 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.839218 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.839229 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.839243 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.839252 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:33Z","lastTransitionTime":"2026-01-22T09:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.941284 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.941318 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.941328 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.941342 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.941353 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:33Z","lastTransitionTime":"2026-01-22T09:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.985823 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 07:13:47.820084407 +0000 UTC Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.991606 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.991645 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:06:33 crc kubenswrapper[4811]: E0122 09:06:33.991891 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:06:33 crc kubenswrapper[4811]: E0122 09:06:33.991802 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:06:33 crc kubenswrapper[4811]: I0122 09:06:33.991645 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:06:33 crc kubenswrapper[4811]: E0122 09:06:33.991961 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.042713 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.042751 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.042760 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.042772 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.042781 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:34Z","lastTransitionTime":"2026-01-22T09:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.144615 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.145040 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.145111 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.145182 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.145242 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:34Z","lastTransitionTime":"2026-01-22T09:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.246712 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.246754 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.246763 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.246773 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.246785 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:34Z","lastTransitionTime":"2026-01-22T09:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.349000 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.349024 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.349031 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.349040 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.349048 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:34Z","lastTransitionTime":"2026-01-22T09:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.451212 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.451251 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.451259 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.451270 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.451281 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:34Z","lastTransitionTime":"2026-01-22T09:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.553268 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.553303 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.553311 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.553324 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.553332 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:34Z","lastTransitionTime":"2026-01-22T09:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.655063 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.655099 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.655109 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.655123 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.655132 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:34Z","lastTransitionTime":"2026-01-22T09:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.756879 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.756909 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.756918 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.756954 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.756964 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:34Z","lastTransitionTime":"2026-01-22T09:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.858597 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.858644 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.858653 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.858665 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.858672 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:34Z","lastTransitionTime":"2026-01-22T09:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.961063 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.961105 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.961115 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.961129 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.961139 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:34Z","lastTransitionTime":"2026-01-22T09:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.986449 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 00:33:13.738768148 +0000 UTC Jan 22 09:06:34 crc kubenswrapper[4811]: I0122 09:06:34.991732 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:06:34 crc kubenswrapper[4811]: E0122 09:06:34.991821 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4" Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.062645 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.062676 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.062687 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.062702 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.062711 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:35Z","lastTransitionTime":"2026-01-22T09:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.100177 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de4b38a0-0c7a-4693-9f92-40fefd6bc9b4-metrics-certs\") pod \"network-metrics-daemon-bhj4l\" (UID: \"de4b38a0-0c7a-4693-9f92-40fefd6bc9b4\") " pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:06:35 crc kubenswrapper[4811]: E0122 09:06:35.100293 4811 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 09:06:35 crc kubenswrapper[4811]: E0122 09:06:35.100361 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de4b38a0-0c7a-4693-9f92-40fefd6bc9b4-metrics-certs podName:de4b38a0-0c7a-4693-9f92-40fefd6bc9b4 nodeName:}" failed. No retries permitted until 2026-01-22 09:06:39.100344078 +0000 UTC m=+43.422531211 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/de4b38a0-0c7a-4693-9f92-40fefd6bc9b4-metrics-certs") pod "network-metrics-daemon-bhj4l" (UID: "de4b38a0-0c7a-4693-9f92-40fefd6bc9b4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.164736 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.164766 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.164776 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.164789 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.164797 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:35Z","lastTransitionTime":"2026-01-22T09:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.266872 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.266899 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.266907 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.266917 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.266923 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:35Z","lastTransitionTime":"2026-01-22T09:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.368432 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.368463 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.368471 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.368481 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.368490 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:35Z","lastTransitionTime":"2026-01-22T09:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.470477 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.470513 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.470523 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.470536 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.470545 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:35Z","lastTransitionTime":"2026-01-22T09:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.572461 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.572490 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.572498 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.572508 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.572516 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:35Z","lastTransitionTime":"2026-01-22T09:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.674364 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.674422 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.674434 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.674449 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.674458 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:35Z","lastTransitionTime":"2026-01-22T09:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.777337 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.777365 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.777374 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.777387 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.777395 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:35Z","lastTransitionTime":"2026-01-22T09:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.879458 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.879504 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.879515 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.879527 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.879536 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:35Z","lastTransitionTime":"2026-01-22T09:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.981576 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.981612 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.981639 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.981651 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.981659 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:35Z","lastTransitionTime":"2026-01-22T09:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.986766 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 15:06:36.387014247 +0000 UTC Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.991052 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.991093 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:06:35 crc kubenswrapper[4811]: E0122 09:06:35.991158 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:06:35 crc kubenswrapper[4811]: E0122 09:06:35.991231 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:06:35 crc kubenswrapper[4811]: I0122 09:06:35.991360 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:06:35 crc kubenswrapper[4811]: E0122 09:06:35.991515 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.001026 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84068a6b-e189-419b-87f5-f31428f6eafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde8418c9f22553519ec0a9ba1ddcdd553f64aafb44fa002e924499701236bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbb067d04c1683ffd06d01aab41bb5988fdca44b8bd16012d35c42795d0d8011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-txvcq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:35Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.008532 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bhj4l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4b38a0-0c7a-4693-9f92-40fefd6bc9b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mphcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mphcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bhj4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:36Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:36 crc 
kubenswrapper[4811]: I0122 09:06:36.017239 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b44c40-e6d4-4902-98e9-a259269d8bf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"}]},{\\\"containerID\\\":\\\"cri-o://a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 09:06:13.075349 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 09:06:13.075574 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:06:13.077171 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744166093/tls.crt::/tmp/serving-cert-1744166093/tls.key\\\\\\\"\\\\nI0122 09:06:13.324400 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:06:13.330700 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:06:13.330724 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:06:13.330758 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:06:13.330771 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:06:13.337838 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:06:13.337866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:06:13.337879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:06:13.337883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:06:13.337886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:06:13.338151 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:06:13.339610 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:36Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.025046 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:36Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.033522 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:36Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.040305 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xhhs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42dc1a6-d0c4-43e4-b9d9-b40c1f910400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64d7f1d53f9a3d9616b76ea12902134c5e0a3a30c9a61113675074caae70d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcfzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xhhs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T09:06:36Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.049321 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9g4j8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://279ca4748f2c8229a6647c23a414668a7612c0c863b52ef33c5b3dd026de74ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbf
dab3fdc5c2386c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-
copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9g4j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:36Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.061186 4811 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-image-registry/node-ca-n2kj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66a7f3f9-ab88-4b1a-a28a-900480fa9651\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3a88defded9a5435941e2608c8a29baee840bc0ce48454a3731f344299347cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47mj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n2kj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:36Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.069147 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5dbe63-08fa-4157-9c52-003944e50d7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3bf6cdac4751814ecd71637d9e80cbdda1c0e99714275fe77cfa858b3823b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4419625f66ae957b2d9b146fc8b085a4a95c3f47860d6b4847f8fe71a6c4c86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7830976d23e073a099c3c6af1fa2392d9504ad9d41ddf2ee63c16632c22f833a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45fa5d37dcd1dcae1a7488bb855e4bd7c7dc39a1199fe80b82f83604be089f2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:36Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.077029 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a14bfcc7d6d2ab5e1ad0c45999b2e73332e750ed65d9299e188505dafb26d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:36Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.083545 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.083576 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.083586 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.083598 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.083606 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:36Z","lastTransitionTime":"2026-01-22T09:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.085281 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae2e22607dec646b15563dc62f66655e7a5c2b7ad809406c907e5f687f80105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e290e203fab8f291146855a960477ceb9c33d965d15b4722c668dc2b3fded3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa4
1ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:36Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.097100 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://239f1575640cc017afbde44a6bc3d241facb1b9b
7588e9c3902064b0192e9e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://239f1575640cc017afbde44a6bc3d241facb1b9b7588e9c3902064b0192e9e97\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"message\\\":\\\"k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:06:26.804316 6184 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:06:26.804420 6184 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 09:06:26.804469 6184 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 09:06:26.804791 6184 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:06:26.805266 6184 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:06:26.805478 6184 factory.go:656] Stopping watch factory\\\\nI0122 09:06:26.805690 6184 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:06:26.827772 6184 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0122 09:06:26.827839 6184 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0122 09:06:26.827915 6184 ovnkube.go:599] Stopped ovnkube\\\\nI0122 09:06:26.827965 6184 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0122 09:06:26.828033 6184 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-274vf_openshift-ovn-kubernetes(1cd0f0db-de53-47c0-9b45-2ce8b37392a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-274vf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:36Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.104666 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f776b1bc70646779cf35092ded02c00aa3b990c873de09d21e1f0d76258b0b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:36Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.111535 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gwnx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"153efd1a-2c09-4c49-94e1-2307bbf1e659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71a7748f45c9a14885785ca825c32a5036a887c7a54b36cadd1a89e57608111f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nklsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce971b4fbeb72e1a32c0adab8e46cb8bce2ff9736cd875f63e105bf39dcc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nklsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\"
:\\\"2026-01-22T09:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gwnx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:36Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.124039 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1d28a-4ee8-45bb-9a86-3687d8db2105\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684f8ca14b2eabcbf9280085d495af9a6f1fa76fb187f15b8fa5659178efdc93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64382d1748f1eaeb9f2f09d243b35c56f9fd70b9a9765ddb64cb0c4866ed493b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://590bb419e3791ec3a267b9a5aa4022ee74ba5e302b047e85211563e135c4cf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee78
66be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b14bb77b0cf24bacdb8f5edae5d02ccea019b12feb315e6d6c4887feb1a9354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d10794bea5b8ea94b9247775e0c424fa9d481267c663d4bcae16eb4e7eb729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:36Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.132353 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:36Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.140838 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfqgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2555861-d1bb-4f21-be4a-165ed9212932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ececfc4b2e4f2a120e7a3307235d606db00733e1a5631b6e19b5312afc8e8ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPat
h\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jt7cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfqgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:36Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.185821 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.185855 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.185866 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.185882 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.185892 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:36Z","lastTransitionTime":"2026-01-22T09:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.287948 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.287978 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.287986 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.288000 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.288009 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:36Z","lastTransitionTime":"2026-01-22T09:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.389603 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.389730 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.389796 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.389871 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.389925 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:36Z","lastTransitionTime":"2026-01-22T09:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.491433 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.491484 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.491495 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.491509 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.491520 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:36Z","lastTransitionTime":"2026-01-22T09:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.593807 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.593898 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.593913 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.593928 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.593937 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:36Z","lastTransitionTime":"2026-01-22T09:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.695961 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.695996 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.696003 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.696015 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.696022 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:36Z","lastTransitionTime":"2026-01-22T09:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.797750 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.797779 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.797789 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.797799 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.797826 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:36Z","lastTransitionTime":"2026-01-22T09:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.880289 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.880318 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.880326 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.880336 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.880343 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:36Z","lastTransitionTime":"2026-01-22T09:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:36 crc kubenswrapper[4811]: E0122 09:06:36.889089 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0066ca33-f035-4af3-9028-0da78d54d55e\\\",\\\"systemUUID\\\":\\\"463fdb35-dd0c-4804-a85e-31cd33c59ce4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:36Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.891283 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.891330 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.891339 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.891348 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.891354 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:36Z","lastTransitionTime":"2026-01-22T09:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:36 crc kubenswrapper[4811]: E0122 09:06:36.898764 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0066ca33-f035-4af3-9028-0da78d54d55e\\\",\\\"systemUUID\\\":\\\"463fdb35-dd0c-4804-a85e-31cd33c59ce4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:36Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.900592 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.900617 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.900642 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.900652 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.900659 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:36Z","lastTransitionTime":"2026-01-22T09:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:36 crc kubenswrapper[4811]: E0122 09:06:36.908167 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0066ca33-f035-4af3-9028-0da78d54d55e\\\",\\\"systemUUID\\\":\\\"463fdb35-dd0c-4804-a85e-31cd33c59ce4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:36Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.910237 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.910276 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.910284 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.910293 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.910300 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:36Z","lastTransitionTime":"2026-01-22T09:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:36 crc kubenswrapper[4811]: E0122 09:06:36.917993 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0066ca33-f035-4af3-9028-0da78d54d55e\\\",\\\"systemUUID\\\":\\\"463fdb35-dd0c-4804-a85e-31cd33c59ce4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:36Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.919944 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.919976 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
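Every "Error updating node status, will retry" entry above fails the same way: the kubelet's status PATCH for node "crc" is gated by the admission webhook node.network-node-identity.openshift.io at https://127.0.0.1:9743, and the serving certificate there expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-22T09:06:36Z, so the x509 check rejects the call before the patch is ever applied. The patched status itself (node conditions plus the image list) is byte-for-byte identical on each attempt; only the TLS failure matters. Below is a minimal Go sketch for confirming the expiry from the node itself. The address 127.0.0.1:9743 is taken from the log; that the webhook is still listening there is an assumption, and this is a one-off triage aid, not part of any OpenShift tooling.

    // certcheck.go - minimal sketch: dial the webhook endpoint seen in the
    // log above and print the served certificate's validity window.
    // Assumes it runs on the node and that 127.0.0.1:9743 is still listening
    // (both taken from the log; adjust as needed).
    package main

    import (
        "crypto/tls"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // InsecureSkipVerify lets us inspect the certificate even though
        // verification would fail - exactly the x509 error the kubelet hits.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            fmt.Fprintln(os.Stderr, "dial:", err)
            os.Exit(1)
        }
        defer conn.Close()

        now := time.Now()
        for _, cert := range conn.ConnectionState().PeerCertificates {
            fmt.Printf("subject=%q notBefore=%s notAfter=%s expired=%v\n",
                cert.Subject.CommonName,
                cert.NotBefore.Format(time.RFC3339),
                cert.NotAfter.Format(time.RFC3339),
                now.After(cert.NotAfter))
        }
    }

After a fixed number of these retries the kubelet gives up for the cycle, as the "update node status exceeds retry count" entry below shows, and the whole sequence restarts on the next status-update interval.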
event="NodeHasNoDiskPressure" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.919984 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.919997 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.920006 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:36Z","lastTransitionTime":"2026-01-22T09:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:36 crc kubenswrapper[4811]: E0122 09:06:36.927275 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0066ca33-f035-4af3-9028-0da78d54d55e\\\",\\\"systemUUID\\\":\\\"463fdb35-dd0c-4804-a85e-31cd33c59ce4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:36Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:36 crc kubenswrapper[4811]: E0122 09:06:36.927373 4811 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.928270 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.928299 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.928308 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.928318 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.928325 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:36Z","lastTransitionTime":"2026-01-22T09:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.987183 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 18:31:17.35462358 +0000 UTC Jan 22 09:06:36 crc kubenswrapper[4811]: I0122 09:06:36.991450 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:06:36 crc kubenswrapper[4811]: E0122 09:06:36.991614 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4" Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.029557 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.029585 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.029594 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.029607 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.029616 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:37Z","lastTransitionTime":"2026-01-22T09:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.131489 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.131527 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.131548 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.131562 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.131570 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:37Z","lastTransitionTime":"2026-01-22T09:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.233035 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.233086 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.233096 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.233107 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.233116 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:37Z","lastTransitionTime":"2026-01-22T09:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.334407 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.334446 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.334455 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.334471 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.334482 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:37Z","lastTransitionTime":"2026-01-22T09:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.436049 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.436082 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.436091 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.436103 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.436111 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:37Z","lastTransitionTime":"2026-01-22T09:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.537853 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.537901 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.537913 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.537929 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.537939 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:37Z","lastTransitionTime":"2026-01-22T09:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.639315 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.639343 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.639351 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.639361 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.639368 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:37Z","lastTransitionTime":"2026-01-22T09:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.741325 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.741361 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.741370 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.741384 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.741392 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:37Z","lastTransitionTime":"2026-01-22T09:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.846591 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.846660 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.846675 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.846688 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.846697 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:37Z","lastTransitionTime":"2026-01-22T09:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.948608 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.948673 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.948684 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.948706 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.948719 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:37Z","lastTransitionTime":"2026-01-22T09:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.987922 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 15:27:47.488517698 +0000 UTC Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.991240 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.991253 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:06:37 crc kubenswrapper[4811]: E0122 09:06:37.991367 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:06:37 crc kubenswrapper[4811]: E0122 09:06:37.991445 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:06:37 crc kubenswrapper[4811]: I0122 09:06:37.991584 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:06:37 crc kubenswrapper[4811]: E0122 09:06:37.991819 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:06:38 crc kubenswrapper[4811]: I0122 09:06:38.050430 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:38 crc kubenswrapper[4811]: I0122 09:06:38.050464 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:38 crc kubenswrapper[4811]: I0122 09:06:38.050476 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:38 crc kubenswrapper[4811]: I0122 09:06:38.050491 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:38 crc kubenswrapper[4811]: I0122 09:06:38.050504 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:38Z","lastTransitionTime":"2026-01-22T09:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:38 crc kubenswrapper[4811]: I0122 09:06:38.152843 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:38 crc kubenswrapper[4811]: I0122 09:06:38.152869 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:38 crc kubenswrapper[4811]: I0122 09:06:38.152878 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:38 crc kubenswrapper[4811]: I0122 09:06:38.152892 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:38 crc kubenswrapper[4811]: I0122 09:06:38.152903 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:38Z","lastTransitionTime":"2026-01-22T09:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:38 crc kubenswrapper[4811]: I0122 09:06:38.255161 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:38 crc kubenswrapper[4811]: I0122 09:06:38.255192 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:38 crc kubenswrapper[4811]: I0122 09:06:38.255202 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:38 crc kubenswrapper[4811]: I0122 09:06:38.255215 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:38 crc kubenswrapper[4811]: I0122 09:06:38.255224 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:38Z","lastTransitionTime":"2026-01-22T09:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:38 crc kubenswrapper[4811]: I0122 09:06:38.357373 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:38 crc kubenswrapper[4811]: I0122 09:06:38.357399 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:38 crc kubenswrapper[4811]: I0122 09:06:38.357409 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:38 crc kubenswrapper[4811]: I0122 09:06:38.357421 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:38 crc kubenswrapper[4811]: I0122 09:06:38.357431 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:38Z","lastTransitionTime":"2026-01-22T09:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:38 crc kubenswrapper[4811]: I0122 09:06:38.459957 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:38 crc kubenswrapper[4811]: I0122 09:06:38.459996 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:38 crc kubenswrapper[4811]: I0122 09:06:38.460008 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:38 crc kubenswrapper[4811]: I0122 09:06:38.460022 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:38 crc kubenswrapper[4811]: I0122 09:06:38.460030 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:38Z","lastTransitionTime":"2026-01-22T09:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:38 crc kubenswrapper[4811]: I0122 09:06:38.562464 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:38 crc kubenswrapper[4811]: I0122 09:06:38.562513 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:38 crc kubenswrapper[4811]: I0122 09:06:38.562525 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:38 crc kubenswrapper[4811]: I0122 09:06:38.562542 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:38 crc kubenswrapper[4811]: I0122 09:06:38.562553 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:38Z","lastTransitionTime":"2026-01-22T09:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:38 crc kubenswrapper[4811]: I0122 09:06:38.664482 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:38 crc kubenswrapper[4811]: I0122 09:06:38.664529 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:38 crc kubenswrapper[4811]: I0122 09:06:38.664539 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:38 crc kubenswrapper[4811]: I0122 09:06:38.664557 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:38 crc kubenswrapper[4811]: I0122 09:06:38.664569 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:38Z","lastTransitionTime":"2026-01-22T09:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 22 09:06:38 crc kubenswrapper[4811]: I0122 09:06:38.766292 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:06:38 crc kubenswrapper[4811]: I0122 09:06:38.766327 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:06:38 crc kubenswrapper[4811]: I0122 09:06:38.766336 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:06:38 crc kubenswrapper[4811]: I0122 09:06:38.766349 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:06:38 crc kubenswrapper[4811]: I0122 09:06:38.766358 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:38Z","lastTransitionTime":"2026-01-22T09:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:06:38 crc kubenswrapper[4811]: I0122 09:06:38.988676 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 11:25:33.536231643 +0000 UTC
Jan 22 09:06:38 crc kubenswrapper[4811]: I0122 09:06:38.991900 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l"
Jan 22 09:06:38 crc kubenswrapper[4811]: E0122 09:06:38.992008 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4"
Jan 22 09:06:39 crc kubenswrapper[4811]: I0122 09:06:39.135068 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de4b38a0-0c7a-4693-9f92-40fefd6bc9b4-metrics-certs\") pod \"network-metrics-daemon-bhj4l\" (UID: \"de4b38a0-0c7a-4693-9f92-40fefd6bc9b4\") " pod="openshift-multus/network-metrics-daemon-bhj4l"
Jan 22 09:06:39 crc kubenswrapper[4811]: E0122 09:06:39.135193 4811 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 22 09:06:39 crc kubenswrapper[4811]: E0122 09:06:39.135233 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de4b38a0-0c7a-4693-9f92-40fefd6bc9b4-metrics-certs podName:de4b38a0-0c7a-4693-9f92-40fefd6bc9b4 nodeName:}" failed. No retries permitted until 2026-01-22 09:06:47.135219629 +0000 UTC m=+51.457406743 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/de4b38a0-0c7a-4693-9f92-40fefd6bc9b4-metrics-certs") pod "network-metrics-daemon-bhj4l" (UID: "de4b38a0-0c7a-4693-9f92-40fefd6bc9b4") : object "openshift-multus"/"metrics-daemon-secret" not registered
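The nestedpendingoperations entry above shows the kubelet's volume manager backing off: no retry of the failed mount is permitted for 8s. A minimal sketch of a doubling backoff schedule that would produce such a delay; the initial delay, factor, and cap here are illustrative assumptions, not the kubelet's actual constants.

    # Illustrative doubling backoff, matching the "durationBeforeRetry 8s"
    # seen above. initial/factor/cap are assumed values for the sketch.
    def backoff_delays(initial=0.5, factor=2.0, cap=120.0):
        delay = initial
        while True:
            yield min(delay, cap)   # never wait longer than the cap
            delay *= factor         # double the wait after each failure

    for attempt, delay in zip(range(6), backoff_delays()):
        print(f"retry {attempt}: wait {delay:.1f}s")
    # prints 0.5, 1.0, 2.0, 4.0, 8.0, 16.0 -- under this assumed schedule,
    # an 8s delay corresponds to the fifth consecutive failure.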
Jan 22 09:06:39 crc kubenswrapper[4811]: I0122 09:06:39.988766 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 00:06:17.678244719 +0000 UTC
Jan 22 09:06:39 crc kubenswrapper[4811]: I0122 09:06:39.990985 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 09:06:39 crc kubenswrapper[4811]: I0122 09:06:39.991006 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 09:06:39 crc kubenswrapper[4811]: E0122 09:06:39.991083 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 09:06:39 crc kubenswrapper[4811]: I0122 09:06:39.991145 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 09:06:39 crc kubenswrapper[4811]: E0122 09:06:39.991249 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 09:06:39 crc kubenswrapper[4811]: E0122 09:06:39.991348 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 09:06:40 crc kubenswrapper[4811]: I0122 09:06:40.989640 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 20:53:29.438166226 +0000 UTC
Jan 22 09:06:40 crc kubenswrapper[4811]: I0122 09:06:40.991893 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l"
Jan 22 09:06:40 crc kubenswrapper[4811]: E0122 09:06:40.991996 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4"
Jan 22 09:06:41 crc kubenswrapper[4811]: I0122 09:06:41.990277 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 00:42:31.817562087 +0000 UTC
Jan 22 09:06:41 crc kubenswrapper[4811]: I0122 09:06:41.991513 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 09:06:41 crc kubenswrapper[4811]: I0122 09:06:41.991528 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 09:06:41 crc kubenswrapper[4811]: I0122 09:06:41.991535 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 09:06:41 crc kubenswrapper[4811]: E0122 09:06:41.991718 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 09:06:41 crc kubenswrapper[4811]: E0122 09:06:41.992069 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 09:06:41 crc kubenswrapper[4811]: E0122 09:06:41.992121 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 09:06:41 crc kubenswrapper[4811]: I0122 09:06:41.992567 4811 scope.go:117] "RemoveContainer" containerID="239f1575640cc017afbde44a6bc3d241facb1b9b7588e9c3902064b0192e9e97"
Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.233963 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-274vf_1cd0f0db-de53-47c0-9b45-2ce8b37392a3/ovnkube-controller/1.log"
Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.235376 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.235417 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.235427 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.235458 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.235470 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:42Z","lastTransitionTime":"2026-01-22T09:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.236871 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" event={"ID":"1cd0f0db-de53-47c0-9b45-2ce8b37392a3","Type":"ContainerStarted","Data":"454af35caa860192c078a3fb4028bf2c4544607849290f3bd427688821757b2f"}
Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.237405 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-274vf"
Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.251261 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n2kj4" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66a7f3f9-ab88-4b1a-a28a-900480fa9651\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3a88defded9a5435941e2608c8a29baee840bc0ce48454a3731f344299347cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47mj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n2kj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:42Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.264100 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5dbe63-08fa-4157-9c52-003944e50d7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3bf6cdac4751814ecd71637d9e80cbdda1c0e99714275fe77cfa858b3823b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4419625f66ae957b2d9b146fc8b085a4a95c3f47860d6b4847f8fe71a6c4c86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7830976d23e073a099c3c6af1fa2392d9504ad9d41ddf2ee63c16632c22f833a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45fa5d37dcd1dcae1a7488bb855e4bd7c7dc39a1199fe80b82f83604be089f2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:42Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.280972 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a14bfcc7d6d2ab5e1ad0c45999b2e73332e750ed65d9299e188505dafb26d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:42Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.302112 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae2e22607dec646b15563dc62f66655e7a5c2b7ad809406c907e5f687f80105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e290e203fab8f291146855a960477ceb9c33d965d15b4722c668dc2b3fded3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:42Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.331113 4811 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-274vf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://454af35caa860192c078a3fb4028bf2c4544607849290f3bd427688821757b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://239f1575640cc017afbde44a6bc3d241facb1b9b7588e9c3902064b0192e9e97\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"message\\\":\\\"k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:06:26.804316 6184 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:06:26.804420 6184 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 09:06:26.804469 6184 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 09:06:26.804791 6184 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:06:26.805266 6184 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:06:26.805478 6184 factory.go:656] Stopping watch factory\\\\nI0122 09:06:26.805690 6184 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:06:26.827772 6184 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0122 09:06:26.827839 6184 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0122 09:06:26.827915 6184 ovnkube.go:599] Stopped ovnkube\\\\nI0122 09:06:26.827965 6184 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0122 09:06:26.828033 6184 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-274vf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:42Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.338311 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.338375 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.338394 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.338410 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.338437 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:42Z","lastTransitionTime":"2026-01-22T09:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.343674 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f776b1bc70646779cf35092ded02c00aa3b990c873de09d21e1f0d76258b0b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:42Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.351535 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gwnx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"153efd1a-2c09-4c49-94e1-2307bbf1e659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71a7748f45c9a14885785ca825c32a5036a887c7a54b36cadd1a89e57608111f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nklsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce971b4fbeb72e1a32c0adab8e46cb8bce2ff9736cd875f63e105bf39dcc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nklsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gwnx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:42Z is after 2025-08-24T17:21:41Z" Jan 22 
09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.365916 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1d28a-4ee8-45bb-9a86-3687d8db2105\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684f8ca14b2eabcbf9280085d495af9a6f1fa76fb187f15b8fa5659178efdc93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64382d1748f1eaeb9f2f09d243b35c56f9fd70b9a9765ddb64cb0c4866ed493b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://590bb419e3791ec3a267b9a5aa4022ee74ba5e302b047e85211563e135c4cf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b14bb77b0cf24bacdb8f5edae5d02ccea019b12feb315e6d6c4887feb1a9354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d10794bea5b8ea94b9247775e0c424fa9d481267c663d4bcae16eb4e7eb729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:42Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.374544 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:42Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.383601 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfqgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2555861-d1bb-4f21-be4a-165ed9212932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ececfc4b2e4f2a120e7a3307235d606db00733e1a5631b6e19b5312afc8e8ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jt7cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfqgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:42Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.391782 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84068a6b-e189-419b-87f5-f31428f6eafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde8418c9f22553519ec0a9ba1ddcdd553f64aafb44fa002e924499701236bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbb067d04c1683ffd06d01aab41bb5988fdca44b8bd16012d35c42795d0d8011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-txvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:42Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.399152 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bhj4l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4b38a0-0c7a-4693-9f92-40fefd6bc9b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mphcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mphcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bhj4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:42Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.410324 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b44c40-e6d4-4902-98e9-a259269d8bf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 09:06:13.075349 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 09:06:13.075574 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:06:13.077171 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744166093/tls.crt::/tmp/serving-cert-1744166093/tls.key\\\\\\\"\\\\nI0122 09:06:13.324400 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:06:13.330700 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:06:13.330724 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:06:13.330758 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:06:13.330771 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:06:13.337838 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:06:13.337866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:06:13.337879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:06:13.337883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:06:13.337886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:06:13.338151 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:06:13.339610 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:42Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.418765 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:42Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.429648 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:42Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.439263 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xhhs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42dc1a6-d0c4-43e4-b9d9-b40c1f910400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64d7f1d53f9a3d9616b76ea12902134c5e0a3a30c9a61113675074caae70d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcfzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xhhs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T09:06:42Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.440491 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.440522 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.440533 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.440553 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.440568 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:42Z","lastTransitionTime":"2026-01-22T09:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.449673 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9g4j8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://279ca4748f2c8229a6647c23a414668a7612c0c863b52ef33c5b3dd026de74ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f
8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/e
tc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40\\\",\\\"exitC
ode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9g4j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:42Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.542667 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.542705 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.542718 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.542744 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.542764 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:42Z","lastTransitionTime":"2026-01-22T09:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.644542 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.644591 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.644601 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.644617 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.644649 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:42Z","lastTransitionTime":"2026-01-22T09:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.746639 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.746682 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.746693 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.746710 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.746723 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:42Z","lastTransitionTime":"2026-01-22T09:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.848596 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.848667 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.848677 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.848692 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.848704 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:42Z","lastTransitionTime":"2026-01-22T09:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.956387 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.956433 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.956444 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.956464 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.956476 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:42Z","lastTransitionTime":"2026-01-22T09:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.991093 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 21:39:23.318205065 +0000 UTC Jan 22 09:06:42 crc kubenswrapper[4811]: I0122 09:06:42.991229 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:06:42 crc kubenswrapper[4811]: E0122 09:06:42.991424 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.058604 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.058668 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.058679 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.058695 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.058705 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:43Z","lastTransitionTime":"2026-01-22T09:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.160764 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.160803 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.160813 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.160831 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.160843 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:43Z","lastTransitionTime":"2026-01-22T09:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.240726 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-274vf_1cd0f0db-de53-47c0-9b45-2ce8b37392a3/ovnkube-controller/2.log" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.241365 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-274vf_1cd0f0db-de53-47c0-9b45-2ce8b37392a3/ovnkube-controller/1.log" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.243878 4811 generic.go:334] "Generic (PLEG): container finished" podID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerID="454af35caa860192c078a3fb4028bf2c4544607849290f3bd427688821757b2f" exitCode=1 Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.243942 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" event={"ID":"1cd0f0db-de53-47c0-9b45-2ce8b37392a3","Type":"ContainerDied","Data":"454af35caa860192c078a3fb4028bf2c4544607849290f3bd427688821757b2f"} Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.243999 4811 scope.go:117] "RemoveContainer" containerID="239f1575640cc017afbde44a6bc3d241facb1b9b7588e9c3902064b0192e9e97" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.244691 4811 scope.go:117] "RemoveContainer" containerID="454af35caa860192c078a3fb4028bf2c4544607849290f3bd427688821757b2f" Jan 22 09:06:43 crc kubenswrapper[4811]: E0122 09:06:43.244943 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-274vf_openshift-ovn-kubernetes(1cd0f0db-de53-47c0-9b45-2ce8b37392a3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.259305 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b44c40-e6d4-4902-98e9-a259269d8bf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 09:06:13.075349 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 09:06:13.075574 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:06:13.077171 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744166093/tls.crt::/tmp/serving-cert-1744166093/tls.key\\\\\\\"\\\\nI0122 09:06:13.324400 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:06:13.330700 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:06:13.330724 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:06:13.330758 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:06:13.330771 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:06:13.337838 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:06:13.337866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:06:13.337879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:06:13.337883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:06:13.337886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:06:13.338151 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:06:13.339610 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:43Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.262531 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.262574 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.262583 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.262599 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.262610 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:43Z","lastTransitionTime":"2026-01-22T09:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.270112 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:43Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.278780 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:43Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.285940 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xhhs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42dc1a6-d0c4-43e4-b9d9-b40c1f910400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64d7f1d53f9a3d9616b76ea12902134c5e0a3a30c9a61113675074caae70d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcfzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xhhs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T09:06:43Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.296154 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9g4j8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://279ca4748f2c8229a6647c23a414668a7612c0c863b52ef33c5b3dd026de74ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbf
dab3fdc5c2386c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-
copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9g4j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:43Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.303709 4811 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84068a6b-e189-419b-87f5-f31428f6eafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde8418c9f22553519ec0a9ba1ddcdd553f64aafb44fa002e924499701236bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbb067d04c1683ffd06d01aab41bb5988fdca44b8bd16012d35c42795d0d8011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-txvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:43Z is after 2025-08-24T17:21:41Z" Jan 
22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.311312 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bhj4l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4b38a0-0c7a-4693-9f92-40fefd6bc9b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mphcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mphcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bhj4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:43Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.319639 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5dbe63-08fa-4157-9c52-003944e50d7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3bf6cdac4751814ecd71637d9e80cbdda1c0e99714275fe77cfa858b3823b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4419625f66ae957b2d9b146fc8b085a4a95c3f47860d6b4847f8fe71a6c4c86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7830976d23e073a099c3c6af1fa2392d9504ad9d41ddf2ee63c16632c22f833a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45fa5d37dcd1dcae1a7488bb855e4bd7c7dc39a1199fe80b82f83604be089f2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:43Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.328466 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a14bfcc7d6d2ab5e1ad0c45999b2e73332e750ed65d9299e188505dafb26d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:43Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.336222 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae2e22607dec646b15563dc62f66655e7a5c2b7ad809406c907e5f687f80105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e290e203fab8f291146855a960477ceb9c33d965d15b4722c668dc2b3fded3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:43Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.348321 4811 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-274vf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://454af35caa860192c078a3fb4028bf2c4544607849290f3bd427688821757b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://239f1575640cc017afbde44a6bc3d241facb1b9b7588e9c3902064b0192e9e97\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"message\\\":\\\"k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:06:26.804316 6184 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:06:26.804420 6184 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 09:06:26.804469 6184 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 09:06:26.804791 6184 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:06:26.805266 6184 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:06:26.805478 6184 factory.go:656] Stopping watch factory\\\\nI0122 09:06:26.805690 6184 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 09:06:26.827772 6184 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0122 09:06:26.827839 6184 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0122 09:06:26.827915 6184 ovnkube.go:599] Stopped ovnkube\\\\nI0122 09:06:26.827965 6184 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0122 09:06:26.828033 6184 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://454af35caa860192c078a3fb4028bf2c4544607849290f3bd427688821757b2f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:06:42Z\\\",\\\"message\\\":\\\"ing service etcd on namespace openshift-etcd for network=default : 1.153957ms\\\\nI0122 
09:06:42.648545 6404 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager/controller-manager]} name:Service_openshift-controller-manager/controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0122 09:06:42.648515 6404 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-ingress-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"d8772e82-b0a4-4596-87d3-3d517c13344b\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]stri\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"cont
ainerID\\\":\\\"cri-o://4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-274vf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:43Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.355419 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n2kj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66a7f3f9-ab88-4b1a-a28a-900480fa9651\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3a88defded9a5435941e2608c8a29baee840bc0ce48454a3731f344299347cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47mj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n2kj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:43Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.364411 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.364443 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.364454 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.364472 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.364486 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:43Z","lastTransitionTime":"2026-01-22T09:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.365589 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f776b1bc70646779cf35092ded02c00aa3b990c873de09d21e1f0d76258b0b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:43Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.374989 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gwnx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"153efd1a-2c09-4c49-94e1-2307bbf1e659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71a7748f45c9a14885785ca825c32a5036a887c7a54b36cadd1a89e57608111f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nklsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce971b4fbeb72e1a32c0adab8e46cb8bce2ff9736cd875f63e105bf39dcc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nklsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gwnx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:43Z is after 2025-08-24T17:21:41Z" Jan 22 
09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.390547 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1d28a-4ee8-45bb-9a86-3687d8db2105\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684f8ca14b2eabcbf9280085d495af9a6f1fa76fb187f15b8fa5659178efdc93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64382d1748f1eaeb9f2f09d243b35c56f9fd70b9a9765ddb64cb0c4866ed493b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://590bb419e3791ec3a267b9a5aa4022ee74ba5e302b047e85211563e135c4cf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b14bb77b0cf24bacdb8f5edae5d02ccea019b12feb315e6d6c4887feb1a9354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d10794bea5b8ea94b9247775e0c424fa9d481267c663d4bcae16eb4e7eb729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:43Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.400095 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:43Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.409249 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfqgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2555861-d1bb-4f21-be4a-165ed9212932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ececfc4b2e4f2a120e7a3307235d606db00733e1a5631b6e19b5312afc8e8ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jt7cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfqgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:43Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.467064 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.467103 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.467118 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.467150 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.467164 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:43Z","lastTransitionTime":"2026-01-22T09:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.569573 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.569645 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.569659 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.569674 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.569684 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:43Z","lastTransitionTime":"2026-01-22T09:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.671119 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.671158 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.671169 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.671184 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.671196 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:43Z","lastTransitionTime":"2026-01-22T09:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.773176 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.773205 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.773214 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.773230 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.773242 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:43Z","lastTransitionTime":"2026-01-22T09:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.874861 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.874892 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.874901 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.874912 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.874923 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:43Z","lastTransitionTime":"2026-01-22T09:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.976275 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.976300 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.976309 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.976321 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.976330 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:43Z","lastTransitionTime":"2026-01-22T09:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.991865 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 16:42:33.700388802 +0000 UTC Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.992003 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:06:43 crc kubenswrapper[4811]: E0122 09:06:43.992105 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.992003 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:06:43 crc kubenswrapper[4811]: I0122 09:06:43.992146 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:06:43 crc kubenswrapper[4811]: E0122 09:06:43.992239 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:06:43 crc kubenswrapper[4811]: E0122 09:06:43.992335 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.077410 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.077435 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.077443 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.077456 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.077464 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:44Z","lastTransitionTime":"2026-01-22T09:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.178731 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.178850 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.178941 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.179016 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.179084 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:44Z","lastTransitionTime":"2026-01-22T09:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.247113 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-274vf_1cd0f0db-de53-47c0-9b45-2ce8b37392a3/ovnkube-controller/2.log" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.249582 4811 scope.go:117] "RemoveContainer" containerID="454af35caa860192c078a3fb4028bf2c4544607849290f3bd427688821757b2f" Jan 22 09:06:44 crc kubenswrapper[4811]: E0122 09:06:44.249806 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-274vf_openshift-ovn-kubernetes(1cd0f0db-de53-47c0-9b45-2ce8b37392a3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.260648 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b44c40-e6d4-4902-98e9-a259269d8bf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 09:06:13.075349 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 09:06:13.075574 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:06:13.077171 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744166093/tls.crt::/tmp/serving-cert-1744166093/tls.key\\\\\\\"\\\\nI0122 09:06:13.324400 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:06:13.330700 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:06:13.330724 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:06:13.330758 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:06:13.330771 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:06:13.337838 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:06:13.337866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:06:13.337879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:06:13.337883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:06:13.337886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:06:13.338151 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:06:13.339610 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:44Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.271680 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:44Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.281242 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.281279 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.281290 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.281321 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.281333 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:44Z","lastTransitionTime":"2026-01-22T09:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.281842 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:44Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.290268 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xhhs7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42dc1a6-d0c4-43e4-b9d9-b40c1f910400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64d7f1d53f9a3d9616b76ea12902134c5e0a3a30c9a61113675074caae70d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcfzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xhhs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:44Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.299813 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9g4j8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://279ca4748f2c8229a6647c23a414668a7612c0c863b52ef33c5b3dd026de74ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9g4j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:44Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.308266 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84068a6b-e189-419b-87f5-f31428f6eafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde8418c9f22553519ec0a9ba1ddcdd553f64aafb44fa002e924499701236bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbb067d04c1683ffd06d01aab41bb5988fdca44b8bd16012d35c42795d0d8011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-txvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:44Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.315328 4811 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-bhj4l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4b38a0-0c7a-4693-9f92-40fefd6bc9b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mphcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mphcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bhj4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:44Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.325126 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5dbe63-08fa-4157-9c52-003944e50d7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3bf6cdac4751814ecd71637d9e80cbdda1c0e99714275fe77cfa858b3823b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4419625f66ae957b2d9b146fc8b085a4a95c3f47860d6b4847f8fe71a6c4c86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7830976d23e073a099c3c6af1fa2392d9504ad9d41ddf2ee63c16632c22f833a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45fa5d37dcd1dcae1a7488bb855e4bd7c7dc39a1199fe80b82f83604be089f2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:44Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.333799 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a14bfcc7d6d2ab5e1ad0c45999b2e73332e750ed65d9299e188505dafb26d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:44Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.341876 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae2e22607dec646b15563dc62f66655e7a5c2b7ad809406c907e5f687f80105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e290e203fab8f291146855a960477ceb9c33d965d15b4722c668dc2b3fded3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:44Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.354487 4811 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-274vf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://454af35caa860192c078a3fb4028bf2c4544607849290f3bd427688821757b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://454af35caa860192c078a3fb4028bf2c4544607849290f3bd427688821757b2f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:06:42Z\\\",\\\"message\\\":\\\"ing service etcd on namespace openshift-etcd for network=default : 1.153957ms\\\\nI0122 09:06:42.648545 6404 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager/controller-manager]} name:Service_openshift-controller-manager/controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0122 09:06:42.648515 6404 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-ingress-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"d8772e82-b0a4-4596-87d3-3d517c13344b\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]stri\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-274vf_openshift-ovn-kubernetes(1cd0f0db-de53-47c0-9b45-2ce8b37392a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-274vf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:44Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.361276 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n2kj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66a7f3f9-ab88-4b1a-a28a-900480fa9651\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3a88defded9a5435941e2608c8a29baee840bc0ce48454a3731f344299347cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47mj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n2kj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:44Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.370120 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f776b1bc70646779cf35092ded02c00aa3b990c873de09d21e1f0d76258b0b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:44Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.378684 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gwnx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"153efd1a-2c09-4c49-94e1-2307bbf1e659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71a7748f45c9a14885785ca825c32a5036a887c7a54b36cadd1a89e57608111f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nklsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce971b4fbeb72e1a32c0adab8e46cb8bce2ff9736cd875f63e105bf39dcc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nklsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gwnx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:44Z is after 2025-08-24T17:21:41Z" Jan 22 
09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.383716 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.383773 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.383784 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.383800 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.383811 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:44Z","lastTransitionTime":"2026-01-22T09:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.393794 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1d28a-4ee8-45bb-9a86-3687d8db2105\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684f8ca14b2eabcbf9280085d495af9a6f1fa76fb187f15b8fa5659178efdc93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64382d1748f1eaeb9f2f09d243b35c56f9fd70b9a9765ddb64cb0c4866ed493b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://590bb419e3791ec3a267b9a5aa4022ee74ba5e302b047e85211563e135c4cf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b14bb77b0cf24bacdb8f5edae5d02ccea019b12feb315e6d6c4887feb1a9354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d10794bea5b8ea94b9247775e0c424fa9d481267c663d4bcae16eb4e7eb729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:44Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.402046 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:44Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.410744 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfqgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2555861-d1bb-4f21-be4a-165ed9212932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ececfc4b2e4f2a120e7a3307235d606db00733e1a5631b6e19b5312afc8e8ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jt7cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfqgt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:44Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.486331 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.486572 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.486581 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.486595 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.486608 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:44Z","lastTransitionTime":"2026-01-22T09:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.588470 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.588504 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.588534 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.588548 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.588560 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:44Z","lastTransitionTime":"2026-01-22T09:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.690476 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.690538 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.690554 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.690573 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.690601 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:44Z","lastTransitionTime":"2026-01-22T09:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.792588 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.792621 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.792655 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.792672 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.792683 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:44Z","lastTransitionTime":"2026-01-22T09:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.894261 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.894287 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.894297 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.894309 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.894319 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:44Z","lastTransitionTime":"2026-01-22T09:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.991188 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:06:44 crc kubenswrapper[4811]: E0122 09:06:44.991293 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.992215 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 20:45:56.675019471 +0000 UTC Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.996171 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.996219 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.996229 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.996246 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:44 crc kubenswrapper[4811]: I0122 09:06:44.996256 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:44Z","lastTransitionTime":"2026-01-22T09:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.098490 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.098541 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.098554 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.098573 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.098583 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:45Z","lastTransitionTime":"2026-01-22T09:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.201079 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.201112 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.201121 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.201132 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.201139 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:45Z","lastTransitionTime":"2026-01-22T09:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.303308 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.303340 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.303350 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.303365 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.303373 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:45Z","lastTransitionTime":"2026-01-22T09:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.405478 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.405517 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.405530 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.405548 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.405559 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:45Z","lastTransitionTime":"2026-01-22T09:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.507440 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.507498 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.507510 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.507524 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.507535 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:45Z","lastTransitionTime":"2026-01-22T09:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.609559 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.609595 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.609604 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.609618 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.609641 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:45Z","lastTransitionTime":"2026-01-22T09:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.688619 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.688703 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:06:45 crc kubenswrapper[4811]: E0122 09:06:45.688876 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 09:06:45 crc kubenswrapper[4811]: E0122 09:06:45.688903 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 09:06:45 crc kubenswrapper[4811]: E0122 09:06:45.688915 4811 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:06:45 crc kubenswrapper[4811]: E0122 09:06:45.688957 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 09:07:17.688944807 +0000 UTC m=+82.011131930 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:06:45 crc kubenswrapper[4811]: E0122 09:06:45.688876 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 09:06:45 crc kubenswrapper[4811]: E0122 09:06:45.688984 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 09:06:45 crc kubenswrapper[4811]: E0122 09:06:45.688991 4811 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:06:45 crc kubenswrapper[4811]: E0122 09:06:45.689014 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 09:07:17.689007295 +0000 UTC m=+82.011194417 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.711907 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.711941 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.711952 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.711962 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.711972 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:45Z","lastTransitionTime":"2026-01-22T09:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.817772 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.817861 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.817876 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.817895 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.817910 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:45Z","lastTransitionTime":"2026-01-22T09:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.890365 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:06:45 crc kubenswrapper[4811]: E0122 09:06:45.890486 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:07:17.890465919 +0000 UTC m=+82.212653052 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.890560 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.890598 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:06:45 crc kubenswrapper[4811]: E0122 09:06:45.890698 4811 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 09:06:45 crc kubenswrapper[4811]: E0122 09:06:45.890749 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 09:07:17.890728495 +0000 UTC m=+82.212915608 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 09:06:45 crc kubenswrapper[4811]: E0122 09:06:45.890823 4811 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 09:06:45 crc kubenswrapper[4811]: E0122 09:06:45.890876 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 09:07:17.890865613 +0000 UTC m=+82.213052746 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.919654 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.919683 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.919692 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.919707 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.919717 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:45Z","lastTransitionTime":"2026-01-22T09:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.991492 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.991512 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:06:45 crc kubenswrapper[4811]: E0122 09:06:45.991599 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.991679 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:06:45 crc kubenswrapper[4811]: E0122 09:06:45.991846 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:06:45 crc kubenswrapper[4811]: E0122 09:06:45.991914 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:06:45 crc kubenswrapper[4811]: I0122 09:06:45.992691 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 08:17:03.451714685 +0000 UTC Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.000793 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f776b1bc70646779cf35092ded02c00aa3b990c873de09d21e1f0d76258b0b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:45Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.008460 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gwnx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"153efd1a-2c09-4c49-94e1-2307bbf1e659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71a7748f45c9a14885785ca825c32a5036a887c7a54b36cadd1a89e57608111f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nklsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce971b4fbeb72e1a32c0adab8e46cb8bce2ff9736cd875f63e105bf39dcc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nklsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gwnx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:46Z is after 2025-08-24T17:21:41Z" Jan 22 
09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.021610 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.021669 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.021684 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.021700 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.021713 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:46Z","lastTransitionTime":"2026-01-22T09:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.024153 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1d28a-4ee8-45bb-9a86-3687d8db2105\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684f8ca14b2eabcbf9280085d495af9a6f1fa76fb187f15b8fa5659178efdc93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64382d1748f1eaeb9f2f09d243b35c56f9fd70b9a9765ddb64cb0c4866ed493b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://590bb419e3791ec3a267b9a5aa4022ee74ba5e302b047e85211563e135c4cf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b14bb77b0cf24bacdb8f5edae5d02ccea019b12feb315e6d6c4887feb1a9354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d10794bea5b8ea94b9247775e0c424fa9d481267c663d4bcae16eb4e7eb729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:46Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.031803 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:46Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.040922 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfqgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2555861-d1bb-4f21-be4a-165ed9212932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ececfc4b2e4f2a120e7a3307235d606db00733e1a5631b6e19b5312afc8e8ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jt7cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfqgt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:46Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.048400 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xhhs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42dc1a6-d0c4-43e4-b9d9-b40c1f910400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64d7f1d53f9a3d9616b76ea12902134c5e0a3a30c9a61113675074caae70d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcfzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xhhs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:46Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.060433 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9g4j8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://279ca4748f2c8229a6647c23a414668a7612c0c863b52ef33c5b3dd026de74ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9g4j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:46Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.068796 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84068a6b-e189-419b-87f5-f31428f6eafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde8418c9f22553519ec0a9ba1ddcdd553f64aafb44fa002e924499701236bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbb067d04c1683ffd06d01aab41bb5988fdca44b8bd16012d35c42795d0d8011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-txvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:46Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.075660 4811 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-bhj4l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4b38a0-0c7a-4693-9f92-40fefd6bc9b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mphcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mphcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bhj4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:46Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.084386 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b44c40-e6d4-4902-98e9-a259269d8bf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 09:06:13.075349 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 09:06:13.075574 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:06:13.077171 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744166093/tls.crt::/tmp/serving-cert-1744166093/tls.key\\\\\\\"\\\\nI0122 09:06:13.324400 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:06:13.330700 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:06:13.330724 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:06:13.330758 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:06:13.330771 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:06:13.337838 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:06:13.337866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:06:13.337879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:06:13.337883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:06:13.337886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:06:13.338151 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:06:13.339610 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:46Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.091781 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:46Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.099302 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:46Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.106985 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae2e22607dec646b15563dc62f66655e7a5c2b7ad809406c907e5f687f80105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e290e203fab8f291146855a960477ceb9c33d965d15b4722c668dc2b3fded3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:46Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.118886 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://454af35caa860192c078a3fb4028bf2c45446078
49290f3bd427688821757b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://454af35caa860192c078a3fb4028bf2c4544607849290f3bd427688821757b2f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:06:42Z\\\",\\\"message\\\":\\\"ing service etcd on namespace openshift-etcd for network=default : 1.153957ms\\\\nI0122 09:06:42.648545 6404 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager/controller-manager]} name:Service_openshift-controller-manager/controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0122 09:06:42.648515 6404 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-ingress-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"d8772e82-b0a4-4596-87d3-3d517c13344b\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]stri\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-274vf_openshift-ovn-kubernetes(1cd0f0db-de53-47c0-9b45-2ce8b37392a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-274vf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:46Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.123423 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.123448 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.123460 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.123474 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.123484 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:46Z","lastTransitionTime":"2026-01-22T09:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.125461 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n2kj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66a7f3f9-ab88-4b1a-a28a-900480fa9651\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3a88defded9a5435941e2608c8a29baee840bc0ce48454a3731f344299347cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47mj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n2kj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:46Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.133190 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5dbe63-08fa-4157-9c52-003944e50d7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3bf6cdac4751814ecd71637d9e80cbdda1c0e99714275fe77cfa858b3823b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4419625f66ae957b2d9b146fc8b085a4a95c3f47860d6b4847f8fe71a6c4c86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7830976d23e073a099c3c6af1fa2392d9504ad9d41ddf2ee63c16632c22f833a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45fa5d37dcd1dcae1a7488bb855e4bd7c7dc39a1199fe80b82f83604be089f2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:46Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.142311 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a14bfcc7d6d2ab5e1ad0c45999b2e73332e750ed65d9299e188505dafb26d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:46Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.225171 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.225218 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.225228 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.225244 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.225254 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:46Z","lastTransitionTime":"2026-01-22T09:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.326686 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.326940 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.326952 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.326966 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.326976 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:46Z","lastTransitionTime":"2026-01-22T09:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.428981 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.429013 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.429023 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.429064 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.429074 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:46Z","lastTransitionTime":"2026-01-22T09:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.531083 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.531131 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.531142 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.531156 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.531167 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:46Z","lastTransitionTime":"2026-01-22T09:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.633293 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.633320 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.633328 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.633339 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.633347 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:46Z","lastTransitionTime":"2026-01-22T09:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.734707 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.734732 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.734751 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.734763 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.734771 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:46Z","lastTransitionTime":"2026-01-22T09:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.836007 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.836129 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.836187 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.836251 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.836307 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:46Z","lastTransitionTime":"2026-01-22T09:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.938609 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.938659 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.938668 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.938682 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.938692 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:46Z","lastTransitionTime":"2026-01-22T09:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.991591 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:06:46 crc kubenswrapper[4811]: E0122 09:06:46.991953 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4" Jan 22 09:06:46 crc kubenswrapper[4811]: I0122 09:06:46.993672 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 08:19:01.010088119 +0000 UTC Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.041014 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.041066 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.041076 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.041087 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.041095 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:47Z","lastTransitionTime":"2026-01-22T09:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.085685 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.085723 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.085732 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.085758 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.085770 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:47Z","lastTransitionTime":"2026-01-22T09:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:47 crc kubenswrapper[4811]: E0122 09:06:47.093914 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0066ca33-f035-4af3-9028-0da78d54d55e\\\",\\\"systemUUID\\\":\\\"463fdb35-dd0c-4804-a85e-31cd33c59ce4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:47Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.096707 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.096773 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.096783 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.096795 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.096803 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:47Z","lastTransitionTime":"2026-01-22T09:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:47 crc kubenswrapper[4811]: E0122 09:06:47.104894 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0066ca33-f035-4af3-9028-0da78d54d55e\\\",\\\"systemUUID\\\":\\\"463fdb35-dd0c-4804-a85e-31cd33c59ce4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:47Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.107181 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.107208 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
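All of these "Error updating node status, will retry" entries fail identically: the API server cannot deliver the patch because the node.network-node-identity.openshift.io admission webhook, served on 127.0.0.1:9743, presents a serving certificate that expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-22T09:06:47Z. A quick way to confirm what the TLS handshake is seeing is to dial the endpoint from the node and print the peer certificate's validity window. This Go sketch is illustrative tooling, not something shipped with the cluster; the address comes from the log above:

```go
// certcheck: dial a TLS endpoint and print the serving certificate's
// validity window, confirming an "x509: certificate has expired" failure.
// Assumption: run on the node itself; 127.0.0.1:9743 is the webhook
// address taken from the kubelet log.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // inspect the certificate, don't trust it
	})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	now := time.Now()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%s\n  notBefore=%s\n  notAfter=%s\n  expired now=%t\n",
			cert.Subject,
			cert.NotBefore.Format(time.RFC3339),
			cert.NotAfter.Format(time.RFC3339),
			now.After(cert.NotAfter))
	}
}
```

On an OpenShift Local (CRC) VM that has been powered off past its certificate lifetimes this is an expected state; node status updates keep failing until the expired certificate is rotated.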
event="NodeHasNoDiskPressure" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.107216 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.107225 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.107233 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:47Z","lastTransitionTime":"2026-01-22T09:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:47 crc kubenswrapper[4811]: E0122 09:06:47.114916 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0066ca33-f035-4af3-9028-0da78d54d55e\\\",\\\"systemUUID\\\":\\\"463fdb35-dd0c-4804-a85e-31cd33c59ce4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:47Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.117292 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.117384 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
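Independently of the webhook failure, the Ready condition is False because the container runtime reports NetworkReady=false: there is no CNI configuration file in /etc/kubernetes/cni/net.d/ yet, i.e. the network plugin (OVN-Kubernetes, consistent with the webhook name above) has not written its config. The runtime's readiness check essentially scans that directory for network configs. A rough equivalent is sketched below; the extension list follows the usual libcni convention (.conf, .conflist, .json) and is an assumption, not something stated in the log:

```go
// cnicheck: report whether a CNI network config is present, roughly
// mirroring the check behind "NetworkReady=false ... no CNI configuration
// file in /etc/kubernetes/cni/net.d/".
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // directory named in the log
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read", dir, "->", err)
		return
	}
	found := false
	for _, e := range entries {
		// conventional libcni extensions; an assumption, see lead-in
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("network config:", filepath.Join(dir, e.Name()))
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI configuration file in", dir)
	}
}
```

Once the network plugin writes a config there, NetworkReady flips to true and the KubeletNotReady reason clears on a subsequent status sync.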
event="NodeHasNoDiskPressure" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.117454 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.117516 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.117567 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:47Z","lastTransitionTime":"2026-01-22T09:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:47 crc kubenswrapper[4811]: E0122 09:06:47.125695 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0066ca33-f035-4af3-9028-0da78d54d55e\\\",\\\"systemUUID\\\":\\\"463fdb35-dd0c-4804-a85e-31cd33c59ce4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:47Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.127879 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.127906 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
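The err string quotes the entire body the kubelet tried to PATCH: a strategic-merge patch whose $setElementOrder/conditions directive pins the ordering of the conditions list, followed by the node's allocatable and capacity resources, the four conditions, the kubelet's cached image list (which is why the payload is so large), and nodeInfo. Unescaped, it is ordinary JSON. A small sketch that parses such a patch and lists the condition types and reasons; the inline sample is abbreviated from the payload above:

```go
// patchdump: unmarshal a node-status patch like the one embedded (escaped)
// in the kubelet error and list its conditions. The sample is abbreviated
// from the logged payload.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

const patch = `{"status":{"conditions":[
 {"type":"MemoryPressure","status":"False","reason":"KubeletHasSufficientMemory"},
 {"type":"DiskPressure","status":"False","reason":"KubeletHasNoDiskPressure"},
 {"type":"PIDPressure","status":"False","reason":"KubeletHasSufficientPID"},
 {"type":"Ready","status":"False","reason":"KubeletNotReady"}]}}`

type condition struct {
	Type   string `json:"type"`
	Status string `json:"status"`
	Reason string `json:"reason"`
}

func main() {
	var p struct {
		Status struct {
			Conditions []condition `json:"conditions"`
		} `json:"status"`
	}
	if err := json.Unmarshal([]byte(patch), &p); err != nil {
		log.Fatal(err)
	}
	for _, c := range p.Status.Conditions {
		fmt.Printf("%-15s status=%-5s reason=%s\n", c.Type, c.Status, c.Reason)
	}
}
```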
event="NodeHasNoDiskPressure" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.127916 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.127927 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.127935 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:47Z","lastTransitionTime":"2026-01-22T09:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:47 crc kubenswrapper[4811]: E0122 09:06:47.135471 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0066ca33-f035-4af3-9028-0da78d54d55e\\\",\\\"systemUUID\\\":\\\"463fdb35-dd0c-4804-a85e-31cd33c59ce4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:47Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:47 crc kubenswrapper[4811]: E0122 09:06:47.135584 4811 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.142555 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
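The fifth consecutive failure exhausts the kubelet's retry budget, so the attempt ends with "Unable to update node status" err="update node status exceeds retry count" and the kubelet waits for its next periodic node-status sync instead of retrying immediately. The loop is a simple bounded retry; in the upstream kubelet the bound is the constant nodeStatusUpdateRetry = 5, which matches the five attempts logged here. A sketch of that shape, with a stand-in update function:

```go
// retryloop: the shape of the kubelet's node-status update path - try a
// bounded number of times, then give up until the next periodic sync.
// tryPatch is a stand-in for the real API call.
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // bound used by the upstream kubelet

func updateNodeStatus(tryPatch func() error) error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryPatch(); err == nil {
			return nil
		}
		// each failed attempt surfaces as "Error updating node status, will retry"
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	webhookDown := errors.New("failed calling webhook: certificate has expired")
	if err := updateNodeStatus(func() error { return webhookDown }); err != nil {
		fmt.Println(err) // prints: update node status exceeds retry count
	}
}
```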
event="NodeHasSufficientMemory" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.142584 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.142613 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.142638 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.142646 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:47Z","lastTransitionTime":"2026-01-22T09:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.202520 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de4b38a0-0c7a-4693-9f92-40fefd6bc9b4-metrics-certs\") pod \"network-metrics-daemon-bhj4l\" (UID: \"de4b38a0-0c7a-4693-9f92-40fefd6bc9b4\") " pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:06:47 crc kubenswrapper[4811]: E0122 09:06:47.202613 4811 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 09:06:47 crc kubenswrapper[4811]: E0122 09:06:47.202679 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de4b38a0-0c7a-4693-9f92-40fefd6bc9b4-metrics-certs podName:de4b38a0-0c7a-4693-9f92-40fefd6bc9b4 nodeName:}" failed. No retries permitted until 2026-01-22 09:07:03.202665036 +0000 UTC m=+67.524852159 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/de4b38a0-0c7a-4693-9f92-40fefd6bc9b4-metrics-certs") pod "network-metrics-daemon-bhj4l" (UID: "de4b38a0-0c7a-4693-9f92-40fefd6bc9b4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.244774 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.244799 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.244808 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.244817 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.244825 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:47Z","lastTransitionTime":"2026-01-22T09:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.346911 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.346937 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.347149 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.347162 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.347171 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:47Z","lastTransitionTime":"2026-01-22T09:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.448752 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.448788 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.448797 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.448809 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.448820 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:47Z","lastTransitionTime":"2026-01-22T09:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.550312 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.550343 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.550351 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.550362 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.550370 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:47Z","lastTransitionTime":"2026-01-22T09:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.651840 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.651878 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.651886 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.651900 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.651922 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:47Z","lastTransitionTime":"2026-01-22T09:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.754076 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.754290 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.754359 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.754414 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.754461 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:47Z","lastTransitionTime":"2026-01-22T09:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.855778 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.855922 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.855983 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.856052 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.856101 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:47Z","lastTransitionTime":"2026-01-22T09:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.958212 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.958239 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.958246 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.958255 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.958262 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:47Z","lastTransitionTime":"2026-01-22T09:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.991381 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.991381 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.991424 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:06:47 crc kubenswrapper[4811]: E0122 09:06:47.991486 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:06:47 crc kubenswrapper[4811]: E0122 09:06:47.991532 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:06:47 crc kubenswrapper[4811]: E0122 09:06:47.991581 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:06:47 crc kubenswrapper[4811]: I0122 09:06:47.993753 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 21:25:18.901326013 +0000 UTC Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.060041 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.060064 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.060072 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.060098 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.060106 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:48Z","lastTransitionTime":"2026-01-22T09:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.162816 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.162865 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.162882 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.162902 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.162915 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:48Z","lastTransitionTime":"2026-01-22T09:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.264326 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.264375 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.264384 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.264394 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.264402 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:48Z","lastTransitionTime":"2026-01-22T09:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.366067 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.366090 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.366097 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.366107 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.366114 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:48Z","lastTransitionTime":"2026-01-22T09:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.468290 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.468312 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.468320 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.468330 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.468339 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:48Z","lastTransitionTime":"2026-01-22T09:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.571730 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.571791 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.571803 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.571814 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.571821 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:48Z","lastTransitionTime":"2026-01-22T09:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.673821 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.673858 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.673867 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.673881 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.673892 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:48Z","lastTransitionTime":"2026-01-22T09:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.776189 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.776221 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.776231 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.776245 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.776254 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:48Z","lastTransitionTime":"2026-01-22T09:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.878589 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.878688 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.878700 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.878950 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.878965 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:48Z","lastTransitionTime":"2026-01-22T09:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.981693 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.981753 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.981764 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.981783 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.981797 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:48Z","lastTransitionTime":"2026-01-22T09:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.991837 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:06:48 crc kubenswrapper[4811]: E0122 09:06:48.991954 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4" Jan 22 09:06:48 crc kubenswrapper[4811]: I0122 09:06:48.993831 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 22:59:54.078215107 +0000 UTC Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.084543 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.084689 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.084777 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.084843 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.084904 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:49Z","lastTransitionTime":"2026-01-22T09:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.186571 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.186694 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.186768 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.186840 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.186901 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:49Z","lastTransitionTime":"2026-01-22T09:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.288448 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.288493 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.288504 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.288520 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.288531 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:49Z","lastTransitionTime":"2026-01-22T09:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.390483 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.390540 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.390552 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.390564 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.390572 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:49Z","lastTransitionTime":"2026-01-22T09:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.492773 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.492824 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.492837 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.492853 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.492864 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:49Z","lastTransitionTime":"2026-01-22T09:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.498897 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.507652 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.514257 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1d28a-4ee8-45bb-9a86-3687d8db2105\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684f8ca14b2eabcbf9280085d495af9a6f1fa76fb187f15b8fa5659178efdc93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64382d1748f1eaeb9f2f09d243b35c56f9fd70b9a9765ddb64cb0c4866ed493b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://590bb419e3791ec3a267b9a5aa4022ee74ba5e302b047e85211563e135c4cf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"ima
geID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b14bb77b0cf24bacdb8f5edae5d02ccea019b12feb315e6d6c4887feb1a9354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d10794bea5b8ea94b9247775e0c424fa9d481267c663d4bcae16eb4e7eb729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.525514 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.535381 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfqgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2555861-d1bb-4f21-be4a-165ed9212932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ececfc4b2e4f2a120e7a3307235d606db00733e1a5631b6e19b5312afc8e8ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cn
i/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jt7cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfqgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.542447 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xhhs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42dc1a6-d0c4-43e4-b9d9-b40c1f910400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64d7f1d53f9a3d9616b76ea12902134c5e0a3a30c9a61113675074caae70d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcfzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xhhs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.551616 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9g4j8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://279ca4748f2c8229a6647c23a414668a7612c0c863b52ef33c5b3dd026de74ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"
192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9g4j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.559424 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84068a6b-e189-419b-87f5-f31428f6eafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde8418c9f22553519ec0a9ba1ddcdd553f64aafb44fa002e924499701236bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbb067d04c1683ffd06d01aab41bb5988fdca44b8bd16012d35c42795d0d8011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.1
1\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-txvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.568131 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bhj4l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4b38a0-0c7a-4693-9f92-40fefd6bc9b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mphcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mphcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bhj4l\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.577925 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b44c40-e6d4-4902-98e9-a259269d8bf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 09:06:13.075349 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 09:06:13.075574 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:06:13.077171 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744166093/tls.crt::/tmp/serving-cert-1744166093/tls.key\\\\\\\"\\\\nI0122 09:06:13.324400 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:06:13.330700 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:06:13.330724 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:06:13.330758 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:06:13.330771 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:06:13.337838 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:06:13.337866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:06:13.337879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:06:13.337883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:06:13.337886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:06:13.338151 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:06:13.339610 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.586032 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.595109 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.595153 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.595168 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.595190 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.595204 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:49Z","lastTransitionTime":"2026-01-22T09:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.595852 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.604440 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae2e22607dec646b15563dc62f66655e7a5c2b7ad809406c907e5f687f80105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e290e203fab8f291146855a960477ceb9c33d965d15b4722c668dc2b3fded3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.617328 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://454af35caa860192c078a3fb4028bf2c4544607849290f3bd427688821757b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://454af35caa860192c078a3fb4028bf2c4544607849290f3bd427688821757b2f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:06:42Z\\\",\\\"message\\\":\\\"ing service etcd on namespace openshift-etcd for network=default : 1.153957ms\\\\nI0122 09:06:42.648545 6404 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager/controller-manager]} name:Service_openshift-controller-manager/controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0122 09:06:42.648515 6404 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-ingress-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"d8772e82-b0a4-4596-87d3-3d517c13344b\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]stri\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-274vf_openshift-ovn-kubernetes(1cd0f0db-de53-47c0-9b45-2ce8b37392a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-274vf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.625424 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n2kj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66a7f3f9-ab88-4b1a-a28a-900480fa9651\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3a88defded9a5435941e2608c8a29baee840bc0ce48454a3731f344299347cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47mj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n2kj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.633494 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5dbe63-08fa-4157-9c52-003944e50d7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3bf6cdac4751814ecd71637d9e80cbdda1c0e99714275fe77cfa858b3823b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4419625f66ae957b2d9b146fc8b085a4a95c3f47860d6b4847f8fe71a6c4c86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7830976d23e073a099c3c6af1fa2392d9504ad9d41ddf2ee63c16632c22f833a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-
manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45fa5d37dcd1dcae1a7488bb855e4bd7c7dc39a1199fe80b82f83604be089f2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.642669 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a14bfcc7d6d2ab5e1ad0c45999b2e73332e750ed65d9299e188505dafb26d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.649962 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f776b1bc70646779cf35092ded02c00aa3b990c873de09d21e1f0d76258b0b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.657992 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gwnx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"153efd1a-2c09-4c49-94e1-2307bbf1e659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71a7748f45c9a14885785ca825c32a5036a887c7a54b36cadd1a89e57608111f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nklsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce971b4fbeb72e1a32c0adab8e46cb8bce2ff9736cd875f63e105bf39dcc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nklsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gwnx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:49Z is after 2025-08-24T17:21:41Z" Jan 22 
09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.697616 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.697682 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.697693 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.697709 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.697722 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:49Z","lastTransitionTime":"2026-01-22T09:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.799668 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.799700 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.799710 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.799740 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.799764 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:49Z","lastTransitionTime":"2026-01-22T09:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.902263 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.902315 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.902326 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.902343 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.902354 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:49Z","lastTransitionTime":"2026-01-22T09:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.991661 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:06:49 crc kubenswrapper[4811]: E0122 09:06:49.991763 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.991905 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.992000 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:06:49 crc kubenswrapper[4811]: E0122 09:06:49.992123 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:06:49 crc kubenswrapper[4811]: E0122 09:06:49.992228 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:06:49 crc kubenswrapper[4811]: I0122 09:06:49.994483 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 15:11:41.216320044 +0000 UTC Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.003560 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.003787 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.003882 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.003952 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.004004 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:50Z","lastTransitionTime":"2026-01-22T09:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.105453 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.105499 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.105513 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.105536 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.105553 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:50Z","lastTransitionTime":"2026-01-22T09:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.207063 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.207175 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.207234 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.207301 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.207381 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:50Z","lastTransitionTime":"2026-01-22T09:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.308884 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.308919 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.308929 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.308941 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.308952 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:50Z","lastTransitionTime":"2026-01-22T09:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.411014 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.411046 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.411056 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.411071 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.411083 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:50Z","lastTransitionTime":"2026-01-22T09:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.512812 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.513099 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.513171 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.513231 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.513285 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:50Z","lastTransitionTime":"2026-01-22T09:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.614722 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.614776 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.614789 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.614808 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.614821 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:50Z","lastTransitionTime":"2026-01-22T09:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.716414 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.716442 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.716452 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.716462 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.716471 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:50Z","lastTransitionTime":"2026-01-22T09:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.818156 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.818194 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.818205 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.818220 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.818232 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:50Z","lastTransitionTime":"2026-01-22T09:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.920028 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.920070 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.920117 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.920135 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.920148 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:50Z","lastTransitionTime":"2026-01-22T09:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.991056 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:06:50 crc kubenswrapper[4811]: E0122 09:06:50.991176 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4" Jan 22 09:06:50 crc kubenswrapper[4811]: I0122 09:06:50.995142 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 02:16:08.260743241 +0000 UTC Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.022353 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.022386 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.022396 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.022409 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.022420 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:51Z","lastTransitionTime":"2026-01-22T09:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.124035 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.124067 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.124077 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.124087 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.124096 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:51Z","lastTransitionTime":"2026-01-22T09:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.225986 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.226019 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.226029 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.226041 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.226052 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:51Z","lastTransitionTime":"2026-01-22T09:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.328007 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.328059 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.328075 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.328095 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.328108 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:51Z","lastTransitionTime":"2026-01-22T09:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.429589 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.429644 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.429655 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.429666 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.429674 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:51Z","lastTransitionTime":"2026-01-22T09:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.531726 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.531852 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.531862 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.531873 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.531881 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:51Z","lastTransitionTime":"2026-01-22T09:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.633690 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.633755 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.633765 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.633774 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.633782 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:51Z","lastTransitionTime":"2026-01-22T09:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.735319 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.735355 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.735365 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.735376 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.735384 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:51Z","lastTransitionTime":"2026-01-22T09:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.836677 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.836697 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.836704 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.836714 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.836722 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:51Z","lastTransitionTime":"2026-01-22T09:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.938848 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.938887 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.938897 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.938911 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.938920 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:51Z","lastTransitionTime":"2026-01-22T09:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.991387 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:06:51 crc kubenswrapper[4811]: E0122 09:06:51.991490 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.991644 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:06:51 crc kubenswrapper[4811]: E0122 09:06:51.991697 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.991865 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:06:51 crc kubenswrapper[4811]: E0122 09:06:51.991943 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:06:51 crc kubenswrapper[4811]: I0122 09:06:51.995608 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 13:25:13.1523776 +0000 UTC Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.040417 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.040447 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.040459 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.040473 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.040481 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:52Z","lastTransitionTime":"2026-01-22T09:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.142021 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.142054 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.142062 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.142074 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.142082 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:52Z","lastTransitionTime":"2026-01-22T09:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.243956 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.243987 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.243997 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.244010 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.244018 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:52Z","lastTransitionTime":"2026-01-22T09:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.345292 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.345319 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.345328 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.345338 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.345345 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:52Z","lastTransitionTime":"2026-01-22T09:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.447599 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.447672 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.447685 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.447703 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.447716 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:52Z","lastTransitionTime":"2026-01-22T09:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.549697 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.549742 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.549764 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.549780 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.549793 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:52Z","lastTransitionTime":"2026-01-22T09:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.651296 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.651324 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.651334 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.651347 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.651359 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:52Z","lastTransitionTime":"2026-01-22T09:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.753287 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.753309 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.753335 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.753346 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.753353 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:52Z","lastTransitionTime":"2026-01-22T09:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.855321 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.855442 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.855494 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.855542 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.855587 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:52Z","lastTransitionTime":"2026-01-22T09:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.957660 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.957819 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.957897 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.957954 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.958024 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:52Z","lastTransitionTime":"2026-01-22T09:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.991355 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:06:52 crc kubenswrapper[4811]: E0122 09:06:52.991536 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4" Jan 22 09:06:52 crc kubenswrapper[4811]: I0122 09:06:52.996183 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 23:30:23.375612216 +0000 UTC Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.060012 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.060037 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.060045 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.060057 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.060066 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:53Z","lastTransitionTime":"2026-01-22T09:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.162001 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.162037 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.162054 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.162065 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.162073 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:53Z","lastTransitionTime":"2026-01-22T09:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.264427 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.264531 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.264610 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.264701 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.264773 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:53Z","lastTransitionTime":"2026-01-22T09:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.366416 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.366504 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.366559 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.366640 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.366694 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:53Z","lastTransitionTime":"2026-01-22T09:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.468600 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.468742 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.468808 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.468860 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.468914 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:53Z","lastTransitionTime":"2026-01-22T09:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.570443 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.570794 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.570854 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.570911 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.570962 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:53Z","lastTransitionTime":"2026-01-22T09:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.673310 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.673369 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.673383 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.673401 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.673415 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:53Z","lastTransitionTime":"2026-01-22T09:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.775614 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.775663 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.775673 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.775685 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.775694 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:53Z","lastTransitionTime":"2026-01-22T09:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.877891 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.877921 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.877932 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.877945 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.877971 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:53Z","lastTransitionTime":"2026-01-22T09:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.980131 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.980252 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.980317 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.980370 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.980421 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:53Z","lastTransitionTime":"2026-01-22T09:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.991566 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.991579 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.991778 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:06:53 crc kubenswrapper[4811]: E0122 09:06:53.991909 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:06:53 crc kubenswrapper[4811]: E0122 09:06:53.992118 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:06:53 crc kubenswrapper[4811]: E0122 09:06:53.992182 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:06:53 crc kubenswrapper[4811]: I0122 09:06:53.996597 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 20:34:00.161203698 +0000 UTC Jan 22 09:06:54 crc kubenswrapper[4811]: I0122 09:06:54.082747 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:54 crc kubenswrapper[4811]: I0122 09:06:54.082782 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:54 crc kubenswrapper[4811]: I0122 09:06:54.082790 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:54 crc kubenswrapper[4811]: I0122 09:06:54.082817 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:54 crc kubenswrapper[4811]: I0122 09:06:54.082825 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:54Z","lastTransitionTime":"2026-01-22T09:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:54 crc kubenswrapper[4811]: I0122 09:06:54.184493 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:54 crc kubenswrapper[4811]: I0122 09:06:54.184537 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:54 crc kubenswrapper[4811]: I0122 09:06:54.184550 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:54 crc kubenswrapper[4811]: I0122 09:06:54.184569 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:54 crc kubenswrapper[4811]: I0122 09:06:54.184583 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:54Z","lastTransitionTime":"2026-01-22T09:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:54 crc kubenswrapper[4811]: I0122 09:06:54.286839 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:54 crc kubenswrapper[4811]: I0122 09:06:54.286869 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:54 crc kubenswrapper[4811]: I0122 09:06:54.286880 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:54 crc kubenswrapper[4811]: I0122 09:06:54.286890 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:54 crc kubenswrapper[4811]: I0122 09:06:54.286898 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:54Z","lastTransitionTime":"2026-01-22T09:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:54 crc kubenswrapper[4811]: I0122 09:06:54.388091 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:54 crc kubenswrapper[4811]: I0122 09:06:54.388113 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:54 crc kubenswrapper[4811]: I0122 09:06:54.388122 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:54 crc kubenswrapper[4811]: I0122 09:06:54.388131 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:54 crc kubenswrapper[4811]: I0122 09:06:54.388137 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:54Z","lastTransitionTime":"2026-01-22T09:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:54 crc kubenswrapper[4811]: I0122 09:06:54.490516 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:54 crc kubenswrapper[4811]: I0122 09:06:54.490607 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:54 crc kubenswrapper[4811]: I0122 09:06:54.490702 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:54 crc kubenswrapper[4811]: I0122 09:06:54.490791 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:54 crc kubenswrapper[4811]: I0122 09:06:54.490859 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:54Z","lastTransitionTime":"2026-01-22T09:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:54 crc kubenswrapper[4811]: I0122 09:06:54.593077 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:54 crc kubenswrapper[4811]: I0122 09:06:54.593112 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:54 crc kubenswrapper[4811]: I0122 09:06:54.593123 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:54 crc kubenswrapper[4811]: I0122 09:06:54.593138 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:54 crc kubenswrapper[4811]: I0122 09:06:54.593149 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:54Z","lastTransitionTime":"2026-01-22T09:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:54 crc kubenswrapper[4811]: I0122 09:06:54.694583 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:54 crc kubenswrapper[4811]: I0122 09:06:54.694602 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:54 crc kubenswrapper[4811]: I0122 09:06:54.694612 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:54 crc kubenswrapper[4811]: I0122 09:06:54.694646 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:54 crc kubenswrapper[4811]: I0122 09:06:54.694654 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:54Z","lastTransitionTime":"2026-01-22T09:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:54 crc kubenswrapper[4811]: I0122 09:06:54.797356 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:54 crc kubenswrapper[4811]: I0122 09:06:54.797431 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:54 crc kubenswrapper[4811]: I0122 09:06:54.797444 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:54 crc kubenswrapper[4811]: I0122 09:06:54.797458 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:54 crc kubenswrapper[4811]: I0122 09:06:54.797468 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:54Z","lastTransitionTime":"2026-01-22T09:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:54 crc kubenswrapper[4811]: I0122 09:06:54.899285 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:54 crc kubenswrapper[4811]: I0122 09:06:54.899319 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:54 crc kubenswrapper[4811]: I0122 09:06:54.899328 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:54 crc kubenswrapper[4811]: I0122 09:06:54.899343 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:54 crc kubenswrapper[4811]: I0122 09:06:54.899355 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:54Z","lastTransitionTime":"2026-01-22T09:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:54 crc kubenswrapper[4811]: I0122 09:06:54.991498 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:06:54 crc kubenswrapper[4811]: E0122 09:06:54.991614 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4" Jan 22 09:06:54 crc kubenswrapper[4811]: I0122 09:06:54.996983 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 10:47:26.57696987 +0000 UTC Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.001297 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.001324 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.001333 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.001345 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.001355 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:55Z","lastTransitionTime":"2026-01-22T09:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.103804 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.103840 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.103848 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.103861 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.103873 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:55Z","lastTransitionTime":"2026-01-22T09:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.206014 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.206045 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.206054 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.206066 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.206076 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:55Z","lastTransitionTime":"2026-01-22T09:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.307942 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.307971 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.307979 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.307991 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.307999 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:55Z","lastTransitionTime":"2026-01-22T09:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.409178 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.409200 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.409208 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.409237 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.409246 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:55Z","lastTransitionTime":"2026-01-22T09:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.510657 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.510683 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.510694 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.510703 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.510712 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:55Z","lastTransitionTime":"2026-01-22T09:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.612683 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.612706 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.612759 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.612780 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.612792 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:55Z","lastTransitionTime":"2026-01-22T09:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.715196 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.715241 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.715251 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.715262 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.715270 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:55Z","lastTransitionTime":"2026-01-22T09:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.817817 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.817849 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.817883 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.817896 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.817904 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:55Z","lastTransitionTime":"2026-01-22T09:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.919345 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.919394 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.919405 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.919417 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.919447 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:55Z","lastTransitionTime":"2026-01-22T09:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.992325 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.992337 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:06:55 crc kubenswrapper[4811]: E0122 09:06:55.992435 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.992481 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:06:55 crc kubenswrapper[4811]: E0122 09:06:55.992598 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:06:55 crc kubenswrapper[4811]: E0122 09:06:55.992705 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:06:55 crc kubenswrapper[4811]: I0122 09:06:55.997223 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 06:38:31.068193912 +0000 UTC Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.006574 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.016097 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xhhs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42dc1a6-d0c4-43e4-b9d9-b40c1f910400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64d7f1d53f9a3d9616b76ea12902134c5e0a3a30c9a61113675074caae70d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcfzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xhhs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T09:06:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.023265 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.023595 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.023620 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.023672 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.023689 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:56Z","lastTransitionTime":"2026-01-22T09:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.028712 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9g4j8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://279ca4748f2c8229a6647c23a414668a7612c0c863b52ef33c5b3dd026de74ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f
8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/e
tc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40\\\",\\\"exitC
ode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9g4j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.036644 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84068a6b-e189-419b-87f5-f31428f6eafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde8418c9f22553519ec0a9ba1ddcdd553f64aafb44fa002e924499701236bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbb067d04c1683ffd06d01aab41bb5988fdca44b8bd16012d35c42795d0d8011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e9
5ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-txvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.043971 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bhj4l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4b38a0-0c7a-4693-9f92-40fefd6bc9b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mphcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mphcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bhj4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.054228 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b44c40-e6d4-4902-98e9-a259269d8bf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 09:06:13.075349 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 09:06:13.075574 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:06:13.077171 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744166093/tls.crt::/tmp/serving-cert-1744166093/tls.key\\\\\\\"\\\\nI0122 09:06:13.324400 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:06:13.330700 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:06:13.330724 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:06:13.330758 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:06:13.330771 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:06:13.337838 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:06:13.337866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:06:13.337879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:06:13.337883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:06:13.337886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:06:13.338151 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:06:13.339610 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.063147 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.071107 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
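
The payloads in these "Failed to update status for pod" entries are the pod-status patches the kubelet's status manager tried to apply. They are hard to read because each quoting layer (the journal line, then klog's quoted err= field, then the JSON-in-JSON patch) adds a round of backslash escaping. Below is a minimal Go sketch, not part of the log, assuming you paste one payload on stdin exactly as it appears between `err="failed to patch status ` and `" for pod` (including the outer escaped quotes); it peels off however many quoting layers are present and pretty-prints the patch.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"os"
	"strings"
)

// Reads one escaped status-patch payload on stdin, strips the string-quoting
// layers added by klog and the journal, and pretty-prints the JSON patch.
func main() {
	raw, err := io.ReadAll(os.Stdin)
	if err != nil {
		panic(err)
	}
	s := strings.TrimSpace(string(raw))
	// The pasted text starts with an escaped quote (\") rather than a bare
	// one; wrapping it turns it into a valid JSON string literal.
	if !strings.HasPrefix(s, `"`) {
		s = `"` + s + `"`
	}
	// Each quoting layer makes the payload a JSON string literal; unmarshal
	// repeatedly until what remains is the patch object itself.
	for json.Unmarshal([]byte(s), &s) == nil {
	}
	var out bytes.Buffer
	if err := json.Indent(&out, []byte(s), "", "  "); err != nil {
		panic(err) // not valid JSON: the paste was probably truncated
	}
	fmt.Println(out.String())
}
```

Applied to the first entry above, this yields the patch for kube-apiserver-crc, including the kube-apiserver-check-endpoints container that exited 255 with `pods "kube-apiserver-crc" not found`. The log then resumes:
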
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a14bfcc7d6d2ab5e1ad0c45999b2e73332e750ed65d9299e188505dafb26d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.078683 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae2e22607dec646b15563dc62f66655e7a5c2b7ad809406c907e5f687f80105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e290e203fab8f291146855a960477ceb9c33d965d15b4722c668dc2b3fded3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.091074 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://454af35caa860192c078a3fb4028bf2c4544607849290f3bd427688821757b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://454af35caa860192c078a3fb4028bf2c4544607849290f3bd427688821757b2f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:06:42Z\\\",\\\"message\\\":\\\"ing service etcd on namespace openshift-etcd for network=default : 1.153957ms\\\\nI0122 09:06:42.648545 6404 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager/controller-manager]} name:Service_openshift-controller-manager/controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0122 09:06:42.648515 6404 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-ingress-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"d8772e82-b0a4-4596-87d3-3d517c13344b\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]stri\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-274vf_openshift-ovn-kubernetes(1cd0f0db-de53-47c0-9b45-2ce8b37392a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-274vf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.097896 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n2kj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66a7f3f9-ab88-4b1a-a28a-900480fa9651\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3a88defded9a5435941e2608c8a29baee840bc0ce48454a3731f344299347cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47mj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n2kj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.107124 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5dbe63-08fa-4157-9c52-003944e50d7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3bf6cdac4751814ecd71637d9e80cbdda1c0e99714275fe77cfa858b3823b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4419625f66ae957b2d9b146fc8b085a4a95c3f47860d6b4847f8fe71a6c4c86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7830976d23e073a099c3c6af1fa2392d9504ad9d41ddf2ee63c16632c22f833a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-
manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45fa5d37dcd1dcae1a7488bb855e4bd7c7dc39a1199fe80b82f83604be089f2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.114320 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gwnx7" err="failed to patch status 
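
Every one of these patch attempts fails for the same reason: the pod.network-node-identity.openshift.io webhook that admits pod-status updates serves a certificate that expired on 2025-08-24, while the node's clock reads 2026-01-22. A minimal sketch (an editor's illustration, not from the log) that dials the webhook address the kubelet reports, 127.0.0.1:9743, and prints the serving certificate's validity window; it must run on the node itself since the webhook listens on loopback, and InsecureSkipVerify is set only so the handshake survives long enough to read the expired certificate:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

// Connects to the webhook endpoint named in the log and reports the serving
// certificate's validity window, reproducing the x509 check that fails above.
func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743",
		&tls.Config{InsecureSkipVerify: true})
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore)
	fmt.Printf("notAfter:  %s\n", cert.NotAfter)
	if time.Now().After(cert.NotAfter) {
		fmt.Println("certificate is expired, matching the x509 error above")
	}
}
```

If notAfter is in the past, status patches will keep failing until the certificate is rotated; on CRC this is plausibly the symptom of starting a bundle after its baked-in certificates have lapsed. The log then resumes:
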
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"153efd1a-2c09-4c49-94e1-2307bbf1e659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71a7748f45c9a14885785ca825c32a5036a887c7a54b36cadd1a89e57608111f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nklsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce971b4fbeb72e1a32c0adab8e46cb8bce2ff9736cd875f63e105bf39dcc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nklsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gwnx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:56Z is after 2025-08-24T17:21:41Z" Jan 22 
09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.122175 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95b8b8c1-880c-4910-8387-3ff87b534859\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1a6925ee11245a252465e1a76bf8246c142164097c9c35f3467ab3d1650bc32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78e58b4688b2704d8a02a27ef452d900419711bd51a2d64b9c05de7d3a02ffbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d5bfb65ec0c94b865b39227fa43cb243e05b615b8b6c8b2ce289357eb5488b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b13581fc07c90a3dba27e04a1079d8a678635bf14c0d5463cd7b337f0f40a4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b13581fc07c90a3dba27e04a1079d8a678635bf14c0d5463cd7b337f0f40a4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.125112 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.125143 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.125154 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.125177 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.125188 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:56Z","lastTransitionTime":"2026-01-22T09:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.129573 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f776b1bc70646779cf35092ded02c00aa3b990c873de09d21e1f0d76258b0b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.136995 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfqgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2555861-d1bb-4f21-be4a-165ed9212932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ececfc4b2e4f2a120e7a3307235d606db00733e1a5631b6e19b5312afc8e8ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jt7cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfqgt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.149462 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1d28a-4ee8-45bb-9a86-3687d8db2105\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684f8ca14b2eabcbf9280085d495af9a6f1fa76fb187f15b8fa5659178efdc93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64382d1748f1eaeb9f2f09d243b35c56f9fd70b9a9765ddb64cb0c4866ed493b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://590bb419e3791ec3a267b9a5aa4022ee74ba5e302b047e85211563e135c4cf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b14bb77b0cf24bacdb8f5edae5d02ccea019b12feb315e6d6c4887feb1a9354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d10794bea5b8ea94b9247775e0c424fa9d481267c663d4bcae16eb4e7eb729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.157482 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:56Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.227011 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.227070 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.227082 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.227096 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.227108 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:56Z","lastTransitionTime":"2026-01-22T09:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.329003 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.329038 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.329049 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.329062 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.329073 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:56Z","lastTransitionTime":"2026-01-22T09:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.431668 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.431703 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.431715 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.431730 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.431742 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:56Z","lastTransitionTime":"2026-01-22T09:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.533677 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.533939 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.533950 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.533965 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.533977 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:56Z","lastTransitionTime":"2026-01-22T09:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.635493 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.635519 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.635528 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.635539 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.635549 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:56Z","lastTransitionTime":"2026-01-22T09:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.737605 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.737651 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.737663 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.737677 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.737688 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:56Z","lastTransitionTime":"2026-01-22T09:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.839193 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.839228 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.839237 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.839251 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.839262 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:56Z","lastTransitionTime":"2026-01-22T09:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.941216 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.941256 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.941266 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.941278 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.941287 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:56Z","lastTransitionTime":"2026-01-22T09:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.991214 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:06:56 crc kubenswrapper[4811]: E0122 09:06:56.991315 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4" Jan 22 09:06:56 crc kubenswrapper[4811]: I0122 09:06:56.997732 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 12:06:02.454715389 +0000 UTC Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.043191 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.043229 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.043238 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.043252 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.043262 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:57Z","lastTransitionTime":"2026-01-22T09:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.145194 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.145226 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.145234 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.145248 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.145258 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:57Z","lastTransitionTime":"2026-01-22T09:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.247513 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.247544 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.247553 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.247564 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.247572 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:57Z","lastTransitionTime":"2026-01-22T09:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.349486 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.349522 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.349530 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.349542 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.349550 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:57Z","lastTransitionTime":"2026-01-22T09:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.397186 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.397221 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.397230 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.397240 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.397248 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:57Z","lastTransitionTime":"2026-01-22T09:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:57 crc kubenswrapper[4811]: E0122 09:06:57.405849 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0066ca33-f035-4af3-9028-0da78d54d55e\\\",\\\"systemUUID\\\":\\\"463fdb35-dd0c-4804-a85e-31cd33c59ce4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:57Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.408117 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.408147 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.408154 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.408165 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.408173 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:57Z","lastTransitionTime":"2026-01-22T09:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:57 crc kubenswrapper[4811]: E0122 09:06:57.416387 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0066ca33-f035-4af3-9028-0da78d54d55e\\\",\\\"systemUUID\\\":\\\"463fdb35-dd0c-4804-a85e-31cd33c59ce4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:57Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.418477 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.418505 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.418515 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.418525 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.418534 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:57Z","lastTransitionTime":"2026-01-22T09:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:57 crc kubenswrapper[4811]: E0122 09:06:57.425963 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0066ca33-f035-4af3-9028-0da78d54d55e\\\",\\\"systemUUID\\\":\\\"463fdb35-dd0c-4804-a85e-31cd33c59ce4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:57Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.428132 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.428161 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.428170 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.428197 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.428206 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:57Z","lastTransitionTime":"2026-01-22T09:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:57 crc kubenswrapper[4811]: E0122 09:06:57.435958 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0066ca33-f035-4af3-9028-0da78d54d55e\\\",\\\"systemUUID\\\":\\\"463fdb35-dd0c-4804-a85e-31cd33c59ce4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:57Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.438273 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.438302 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.438420 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.438437 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.438445 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:57Z","lastTransitionTime":"2026-01-22T09:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:57 crc kubenswrapper[4811]: E0122 09:06:57.446199 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:06:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0066ca33-f035-4af3-9028-0da78d54d55e\\\",\\\"systemUUID\\\":\\\"463fdb35-dd0c-4804-a85e-31cd33c59ce4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:06:57Z is after 2025-08-24T17:21:41Z" Jan 22 09:06:57 crc kubenswrapper[4811]: E0122 09:06:57.446297 4811 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.451174 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.451202 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.451210 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.451347 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.451363 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:57Z","lastTransitionTime":"2026-01-22T09:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.552692 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.552734 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.552743 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.552752 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.552766 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:57Z","lastTransitionTime":"2026-01-22T09:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.654251 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.654299 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.654309 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.654319 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.654327 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:57Z","lastTransitionTime":"2026-01-22T09:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.756130 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.756171 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.756181 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.756196 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.756207 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:57Z","lastTransitionTime":"2026-01-22T09:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.857651 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.857680 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.857687 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.857699 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.857708 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:57Z","lastTransitionTime":"2026-01-22T09:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.959124 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.959143 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.959150 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.959160 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.959167 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:57Z","lastTransitionTime":"2026-01-22T09:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.991161 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:06:57 crc kubenswrapper[4811]: E0122 09:06:57.991249 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.991362 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:06:57 crc kubenswrapper[4811]: E0122 09:06:57.991409 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.991483 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:06:57 crc kubenswrapper[4811]: E0122 09:06:57.991530 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:06:57 crc kubenswrapper[4811]: I0122 09:06:57.998242 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 12:20:10.932553769 +0000 UTC Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.060790 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.060829 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.060838 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.060846 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.060853 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:58Z","lastTransitionTime":"2026-01-22T09:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.162418 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.162446 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.162453 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.162466 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.162474 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:58Z","lastTransitionTime":"2026-01-22T09:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.264282 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.264324 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.264333 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.264344 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.264366 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:58Z","lastTransitionTime":"2026-01-22T09:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.366035 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.366062 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.366070 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.366081 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.366088 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:58Z","lastTransitionTime":"2026-01-22T09:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.468093 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.468113 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.468121 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.468130 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.468138 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:58Z","lastTransitionTime":"2026-01-22T09:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.570065 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.570104 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.570113 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.570124 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.570132 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:58Z","lastTransitionTime":"2026-01-22T09:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.672138 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.672166 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.672175 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.672187 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.672196 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:58Z","lastTransitionTime":"2026-01-22T09:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.773574 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.773618 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.773655 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.773666 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.773673 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:58Z","lastTransitionTime":"2026-01-22T09:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.875156 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.875204 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.875212 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.875223 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.875234 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:58Z","lastTransitionTime":"2026-01-22T09:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.977482 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.977513 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.977522 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.977536 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.977546 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:58Z","lastTransitionTime":"2026-01-22T09:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.991825 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:06:58 crc kubenswrapper[4811]: E0122 09:06:58.991922 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4" Jan 22 09:06:58 crc kubenswrapper[4811]: I0122 09:06:58.999274 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 19:39:16.289500121 +0000 UTC Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.080064 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.080108 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.080118 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.080131 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.080141 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:59Z","lastTransitionTime":"2026-01-22T09:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.181998 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.182035 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.182043 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.182055 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.182063 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:59Z","lastTransitionTime":"2026-01-22T09:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.284421 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.284445 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.284453 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.284462 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.284470 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:59Z","lastTransitionTime":"2026-01-22T09:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.386346 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.386370 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.386377 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.386387 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.386395 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:59Z","lastTransitionTime":"2026-01-22T09:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.488231 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.488272 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.488281 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.488290 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.488297 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:59Z","lastTransitionTime":"2026-01-22T09:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.590090 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.590123 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.590135 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.590145 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.590153 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:59Z","lastTransitionTime":"2026-01-22T09:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.691826 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.691865 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.691875 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.691889 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.691898 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:59Z","lastTransitionTime":"2026-01-22T09:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.793617 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.793674 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.793683 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.793697 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.793709 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:59Z","lastTransitionTime":"2026-01-22T09:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.894978 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.895009 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.895018 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.895033 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.895042 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:59Z","lastTransitionTime":"2026-01-22T09:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.991834 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.991838 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.991840 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:06:59 crc kubenswrapper[4811]: E0122 09:06:59.991913 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:06:59 crc kubenswrapper[4811]: E0122 09:06:59.991981 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:06:59 crc kubenswrapper[4811]: E0122 09:06:59.992335 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
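[Annotation] The condition object that setters.go:603 prints with every "Node became not ready" entry is the node's Ready condition serialized as JSON. A minimal sketch of decoding one of the payloads above with Go's standard library; the struct here is a hand-written stand-in for the upstream v1.NodeCondition type, not an import of it:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// NodeCondition mirrors only the fields visible in the logged JSON.
type NodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Condition payload copied verbatim from a setters.go:603 entry above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:59Z","lastTransitionTime":"2026-01-22T09:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`

	var c NodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("node Ready=%s reason=%s\n", c.Status, c.Reason)
}
```

The same Ready=False condition is re-recorded roughly every 100ms below until the network plugin comes up, which is why the five-event block (NodeHasSufficientMemory ... NodeNotReady) repeats throughout this window.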
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.992544 4811 scope.go:117] "RemoveContainer" containerID="454af35caa860192c078a3fb4028bf2c4544607849290f3bd427688821757b2f" Jan 22 09:06:59 crc kubenswrapper[4811]: E0122 09:06:59.992807 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-274vf_openshift-ovn-kubernetes(1cd0f0db-de53-47c0-9b45-2ce8b37392a3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.996836 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.996895 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.996905 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.996916 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:06:59 crc kubenswrapper[4811]: I0122 09:06:59.996924 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:06:59Z","lastTransitionTime":"2026-01-22T09:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:00 crc kubenswrapper[4811]: I0122 09:07:00.000045 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 22:07:44.285986402 +0000 UTC Jan 22 09:07:00 crc kubenswrapper[4811]: I0122 09:07:00.098674 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:00 crc kubenswrapper[4811]: I0122 09:07:00.098702 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:00 crc kubenswrapper[4811]: I0122 09:07:00.098710 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:00 crc kubenswrapper[4811]: I0122 09:07:00.098721 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:00 crc kubenswrapper[4811]: I0122 09:07:00.098732 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:00Z","lastTransitionTime":"2026-01-22T09:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:00 crc kubenswrapper[4811]: I0122 09:07:00.201143 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:00 crc kubenswrapper[4811]: I0122 09:07:00.201170 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:00 crc kubenswrapper[4811]: I0122 09:07:00.201180 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:00 crc kubenswrapper[4811]: I0122 09:07:00.201195 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:00 crc kubenswrapper[4811]: I0122 09:07:00.201204 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:00Z","lastTransitionTime":"2026-01-22T09:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:00 crc kubenswrapper[4811]: I0122 09:07:00.302498 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:00 crc kubenswrapper[4811]: I0122 09:07:00.302532 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:00 crc kubenswrapper[4811]: I0122 09:07:00.302542 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:00 crc kubenswrapper[4811]: I0122 09:07:00.302553 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:00 crc kubenswrapper[4811]: I0122 09:07:00.302560 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:00Z","lastTransitionTime":"2026-01-22T09:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:00 crc kubenswrapper[4811]: I0122 09:07:00.403976 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:00 crc kubenswrapper[4811]: I0122 09:07:00.403997 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:00 crc kubenswrapper[4811]: I0122 09:07:00.404005 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:00 crc kubenswrapper[4811]: I0122 09:07:00.404015 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:00 crc kubenswrapper[4811]: I0122 09:07:00.404022 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:00Z","lastTransitionTime":"2026-01-22T09:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:00 crc kubenswrapper[4811]: I0122 09:07:00.505486 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:00 crc kubenswrapper[4811]: I0122 09:07:00.505514 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:00 crc kubenswrapper[4811]: I0122 09:07:00.505531 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:00 crc kubenswrapper[4811]: I0122 09:07:00.505541 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:00 crc kubenswrapper[4811]: I0122 09:07:00.505549 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:00Z","lastTransitionTime":"2026-01-22T09:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:00 crc kubenswrapper[4811]: I0122 09:07:00.607411 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:00 crc kubenswrapper[4811]: I0122 09:07:00.607436 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:00 crc kubenswrapper[4811]: I0122 09:07:00.607447 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:00 crc kubenswrapper[4811]: I0122 09:07:00.607456 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:00 crc kubenswrapper[4811]: I0122 09:07:00.607463 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:00Z","lastTransitionTime":"2026-01-22T09:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:00 crc kubenswrapper[4811]: I0122 09:07:00.708910 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:00 crc kubenswrapper[4811]: I0122 09:07:00.708929 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:00 crc kubenswrapper[4811]: I0122 09:07:00.708936 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:00 crc kubenswrapper[4811]: I0122 09:07:00.708944 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:00 crc kubenswrapper[4811]: I0122 09:07:00.708952 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:00Z","lastTransitionTime":"2026-01-22T09:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:00 crc kubenswrapper[4811]: I0122 09:07:00.810349 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:00 crc kubenswrapper[4811]: I0122 09:07:00.810369 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:00 crc kubenswrapper[4811]: I0122 09:07:00.810377 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:00 crc kubenswrapper[4811]: I0122 09:07:00.810386 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:00 crc kubenswrapper[4811]: I0122 09:07:00.810393 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:00Z","lastTransitionTime":"2026-01-22T09:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:00 crc kubenswrapper[4811]: I0122 09:07:00.911535 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:00 crc kubenswrapper[4811]: I0122 09:07:00.911557 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:00 crc kubenswrapper[4811]: I0122 09:07:00.911565 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:00 crc kubenswrapper[4811]: I0122 09:07:00.911574 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:00 crc kubenswrapper[4811]: I0122 09:07:00.911581 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:00Z","lastTransitionTime":"2026-01-22T09:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:00 crc kubenswrapper[4811]: I0122 09:07:00.990989 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:07:00 crc kubenswrapper[4811]: E0122 09:07:00.991070 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
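[Annotation] The recurring "no CNI configuration file in /etc/kubernetes/cni/net.d/" message means the container runtime's network plugin found that directory empty; once the OVN-Kubernetes node agent writes its config there, NetworkReady flips to true and the sandbox creations above can proceed. A rough stand-alone check for the same condition — the extension list is an assumption based on common CNI conventions, not lifted from the kubelet source:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	const dir = "/etc/kubernetes/cni/net.d" // directory named in the log message
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read CNI config dir:", err)
		return
	}
	found := false
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // typical CNI config file extensions
			fmt.Println("CNI config present:", e.Name())
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI configuration file found; network provider likely not started")
	}
}
```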
pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4" Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.000718 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 00:59:50.600968251 +0000 UTC Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.013115 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.013135 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.013142 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.013150 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.013157 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:01Z","lastTransitionTime":"2026-01-22T09:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.115446 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.115488 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.115498 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.115510 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.115518 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:01Z","lastTransitionTime":"2026-01-22T09:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.218105 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.218149 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.218158 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.218174 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.218185 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:01Z","lastTransitionTime":"2026-01-22T09:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.319371 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.319402 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.319411 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.319422 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.319430 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:01Z","lastTransitionTime":"2026-01-22T09:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.421179 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.421208 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.421216 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.421228 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.421239 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:01Z","lastTransitionTime":"2026-01-22T09:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.523557 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.523593 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.523603 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.523618 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.523646 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:01Z","lastTransitionTime":"2026-01-22T09:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.626146 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.626894 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.626906 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.626918 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.626926 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:01Z","lastTransitionTime":"2026-01-22T09:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.728848 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.728870 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.728880 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.728897 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.728913 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:01Z","lastTransitionTime":"2026-01-22T09:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.830326 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.830360 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.830369 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.830381 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.830393 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:01Z","lastTransitionTime":"2026-01-22T09:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.931543 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.931571 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.931579 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.931589 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.931597 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:01Z","lastTransitionTime":"2026-01-22T09:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.991481 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.991717 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:07:01 crc kubenswrapper[4811]: E0122 09:07:01.991788 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
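[Annotation] The certificate_manager.go:356 entries in this window report a different rotation deadline on every pass (2025-12-18, 2025-11-24, 2025-11-11, ...), and several of them lie in the past. That is expected behavior, not clock skew: the client-go certificate manager re-jitters the deadline each time it evaluates it, picking a random point late in the certificate's validity window, and a deadline already behind the current time simply means rotation is due immediately. A sketch of that jitter under the assumption of a 70–90% band (the exact band is a client-go implementation detail, quoted from memory):

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random instant in the 70-90% band of the
// certificate's lifetime, in the spirit of client-go's certificate
// manager. The 0.7/0.2 constants are assumptions, not copied source.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z") // expiry from the log
	notBefore := notAfter.Add(-365 * 24 * time.Hour)                // assumed one-year validity

	for i := 0; i < 3; i++ {
		// Each evaluation yields a different deadline, as in the log.
		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
	}
}
```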
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:07:01 crc kubenswrapper[4811]: E0122 09:07:01.991905 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:07:01 crc kubenswrapper[4811]: I0122 09:07:01.991957 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:07:01 crc kubenswrapper[4811]: E0122 09:07:01.992016 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.001184 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 22:44:06.682888402 +0000 UTC Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.033568 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.033589 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.033597 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.033607 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.033614 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:02Z","lastTransitionTime":"2026-01-22T09:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.135322 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.135353 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.135363 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.135374 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.135381 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:02Z","lastTransitionTime":"2026-01-22T09:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.237168 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.237195 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.237205 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.237217 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.237227 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:02Z","lastTransitionTime":"2026-01-22T09:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.339192 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.339223 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.339231 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.339241 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.339249 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:02Z","lastTransitionTime":"2026-01-22T09:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.442509 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.442536 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.442545 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.442556 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.442564 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:02Z","lastTransitionTime":"2026-01-22T09:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.544043 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.544076 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.544084 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.544093 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.544119 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:02Z","lastTransitionTime":"2026-01-22T09:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.645403 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.645439 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.645448 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.645461 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.645470 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:02Z","lastTransitionTime":"2026-01-22T09:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.746979 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.747003 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.747013 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.747022 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.747032 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:02Z","lastTransitionTime":"2026-01-22T09:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.848741 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.848787 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.848799 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.848812 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.848823 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:02Z","lastTransitionTime":"2026-01-22T09:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.950600 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.950720 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.950809 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.950870 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.950930 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:02Z","lastTransitionTime":"2026-01-22T09:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:02 crc kubenswrapper[4811]: I0122 09:07:02.991005 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:07:02 crc kubenswrapper[4811]: E0122 09:07:02.991115 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4" Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.002171 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 12:57:52.499245828 +0000 UTC Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.052307 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.052410 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.052468 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.052525 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.052581 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:03Z","lastTransitionTime":"2026-01-22T09:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.154150 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.154176 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.154184 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.154195 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.154203 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:03Z","lastTransitionTime":"2026-01-22T09:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.240144 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de4b38a0-0c7a-4693-9f92-40fefd6bc9b4-metrics-certs\") pod \"network-metrics-daemon-bhj4l\" (UID: \"de4b38a0-0c7a-4693-9f92-40fefd6bc9b4\") " pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:07:03 crc kubenswrapper[4811]: E0122 09:07:03.240300 4811 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 09:07:03 crc kubenswrapper[4811]: E0122 09:07:03.240374 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de4b38a0-0c7a-4693-9f92-40fefd6bc9b4-metrics-certs podName:de4b38a0-0c7a-4693-9f92-40fefd6bc9b4 nodeName:}" failed. No retries permitted until 2026-01-22 09:07:35.240345784 +0000 UTC m=+99.562532907 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/de4b38a0-0c7a-4693-9f92-40fefd6bc9b4-metrics-certs") pod "network-metrics-daemon-bhj4l" (UID: "de4b38a0-0c7a-4693-9f92-40fefd6bc9b4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.255380 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.255425 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.255436 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.255445 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.255453 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:03Z","lastTransitionTime":"2026-01-22T09:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.356708 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.356735 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.356744 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.356754 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.356772 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:03Z","lastTransitionTime":"2026-01-22T09:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.458869 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.458903 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.458913 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.458922 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.458930 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:03Z","lastTransitionTime":"2026-01-22T09:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.560490 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.560519 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.560527 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.560537 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.560544 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:03Z","lastTransitionTime":"2026-01-22T09:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.662398 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.662434 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.662444 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.662460 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.662471 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:03Z","lastTransitionTime":"2026-01-22T09:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.764366 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.764399 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.764408 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.764421 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.764431 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:03Z","lastTransitionTime":"2026-01-22T09:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.866065 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.866086 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.866096 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.866107 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.866142 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:03Z","lastTransitionTime":"2026-01-22T09:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.968278 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.968306 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.968315 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.968330 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.968355 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:03Z","lastTransitionTime":"2026-01-22T09:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.991827 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.991861 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:07:03 crc kubenswrapper[4811]: I0122 09:07:03.991876 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:07:03 crc kubenswrapper[4811]: E0122 09:07:03.991907 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:07:03 crc kubenswrapper[4811]: E0122 09:07:03.991958 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:07:03 crc kubenswrapper[4811]: E0122 09:07:03.992022 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
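[Annotation] Both failure paths in this window back off exponentially: the ovnkube-controller restart at 09:06:59 is at "back-off 20s" (CrashLoopBackOff doubles the restart delay toward a cap), and the metrics-certs mount at 09:07:03 reports "durationBeforeRetry 32s" with no retries permitted until 09:07:35. A generic sketch of that doubling; the initial and cap values are chosen to reproduce the figures visible in the log rather than read out of the kubelet source:

```go
package main

import (
	"fmt"
	"time"
)

// nextBackoff doubles the previous delay up to a cap: the pattern behind
// both CrashLoopBackOff ("back-off 20s") and the volume retry delay
// ("durationBeforeRetry 32s") seen above.
func nextBackoff(prev, initial, cap time.Duration) time.Duration {
	if prev == 0 {
		return initial
	}
	next := prev * 2
	if next > cap {
		return cap
	}
	return next
}

func main() {
	// Assumed parameters: 500ms initial with a ~2m cap for volume retries.
	var d time.Duration
	for i := 1; i <= 8; i++ {
		d = nextBackoff(d, 500*time.Millisecond, 2*time.Minute)
		fmt.Printf("volume retry %d after: %v\n", i, d) // 32s appears on the 7th failure
	}
}
```

Under these assumed parameters the seventh consecutive mount failure lands on the 32s delay logged by nestedpendingoperations.go:348, which is consistent with the secret having been unavailable since kubelet startup.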
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.002663 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 06:54:47.731456962 +0000 UTC Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.073296 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.073352 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.073364 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.073374 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.073385 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:04Z","lastTransitionTime":"2026-01-22T09:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.175316 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.175349 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.175361 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.175388 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.175397 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:04Z","lastTransitionTime":"2026-01-22T09:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.277134 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.277163 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.277172 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.277182 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.277190 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:04Z","lastTransitionTime":"2026-01-22T09:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.379119 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.379151 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.379162 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.379175 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.379184 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:04Z","lastTransitionTime":"2026-01-22T09:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.481388 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.481449 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.481461 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.481471 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.481479 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:04Z","lastTransitionTime":"2026-01-22T09:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.582955 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.582984 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.582993 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.583003 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.583010 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:04Z","lastTransitionTime":"2026-01-22T09:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.684799 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.684830 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.684839 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.684850 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.684860 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:04Z","lastTransitionTime":"2026-01-22T09:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.786406 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.786428 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.786436 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.786447 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.786454 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:04Z","lastTransitionTime":"2026-01-22T09:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.888222 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.888403 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.888514 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.888612 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.888686 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:04Z","lastTransitionTime":"2026-01-22T09:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.990318 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.990355 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.990366 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.990379 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.990388 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:04Z","lastTransitionTime":"2026-01-22T09:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:04 crc kubenswrapper[4811]: I0122 09:07:04.991462 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:07:04 crc kubenswrapper[4811]: E0122 09:07:04.991564 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.003184 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 12:56:54.296984478 +0000 UTC Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.091679 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.091706 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.091715 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.091727 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.091736 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:05Z","lastTransitionTime":"2026-01-22T09:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.193337 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.193471 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.193545 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.193615 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.193708 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:05Z","lastTransitionTime":"2026-01-22T09:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.295335 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.295460 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.295519 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.295581 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.295653 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:05Z","lastTransitionTime":"2026-01-22T09:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.304282 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kfqgt_f2555861-d1bb-4f21-be4a-165ed9212932/kube-multus/0.log" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.304392 4811 generic.go:334] "Generic (PLEG): container finished" podID="f2555861-d1bb-4f21-be4a-165ed9212932" containerID="6ececfc4b2e4f2a120e7a3307235d606db00733e1a5631b6e19b5312afc8e8ff" exitCode=1 Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.304468 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kfqgt" event={"ID":"f2555861-d1bb-4f21-be4a-165ed9212932","Type":"ContainerDied","Data":"6ececfc4b2e4f2a120e7a3307235d606db00733e1a5631b6e19b5312afc8e8ff"} Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.304797 4811 scope.go:117] "RemoveContainer" containerID="6ececfc4b2e4f2a120e7a3307235d606db00733e1a5631b6e19b5312afc8e8ff" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.314764 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5dbe63-08fa-4157-9c52-003944e50d7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3bf6cdac4751814ecd71637d9e80cbdda1c0e99714275fe77cfa858b3823b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4419625f66ae957b2d9b146fc8b085a4a95c3f47860d6b4847f8fe71a6c4c86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7830976d23e073a099c3c6af1fa2392d9504ad9d41ddf2ee63c16632c22f833a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45fa5d37dcd1dcae1a7488bb855e4bd7c7dc39a1199fe80b82f83604be089f2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:05Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.324695 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a14bfcc7d6d2ab5e1ad0c45999b2e73332e750ed65d9299e188505dafb26d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:05Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.333345 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae2e22607dec646b15563dc62f66655e7a5c2b7ad809406c907e5f687f80105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e290e203fab8f291146855a960477ceb9c33d965d15b4722c668dc2b3fded3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:05Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.346087 4811 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-274vf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://454af35caa860192c078a3fb4028bf2c4544607849290f3bd427688821757b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://454af35caa860192c078a3fb4028bf2c4544607849290f3bd427688821757b2f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:06:42Z\\\",\\\"message\\\":\\\"ing service etcd on namespace openshift-etcd for network=default : 1.153957ms\\\\nI0122 09:06:42.648545 6404 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager/controller-manager]} name:Service_openshift-controller-manager/controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0122 09:06:42.648515 6404 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-ingress-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"d8772e82-b0a4-4596-87d3-3d517c13344b\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]stri\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-274vf_openshift-ovn-kubernetes(1cd0f0db-de53-47c0-9b45-2ce8b37392a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-274vf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:05Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.355451 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n2kj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66a7f3f9-ab88-4b1a-a28a-900480fa9651\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3a88defded9a5435941e2608c8a29baee840bc0ce48454a3731f344299347cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47mj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n2kj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:05Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.364390 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95b8b8c1-880c-4910-8387-3ff87b534859\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1a6925ee11245a252465e1a76bf8246c142164097c9c35f3467ab3d1650bc32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78e58b4688b2704d8a02a27ef452d900419711bd51a2d64b9c05de7d3a02ffbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d5bfb65ec0c94b865b39227fa43cb243e05b615b8b6c8b2ce289357eb5488b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440
c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b13581fc07c90a3dba27e04a1079d8a678635bf14c0d5463cd7b337f0f40a4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b13581fc07c90a3dba27e04a1079d8a678635bf14c0d5463cd7b337f0f40a4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:05Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.371458 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f776b1bc70646779cf35092ded02c00aa3b990c873de09d21e1f0d76258b0b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:05Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.379812 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gwnx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"153efd1a-2c09-4c49-94e1-2307bbf1e659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71a7748f45c9a14885785ca825c32a5036a887c7a54b36cadd1a89e57608111f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nklsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce971b4fbeb72e1a32c0adab8e46cb8bce2ff9736cd875f63e105bf39dcc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nklsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gwnx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:05Z is after 2025-08-24T17:21:41Z" Jan 22 
09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.394305 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1d28a-4ee8-45bb-9a86-3687d8db2105\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684f8ca14b2eabcbf9280085d495af9a6f1fa76fb187f15b8fa5659178efdc93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64382d1748f1eaeb9f2f09d243b35c56f9fd70b9a9765ddb64cb0c4866ed493b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://590bb419e3791ec3a267b9a5aa4022ee74ba5e302b047e85211563e135c4cf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b14bb77b0cf24bacdb8f5edae5d02ccea019b12feb315e6d6c4887feb1a9354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d10794bea5b8ea94b9247775e0c424fa9d481267c663d4bcae16eb4e7eb729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:05Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.397132 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.397160 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.397170 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.397182 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.397191 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:05Z","lastTransitionTime":"2026-01-22T09:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.402265 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:05Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.412002 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfqgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2555861-d1bb-4f21-be4a-165ed9212932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ececfc4b2e4f2a120e7a3307235d606db00733e1a5631b6e19b5312afc8e8ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ececfc4b2e4f2a120e7a3307235d606db00733e1a5631b6e19b5312afc8e8ff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:07:04Z\\\",\\\"message\\\":\\\"2026-01-22T09:06:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2fe27a58-2281-4698-9763-8c4890d2a18e\\\\n2026-01-22T09:06:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2fe27a58-2281-4698-9763-8c4890d2a18e to /host/opt/cni/bin/\\\\n2026-01-22T09:06:19Z [verbose] multus-daemon started\\\\n2026-01-22T09:06:19Z [verbose] Readiness Indicator file check\\\\n2026-01-22T09:07:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jt7cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfqgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:05Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.423413 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b44c40-e6d4-4902-98e9-a259269d8bf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 09:06:13.075349 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 09:06:13.075574 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:06:13.077171 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744166093/tls.crt::/tmp/serving-cert-1744166093/tls.key\\\\\\\"\\\\nI0122 09:06:13.324400 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:06:13.330700 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:06:13.330724 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:06:13.330758 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:06:13.330771 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:06:13.337838 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:06:13.337866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:06:13.337879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:06:13.337883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:06:13.337886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:06:13.338151 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:06:13.339610 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:05Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.431605 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:05Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.440357 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:05Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.449269 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xhhs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42dc1a6-d0c4-43e4-b9d9-b40c1f910400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64d7f1d53f9a3d9616b76ea12902134c5e0a3a30c9a61113675074caae70d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcfzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xhhs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T09:07:05Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.459575 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9g4j8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://279ca4748f2c8229a6647c23a414668a7612c0c863b52ef33c5b3dd026de74ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbf
dab3fdc5c2386c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-
copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9g4j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:05Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.467435 4811 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84068a6b-e189-419b-87f5-f31428f6eafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde8418c9f22553519ec0a9ba1ddcdd553f64aafb44fa002e924499701236bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbb067d04c1683ffd06d01aab41bb5988fdca44b8bd16012d35c42795d0d8011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-txvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:05Z is after 2025-08-24T17:21:41Z" Jan 
22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.474398 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bhj4l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4b38a0-0c7a-4693-9f92-40fefd6bc9b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mphcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mphcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bhj4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:05Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.498525 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.498557 4811 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.498567 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.498583 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.498593 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:05Z","lastTransitionTime":"2026-01-22T09:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.600474 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.600509 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.600518 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.600529 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.600536 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:05Z","lastTransitionTime":"2026-01-22T09:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.702857 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.702892 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.702901 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.702924 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.702935 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:05Z","lastTransitionTime":"2026-01-22T09:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.805364 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.805393 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.805402 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.805414 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.805422 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:05Z","lastTransitionTime":"2026-01-22T09:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.906677 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.906768 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.906845 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.906907 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.906962 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:05Z","lastTransitionTime":"2026-01-22T09:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.991420 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:07:05 crc kubenswrapper[4811]: E0122 09:07:05.991553 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.991609 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:07:05 crc kubenswrapper[4811]: E0122 09:07:05.991702 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:07:05 crc kubenswrapper[4811]: I0122 09:07:05.991615 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:07:05 crc kubenswrapper[4811]: E0122 09:07:05.991807 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.002125 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95b8b8c1-880c-4910-8387-3ff87b534859\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1a6925ee11245a252465e1a76bf8246c142164097c9c35f3467ab3d1650bc32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78e58b4688b2704d8a02a27ef452d900419711bd51a2d64b9c05de7d3a02ffbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d5bfb65ec0c94b865b39227fa43cb243e05b615b8b6c8b2ce289357eb5488b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b13581fc07c90a3dba27e04a1079d8a678635bf14c0d5463cd7b337f0f40a4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b13581fc07c90a3dba27e04a1079d8a678635bf14c0d5463cd7b337f0f40a4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.003755 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 14:59:48.449661741 +0000 UTC Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.009103 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.009148 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.009157 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.009168 4811 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.009176 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:06Z","lastTransitionTime":"2026-01-22T09:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.011724 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f776b1bc70646779cf35092ded02c00aa3b990c873de09d21e1f0d76258b0b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.019132 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gwnx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"153efd1a-2c09-4c49-94e1-2307bbf1e659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71a7748f45c9a14885785ca825c32a5036a887c7a54b36cadd1a89e57608111f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nklsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce971b4fbeb72e1a32c0adab8e46cb8bce2ff9736cd875f63e105bf39dcc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nklsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gwnx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:06Z is after 2025-08-24T17:21:41Z" Jan 22 
09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.032537 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1d28a-4ee8-45bb-9a86-3687d8db2105\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684f8ca14b2eabcbf9280085d495af9a6f1fa76fb187f15b8fa5659178efdc93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64382d1748f1eaeb9f2f09d243b35c56f9fd70b9a9765ddb64cb0c4866ed493b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://590bb419e3791ec3a267b9a5aa4022ee74ba5e302b047e85211563e135c4cf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b14bb77b0cf24bacdb8f5edae5d02ccea019b12feb315e6d6c4887feb1a9354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d10794bea5b8ea94b9247775e0c424fa9d481267c663d4bcae16eb4e7eb729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.040115 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.049100 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfqgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2555861-d1bb-4f21-be4a-165ed9212932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:07:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ececfc4b2e4f2a120e7a3307235d606db00733e1a5631b6e19b5312afc8e8ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ececfc4b2e4f2a120e7a3307235d606db00733e1a5631b6e19b5312afc8e8ff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:07:04Z\\\",\\\"message\\\":\\\"2026-01-22T09:06:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2fe27a58-2281-4698-9763-8c4890d2a18e\\\\n2026-01-22T09:06:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2fe27a58-2281-4698-9763-8c4890d2a18e to /host/opt/cni/bin/\\\\n2026-01-22T09:06:19Z [verbose] multus-daemon 
started\\\\n2026-01-22T09:06:19Z [verbose] Readiness Indicator file check\\\\n2026-01-22T09:07:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jt7cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfqgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.062125 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84068a6b-e189-419b-87f5-f31428f6eafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde8418c9f22553519ec0a9ba1ddcdd553f64aafb44fa002e924499701236bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbb067d04c1683ffd06d01aab41bb5988fdca44b8bd16012d35c42795d0d8011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-txvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.070703 4811 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-bhj4l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4b38a0-0c7a-4693-9f92-40fefd6bc9b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mphcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mphcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bhj4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.079454 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b44c40-e6d4-4902-98e9-a259269d8bf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 09:06:13.075349 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 09:06:13.075574 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:06:13.077171 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744166093/tls.crt::/tmp/serving-cert-1744166093/tls.key\\\\\\\"\\\\nI0122 09:06:13.324400 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:06:13.330700 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:06:13.330724 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:06:13.330758 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:06:13.330771 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:06:13.337838 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:06:13.337866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:06:13.337879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:06:13.337883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:06:13.337886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:06:13.338151 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:06:13.339610 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.087704 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.096334 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.111104 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.111131 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.111139 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.111149 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.111157 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:06Z","lastTransitionTime":"2026-01-22T09:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.113722 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xhhs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42dc1a6-d0c4-43e4-b9d9-b40c1f910400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64d7f1d53f9a3d9616b76ea12902134c5e0a3a30c9a61113675074caae70d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcfzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xhhs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.127379 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9g4j8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://279ca4748f2c8229a6647c23a414668a7612c0c863b52ef33c5b3dd026de74ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9g4j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.136291 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n2kj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66a7f3f9-ab88-4b1a-a28a-900480fa9651\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3a88defded9a5435941e2608c8a29baee840bc0ce48454a3731f344299347cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47mj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n2kj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.153270 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5dbe63-08fa-4157-9c52-003944e50d7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3bf6cdac4751814ecd71637d9e80cbdda1c0e99714275fe77cfa858b3823b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4419625f66ae957b2d9b146fc8b085a4a95c3f47860d6b4847f8fe71a6c4c86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7830976d23e073a099c3c6af1fa2392d9504ad9d41ddf2ee63c16632c22f833a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45fa5d37dcd1dcae1a7488bb855e4bd7c7dc39a1199fe80b82f83604be089f2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.167277 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a14bfcc7d6d2ab5e1ad0c45999b2e73332e750ed65d9299e188505dafb26d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.175787 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae2e22607dec646b15563dc62f66655e7a5c2b7ad809406c907e5f687f80105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e290e203fab8f291146855a960477ceb9c33d965d15b4722c668dc2b3fded3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.187970 4811 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-274vf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://454af35caa860192c078a3fb4028bf2c4544607849290f3bd427688821757b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://454af35caa860192c078a3fb4028bf2c4544607849290f3bd427688821757b2f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:06:42Z\\\",\\\"message\\\":\\\"ing service etcd on namespace openshift-etcd for network=default : 1.153957ms\\\\nI0122 09:06:42.648545 6404 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager/controller-manager]} name:Service_openshift-controller-manager/controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0122 09:06:42.648515 6404 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-ingress-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"d8772e82-b0a4-4596-87d3-3d517c13344b\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]stri\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-274vf_openshift-ovn-kubernetes(1cd0f0db-de53-47c0-9b45-2ce8b37392a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-274vf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.213514 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.213543 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.213554 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.213786 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.213796 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:06Z","lastTransitionTime":"2026-01-22T09:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.308895 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kfqgt_f2555861-d1bb-4f21-be4a-165ed9212932/kube-multus/0.log" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.308939 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kfqgt" event={"ID":"f2555861-d1bb-4f21-be4a-165ed9212932","Type":"ContainerStarted","Data":"c2e4b2026355f189cbe6a2d70999613aa1b5868f0b38c25e834e42eda1b41088"} Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.315075 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.315117 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.315128 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.315142 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.315151 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:06Z","lastTransitionTime":"2026-01-22T09:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.322344 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1d28a-4ee8-45bb-9a86-3687d8db2105\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684f8ca14b2eabcbf9280085d495af9a6f1fa76fb187f15b8fa5659178efdc93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64382d1748f1eaeb9f2f09d243b35c56f9fd70b9a9765ddb64cb0c4866ed493b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://590bb419e3791ec3a267b9a5aa4022ee74ba5e302b047e85211563e135c4cf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b14bb77b0cf24bacdb8f5edae5d02ccea019b1
2feb315e6d6c4887feb1a9354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d10794bea5b8ea94b9247775e0c424fa9d481267c663d4bcae16eb4e7eb729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.330702 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.341558 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfqgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2555861-d1bb-4f21-be4a-165ed9212932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2e4b2026355f189cbe6a2d70999613aa1b5868f0b38c25e834e42eda1b41088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ececfc4b2e4f2a120e7a3307235d606db00733e1a5631b6e19b5312afc8e8ff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:07:04Z\\\",\\\"message\\\":\\\"2026-01-22T09:06:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2fe27a58-2281-4698-9763-8c4890d2a18e\\\\n2026-01-22T09:06:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2fe27a58-2281-4698-9763-8c4890d2a18e to /host/opt/cni/bin/\\\\n2026-01-22T09:06:19Z [verbose] multus-daemon started\\\\n2026-01-22T09:06:19Z [verbose] Readiness Indicator file check\\\\n2026-01-22T09:07:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jt7cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfqgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.350619 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b44c40-e6d4-4902-98e9-a259269d8bf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 09:06:13.075349 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 09:06:13.075574 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:06:13.077171 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744166093/tls.crt::/tmp/serving-cert-1744166093/tls.key\\\\\\\"\\\\nI0122 09:06:13.324400 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:06:13.330700 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:06:13.330724 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:06:13.330758 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:06:13.330771 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:06:13.337838 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:06:13.337866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:06:13.337879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:06:13.337883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:06:13.337886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:06:13.338151 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:06:13.339610 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.358840 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.367088 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.373437 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xhhs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42dc1a6-d0c4-43e4-b9d9-b40c1f910400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64d7f1d53f9a3d9616b76ea12902134c5e0a3a30c9a61113675074caae70d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcfzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xhhs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T09:07:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.383002 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9g4j8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://279ca4748f2c8229a6647c23a414668a7612c0c863b52ef33c5b3dd026de74ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbf
dab3fdc5c2386c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-
copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9g4j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.390926 4811 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84068a6b-e189-419b-87f5-f31428f6eafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde8418c9f22553519ec0a9ba1ddcdd553f64aafb44fa002e924499701236bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbb067d04c1683ffd06d01aab41bb5988fdca44b8bd16012d35c42795d0d8011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-txvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:06Z is after 2025-08-24T17:21:41Z" Jan 
22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.398058 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bhj4l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4b38a0-0c7a-4693-9f92-40fefd6bc9b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mphcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mphcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bhj4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.408056 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5dbe63-08fa-4157-9c52-003944e50d7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3bf6cdac4751814ecd71637d9e80cbdda1c0e99714275fe77cfa858b3823b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4419625f66ae957b2d9b146fc8b085a4a95c3f47860d6b4847f8fe71a6c4c86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7830976d23e073a099c3c6af1fa2392d9504ad9d41ddf2ee63c16632c22f833a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45fa5d37dcd1dcae1a7488bb855e4bd7c7dc39a1199fe80b82f83604be089f2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:06Z is after 2025-08-24T17:21:41Z"
Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.416523 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.416551 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.416560 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.416572 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.416580 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:06Z","lastTransitionTime":"2026-01-22T09:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.416736 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a14bfcc7d6d2ab5e1ad0c45999b2e73332e750ed65d9299e188505dafb26d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:06Z is after 2025-08-24T17:21:41Z"
Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.424282 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae2e22607dec646b15563dc62f66655e7a5c2b7ad809406c907e5f687f80105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e290e203fab8f291146855a960477ceb9c33d965d15b4722c668dc2b3fded3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.435517 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://454af35caa860192c078a3fb4028bf2c4544607849290f3bd427688821757b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://454af35caa860192c078a3fb4028bf2c4544607849290f3bd427688821757b2f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:06:42Z\\\",\\\"message\\\":\\\"ing service etcd on namespace openshift-etcd for network=default : 1.153957ms\\\\nI0122 09:06:42.648545 6404 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager/controller-manager]} name:Service_openshift-controller-manager/controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0122 09:06:42.648515 6404 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-ingress-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"d8772e82-b0a4-4596-87d3-3d517c13344b\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]stri\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-274vf_openshift-ovn-kubernetes(1cd0f0db-de53-47c0-9b45-2ce8b37392a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-274vf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.441819 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n2kj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66a7f3f9-ab88-4b1a-a28a-900480fa9651\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3a88defded9a5435941e2608c8a29baee840bc0ce48454a3731f344299347cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47mj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n2kj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.448500 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95b8b8c1-880c-4910-8387-3ff87b534859\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1a6925ee11245a252465e1a76bf8246c142164097c9c35f3467ab3d1650bc32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78e58b4688b2704d8a02a27ef452d900419711bd51a2d64b9c05de7d3a02ffbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d5bfb65ec0c94b865b39227fa43cb243e05b615b8b6c8b2ce289357eb5488b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440
c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b13581fc07c90a3dba27e04a1079d8a678635bf14c0d5463cd7b337f0f40a4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b13581fc07c90a3dba27e04a1079d8a678635bf14c0d5463cd7b337f0f40a4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.455247 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f776b1bc70646779cf35092ded02c00aa3b990c873de09d21e1f0d76258b0b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:06Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.462026 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gwnx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"153efd1a-2c09-4c49-94e1-2307bbf1e659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71a7748f45c9a14885785ca825c32a5036a887c7a54b36cadd1a89e57608111f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nklsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce971b4fbeb72e1a32c0adab8e46cb8bce2ff9736cd875f63e105bf39dcc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nklsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gwnx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:06Z is after 2025-08-24T17:21:41Z" Jan 22 
09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.518787 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.518813 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.518822 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.518854 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.518865 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:06Z","lastTransitionTime":"2026-01-22T09:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.621064 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.621112 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.621121 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.621131 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.621140 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:06Z","lastTransitionTime":"2026-01-22T09:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.722428 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.722462 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.722475 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.722490 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.722501 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:06Z","lastTransitionTime":"2026-01-22T09:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.823994 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.824286 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.824384 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.824461 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.824611 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:06Z","lastTransitionTime":"2026-01-22T09:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.926502 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.926547 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.926557 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.926573 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.926584 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:06Z","lastTransitionTime":"2026-01-22T09:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:06 crc kubenswrapper[4811]: I0122 09:07:06.991346 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:07:06 crc kubenswrapper[4811]: E0122 09:07:06.991504 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.004519 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 02:37:25.104698323 +0000 UTC Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.028323 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.028351 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.028361 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.028372 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.028380 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:07Z","lastTransitionTime":"2026-01-22T09:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.130473 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.130501 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.130509 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.130539 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.130547 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:07Z","lastTransitionTime":"2026-01-22T09:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.232343 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.232515 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.232582 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.232654 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.232708 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:07Z","lastTransitionTime":"2026-01-22T09:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.334958 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.334987 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.334996 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.335008 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.335017 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:07Z","lastTransitionTime":"2026-01-22T09:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.436706 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.436864 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.436935 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.436995 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.437049 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:07Z","lastTransitionTime":"2026-01-22T09:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.474593 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.474617 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.474657 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.474669 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.474677 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:07Z","lastTransitionTime":"2026-01-22T09:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:07 crc kubenswrapper[4811]: E0122 09:07:07.482959 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:07:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:07:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:07:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:07:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0066ca33-f035-4af3-9028-0da78d54d55e\\\",\\\"systemUUID\\\":\\\"463fdb35-dd0c-4804-a85e-31cd33c59ce4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:07Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.485115 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.485133 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.485142 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.485151 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.485159 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:07Z","lastTransitionTime":"2026-01-22T09:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:07 crc kubenswrapper[4811]: E0122 09:07:07.492614 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:07:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:07:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:07:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:07:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0066ca33-f035-4af3-9028-0da78d54d55e\\\",\\\"systemUUID\\\":\\\"463fdb35-dd0c-4804-a85e-31cd33c59ce4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:07Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.494892 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.494913 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.494922 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.494931 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.494937 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:07Z","lastTransitionTime":"2026-01-22T09:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:07 crc kubenswrapper[4811]: E0122 09:07:07.502245 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:07:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:07:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:07:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:07:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0066ca33-f035-4af3-9028-0da78d54d55e\\\",\\\"systemUUID\\\":\\\"463fdb35-dd0c-4804-a85e-31cd33c59ce4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:07Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.504389 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.504419 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.504428 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.504440 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.504449 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:07Z","lastTransitionTime":"2026-01-22T09:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:07 crc kubenswrapper[4811]: E0122 09:07:07.511754 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:07:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:07:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:07:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:07:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0066ca33-f035-4af3-9028-0da78d54d55e\\\",\\\"systemUUID\\\":\\\"463fdb35-dd0c-4804-a85e-31cd33c59ce4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:07Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.513767 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.513802 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.513810 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.513819 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.513829 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:07Z","lastTransitionTime":"2026-01-22T09:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:07 crc kubenswrapper[4811]: E0122 09:07:07.521015 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:07:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:07:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:07:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:07:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:07:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0066ca33-f035-4af3-9028-0da78d54d55e\\\",\\\"systemUUID\\\":\\\"463fdb35-dd0c-4804-a85e-31cd33c59ce4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:07Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:07 crc kubenswrapper[4811]: E0122 09:07:07.521106 4811 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.538318 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.538419 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.538591 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.538750 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.538888 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:07Z","lastTransitionTime":"2026-01-22T09:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.641713 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.641820 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.641893 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.641953 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.642008 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:07Z","lastTransitionTime":"2026-01-22T09:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.743472 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.743606 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.743698 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.743758 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.743823 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:07Z","lastTransitionTime":"2026-01-22T09:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.846041 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.846064 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.846073 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.846083 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.846091 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:07Z","lastTransitionTime":"2026-01-22T09:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.947889 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.947909 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.947918 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.947926 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.947935 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:07Z","lastTransitionTime":"2026-01-22T09:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.991433 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.991496 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:07:07 crc kubenswrapper[4811]: E0122 09:07:07.991529 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:07:07 crc kubenswrapper[4811]: I0122 09:07:07.991436 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:07:07 crc kubenswrapper[4811]: E0122 09:07:07.991605 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:07:07 crc kubenswrapper[4811]: E0122 09:07:07.991707 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.005530 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 12:32:32.364072421 +0000 UTC Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.050220 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.050256 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.050268 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.050284 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.050294 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:08Z","lastTransitionTime":"2026-01-22T09:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.152291 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.152327 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.152337 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.152348 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.152357 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:08Z","lastTransitionTime":"2026-01-22T09:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.254090 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.254115 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.254122 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.254133 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.254141 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:08Z","lastTransitionTime":"2026-01-22T09:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.355483 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.355515 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.355523 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.355537 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.355548 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:08Z","lastTransitionTime":"2026-01-22T09:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.457009 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.457039 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.457051 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.457069 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.457081 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:08Z","lastTransitionTime":"2026-01-22T09:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.558760 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.558803 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.558814 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.558825 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.558833 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:08Z","lastTransitionTime":"2026-01-22T09:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.660916 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.661044 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.661112 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.661187 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.661262 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:08Z","lastTransitionTime":"2026-01-22T09:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.763138 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.763190 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.763205 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.763225 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.763236 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:08Z","lastTransitionTime":"2026-01-22T09:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.864856 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.864888 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.864899 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.864912 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.864921 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:08Z","lastTransitionTime":"2026-01-22T09:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.966856 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.966884 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.966893 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.966903 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.966911 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:08Z","lastTransitionTime":"2026-01-22T09:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 22 09:07:08 crc kubenswrapper[4811]: I0122 09:07:08.991912 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l"
Jan 22 09:07:08 crc kubenswrapper[4811]: E0122 09:07:08.992013 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4"
Jan 22 09:07:09 crc kubenswrapper[4811]: I0122 09:07:09.006162 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 05:16:54.180808884 +0000 UTC
Jan 22 09:07:09 crc kubenswrapper[4811]: I0122 09:07:09.068104 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:07:09 crc kubenswrapper[4811]: I0122 09:07:09.068125 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:07:09 crc kubenswrapper[4811]: I0122 09:07:09.068134 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:07:09 crc kubenswrapper[4811]: I0122 09:07:09.068145 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:07:09 crc kubenswrapper[4811]: I0122 09:07:09.068152 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:09Z","lastTransitionTime":"2026-01-22T09:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... the five-entry node-status block above (four kubelet_node_status.go "Recording event message" entries plus the setters.go "Node became not ready" condition) repeats unchanged except for timestamps at ~100 ms intervals, 09:07:09.169 through 09:07:09.985 (9 more occurrences) ...]
Jan 22 09:07:09 crc kubenswrapper[4811]: I0122 09:07:09.991460 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 09:07:09 crc kubenswrapper[4811]: I0122 09:07:09.991490 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 09:07:09 crc kubenswrapper[4811]: E0122 09:07:09.991562 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 09:07:09 crc kubenswrapper[4811]: I0122 09:07:09.991667 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 09:07:09 crc kubenswrapper[4811]: E0122 09:07:09.991814 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 09:07:09 crc kubenswrapper[4811]: E0122 09:07:09.992016 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 09:07:10 crc kubenswrapper[4811]: I0122 09:07:10.006714 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Jan 22 09:07:10 crc kubenswrapper[4811]: I0122 09:07:10.006773 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 15:18:15.085703083 +0000 UTC
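Every failure above reduces to a single condition: the container runtime reports NetworkReady=false until at least one CNI network config appears in /etc/kubernetes/cni/net.d/. A minimal sketch of that readiness check, assuming the conventional CNI config file extensions that runtime config watchers scan for (illustrative only, not kubelet source):

    #!/usr/bin/env python3
    # Sketch: report NetworkReady the way the runtime's CNI watcher does --
    # false until a network config file shows up in the CNI conf dir.
    from pathlib import Path

    CNI_CONF_DIR = Path("/etc/kubernetes/cni/net.d")  # directory named in the log
    CNI_EXTENSIONS = {".conf", ".conflist", ".json"}  # assumption: conventional CNI extensions

    def network_ready(conf_dir: Path = CNI_CONF_DIR) -> bool:
        """True once at least one CNI network config exists."""
        if not conf_dir.is_dir():
            return False
        return any(p.is_file() and p.suffix in CNI_EXTENSIONS
                   for p in conf_dir.iterdir())

    if __name__ == "__main__":
        if network_ready():
            print("NetworkReady=true")
        else:
            print("NetworkReady=false reason:NetworkPluginNotReady "
                  f"message:no CNI configuration file in {CNI_CONF_DIR}/")

Once the network operator writes its config into that directory, the check flips and the NodeNotReady churn below stops on its own.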
[... node-status block repeated at ~100 ms intervals, 09:07:10.087 through 09:07:10.902 (9 occurrences) ...]
Jan 22 09:07:10 crc kubenswrapper[4811]: I0122 09:07:10.991265 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l"
Jan 22 09:07:10 crc kubenswrapper[4811]: E0122 09:07:10.991378 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4"
Jan 22 09:07:11 crc kubenswrapper[4811]: I0122 09:07:11.007662 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 09:44:23.265366898 +0000 UTC
[... node-status block repeated at ~100 ms intervals, 09:07:11.004 through 09:07:11.917 (10 occurrences) ...]
Jan 22 09:07:11 crc kubenswrapper[4811]: I0122 09:07:11.991755 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 09:07:11 crc kubenswrapper[4811]: I0122 09:07:11.991797 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 09:07:11 crc kubenswrapper[4811]: E0122 09:07:11.991911 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 09:07:11 crc kubenswrapper[4811]: I0122 09:07:11.991946 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 09:07:11 crc kubenswrapper[4811]: E0122 09:07:11.992023 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 09:07:11 crc kubenswrapper[4811]: E0122 09:07:11.992130 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 09:07:12 crc kubenswrapper[4811]: I0122 09:07:12.007953 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 05:06:41.318361255 +0000 UTC
[... node-status block repeated at ~100 ms intervals, 09:07:12.018 through 09:07:12.932 (10 occurrences) ...]
Jan 22 09:07:12 crc kubenswrapper[4811]: I0122 09:07:12.991383 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l"
Jan 22 09:07:12 crc kubenswrapper[4811]: E0122 09:07:12.991539 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4"
Jan 22 09:07:13 crc kubenswrapper[4811]: I0122 09:07:13.008696 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 19:47:33.907390104 +0000 UTC
[... node-status block repeated at ~100 ms intervals, 09:07:13.034 through 09:07:13.952 (10 occurrences) ...]
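Each setters.go:603 entry embeds the node's Ready condition as a JSON object, so the reason and message can be pulled straight out of a saved journal with the standard library alone. A small sketch (the regex and the shortened sample line are illustrative):

    # Extract the Ready condition that setters.go logs with
    # "Node became not ready". Stdlib only.
    import json
    import re

    LINE = ('setters.go:603] "Node became not ready" node="crc" '
            'condition={"type":"Ready","status":"False",'
            '"lastHeartbeatTime":"2026-01-22T09:07:13Z",'
            '"lastTransitionTime":"2026-01-22T09:07:13Z",'
            '"reason":"KubeletNotReady","message":"container runtime network not ready"}')

    match = re.search(r'condition=(\{.*\})', LINE)
    if match:
        cond = json.loads(match.group(1))
        print(cond["type"], cond["status"], cond["reason"])
        # -> Ready False KubeletNotReady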
Jan 22 09:07:13 crc kubenswrapper[4811]: I0122 09:07:13.992151 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 09:07:13 crc kubenswrapper[4811]: I0122 09:07:13.992207 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 09:07:13 crc kubenswrapper[4811]: E0122 09:07:13.992225 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 09:07:13 crc kubenswrapper[4811]: I0122 09:07:13.992251 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 09:07:13 crc kubenswrapper[4811]: E0122 09:07:13.992327 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 09:07:13 crc kubenswrapper[4811]: E0122 09:07:13.992358 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 09:07:13 crc kubenswrapper[4811]: I0122 09:07:13.992576 4811 scope.go:117] "RemoveContainer" containerID="454af35caa860192c078a3fb4028bf2c4544607849290f3bd427688821757b2f"
Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.009763 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 11:47:44.942601339 +0000 UTC
Has your network provider started?"} Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.156872 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.156912 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.156922 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.156938 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.156952 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:14Z","lastTransitionTime":"2026-01-22T09:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.259121 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.259160 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.259172 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.259187 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.259196 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:14Z","lastTransitionTime":"2026-01-22T09:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.328157 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-274vf_1cd0f0db-de53-47c0-9b45-2ce8b37392a3/ovnkube-controller/2.log" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.330739 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" event={"ID":"1cd0f0db-de53-47c0-9b45-2ce8b37392a3","Type":"ContainerStarted","Data":"c4ae4bac915c177448079f710c8f3f5cb8ee9368318699d1500a8e2ec2075fb5"} Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.331173 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.345568 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1d28a-4ee8-45bb-9a86-3687d8db2105\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684f8ca14b2eabcbf9280085d495af9a6f1fa76fb187f15b8fa5659178efdc93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64382d1748f1eaeb9f2f09d243b35c56f9fd70b9a9765ddb64cb0c4866ed493b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cer
ts\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://590bb419e3791ec3a267b9a5aa4022ee74ba5e302b047e85211563e135c4cf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b14bb77b0cf24bacdb8f5edae5d02ccea019b12feb315e6d6c4887feb1a9354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d10794bea5b8ea94b9247775e0c424fa9d481267c663d4bcae16eb4e7eb729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:14Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.354524 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:14Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.360595 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.360645 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.360656 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.360668 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.360677 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:14Z","lastTransitionTime":"2026-01-22T09:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.366036 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfqgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2555861-d1bb-4f21-be4a-165ed9212932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2e4b2026355f189cbe6a2d70999613aa1b5868f0b38c25e834e42eda1b41088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ececfc4b2e4f2a120e7a3307235d606db00733e1a5631b6e19b5312afc8e8ff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:07:04Z\\\",\\\"message\\\":\\\"2026-01-22T09:06:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2fe27a58-2281-4698-9763-8c4890d2a18e\\\\n2026-01-22T09:06:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2fe27a58-2281-4698-9763-8c4890d2a18e to /host/opt/cni/bin/\\\\n2026-01-22T09:06:19Z [verbose] multus-daemon started\\\\n2026-01-22T09:06:19Z [verbose] Readiness Indicator file check\\\\n2026-01-22T09:07:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jt7cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfqgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:14Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.379065 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xhhs7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42dc1a6-d0c4-43e4-b9d9-b40c1f910400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64d7f1d53f9a3d9616b76ea12902134c5e0a3a30c9a61113675074caae70d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcfzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xhhs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:14Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.389441 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9g4j8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://279ca4748f2c8229a6647c23a414668a7612c0c863b52ef33c5b3dd026de74ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9g4j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:14Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.405162 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84068a6b-e189-419b-87f5-f31428f6eafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde8418c9f22553519ec0a9ba1ddcdd553f64aafb44fa002e924499701236bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbb067d04c1683ffd06d01aab41bb5988fdca44b8bd16012d35c42795d0d8011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-txvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:14Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.419900 4811 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-bhj4l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4b38a0-0c7a-4693-9f92-40fefd6bc9b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mphcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mphcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bhj4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:14Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.438610 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b44c40-e6d4-4902-98e9-a259269d8bf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 09:06:13.075349 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 09:06:13.075574 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:06:13.077171 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744166093/tls.crt::/tmp/serving-cert-1744166093/tls.key\\\\\\\"\\\\nI0122 09:06:13.324400 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:06:13.330700 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:06:13.330724 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:06:13.330758 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:06:13.330771 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:06:13.337838 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:06:13.337866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:06:13.337879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:06:13.337883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:06:13.337886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:06:13.338151 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:06:13.339610 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:14Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.447720 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:14Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.458323 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:14Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.462598 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.462643 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.462657 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.462677 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.462689 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:14Z","lastTransitionTime":"2026-01-22T09:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.468858 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae2e22607dec646b15563dc62f66655e7a5c2b7ad809406c907e5f687f80105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e290e203fab8f291146855a960477ceb9c33d965d15b4722c668dc2b3fded3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:14Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.482388 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4ae4bac915c177448079f710c8f3f5cb8ee9368318699d1500a8e2ec2075fb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://454af35caa860192c078a3fb4028bf2c4544607849290f3bd427688821757b2f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:06:42Z\\\",\\\"message\\\":\\\"ing service etcd on namespace openshift-etcd for network=default : 1.153957ms\\\\nI0122 09:06:42.648545 6404 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager/controller-manager]} name:Service_openshift-controller-manager/controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0122 09:06:42.648515 6404 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-ingress-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"d8772e82-b0a4-4596-87d3-3d517c13344b\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, 
Routers:[]stri\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\
\":[{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-274vf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:14Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.491208 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n2kj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66a7f3f9-ab88-4b1a-a28a-900480fa9651\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3a88defded9a5435941e2608c8a29baee840bc0ce48454a3731f344299347cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47mj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n2kj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:14Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.498802 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"119726d3-5d86-422f-bfc6-69c5914dbdb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95195b2b1e3673feadcfaadaaba692abe8e0e9a6b2c8fb776c38187616e59c7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2e1da6db1e9abd0c7f5dafb9d1cabf171e05168586b5bfcee84df0a7408e847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2e1da6db1e9abd0c7f5dafb9d1cabf171e05168586b5bfcee84df0a7408e847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:14Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.508571 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5dbe63-08fa-4157-9c52-003944e50d7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3bf6cdac4751814ecd71637d9e80cbdda1c0e99714275fe77cfa858b3823b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4419625f66ae957b2d9b146fc8b085a4a95c3f47860d6b4847f8fe71a6c4c86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7830976d23e073a099c3c6af1fa2392d9504ad9d41ddf2ee63c16632c22f833a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45fa5d37dcd1dcae1a7488bb855e4bd7c7dc39a1199fe80b82f83604be089f2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:14Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.517378 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a14bfcc7d6d2ab5e1ad0c45999b2e73332e750ed65d9299e188505dafb26d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:14Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.526392 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95b8b8c1-880c-4910-8387-3ff87b534859\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1a6925ee11245a252465e1a76bf8246c142164097c9c35f3467ab3d1650bc32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78e58b4688b2704d8a02a27ef452d900419711bd51a2d64b9c05de7d3a02ffbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d5bfb65ec0c94b865b39227fa43cb243e05b615b8b6c8b2ce289357eb5488b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":
\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b13581fc07c90a3dba27e04a1079d8a678635bf14c0d5463cd7b337f0f40a4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b13581fc07c90a3dba27e04a1079d8a678635bf14c0d5463cd7b337f0f40a4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:14Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.537572 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f776b1bc70646779cf35092ded02c00aa3b990c873de09d21e1f0d76258b0b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:14Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.546934 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gwnx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"153efd1a-2c09-4c49-94e1-2307bbf1e659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71a7748f45c9a14885785ca825c32a5036a887c7a54b36cadd1a89e57608111f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nklsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce971b4fbeb72e1a32c0adab8e46cb8bce2ff9736cd875f63e105bf39dcc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nklsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"
192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gwnx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:14Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.564274 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.564306 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.564316 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.564330 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.564340 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:14Z","lastTransitionTime":"2026-01-22T09:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.666333 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.666366 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.666377 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.666390 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.666399 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:14Z","lastTransitionTime":"2026-01-22T09:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.768708 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.768773 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.768788 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.768836 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.768849 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:14Z","lastTransitionTime":"2026-01-22T09:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.871612 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.871664 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.871676 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.871695 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.871709 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:14Z","lastTransitionTime":"2026-01-22T09:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.974283 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.974317 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.974327 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.974342 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.974355 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:14Z","lastTransitionTime":"2026-01-22T09:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:14 crc kubenswrapper[4811]: I0122 09:07:14.991337 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:07:14 crc kubenswrapper[4811]: E0122 09:07:14.991477 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.010542 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 06:16:33.539925364 +0000 UTC Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.076916 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.076949 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.076960 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.076975 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.076985 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:15Z","lastTransitionTime":"2026-01-22T09:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.179443 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.179739 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.179750 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.179765 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.179777 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:15Z","lastTransitionTime":"2026-01-22T09:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.281416 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.281450 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.281458 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.281472 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.281479 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:15Z","lastTransitionTime":"2026-01-22T09:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.335408 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-274vf_1cd0f0db-de53-47c0-9b45-2ce8b37392a3/ovnkube-controller/3.log" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.335983 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-274vf_1cd0f0db-de53-47c0-9b45-2ce8b37392a3/ovnkube-controller/2.log" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.337808 4811 generic.go:334] "Generic (PLEG): container finished" podID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerID="c4ae4bac915c177448079f710c8f3f5cb8ee9368318699d1500a8e2ec2075fb5" exitCode=1 Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.337855 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" event={"ID":"1cd0f0db-de53-47c0-9b45-2ce8b37392a3","Type":"ContainerDied","Data":"c4ae4bac915c177448079f710c8f3f5cb8ee9368318699d1500a8e2ec2075fb5"} Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.337891 4811 scope.go:117] "RemoveContainer" containerID="454af35caa860192c078a3fb4028bf2c4544607849290f3bd427688821757b2f" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.338522 4811 scope.go:117] "RemoveContainer" containerID="c4ae4bac915c177448079f710c8f3f5cb8ee9368318699d1500a8e2ec2075fb5" Jan 22 09:07:15 crc kubenswrapper[4811]: E0122 09:07:15.338693 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-274vf_openshift-ovn-kubernetes(1cd0f0db-de53-47c0-9b45-2ce8b37392a3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.350944 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"119726d3-5d86-422f-bfc6-69c5914dbdb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95195b2b1e3673feadcfaadaaba692abe8e0e9a6b2c8fb776c38187616e59c7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2e1da6db1e9abd0c7f5dafb9d1cabf171e05168586b5bfcee84df0a7408e847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2e1da6db1e9abd0c7f5dafb9d1cabf171e05168586b5bfcee84df0a7408e847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:15Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.361386 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5dbe63-08fa-4157-9c52-003944e50d7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3bf6cdac4751814ecd71637d9e80cbdda1c0e99714275fe77cfa858b3823b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4419625f66ae957b2d9b146fc8b085a4a95c3f47860d6b4847f8fe71a6c4c86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7830976d23e073a099c3c6af1fa2392d9504ad9d41ddf2ee63c16632c22f833a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45fa5d37dcd1dcae1a7488bb855e4bd7c7dc39a1199fe80b82f83604be089f2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:15Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.371163 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a14bfcc7d6d2ab5e1ad0c45999b2e73332e750ed65d9299e188505dafb26d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:15Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.380307 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae2e22607dec646b15563dc62f66655e7a5c2b7ad809406c907e5f687f80105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e290e203fab8f291146855a960477ceb9c33d965d15b4722c668dc2b3fded3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:15Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.383040 4811 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.383063 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.383072 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.383086 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.383096 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:15Z","lastTransitionTime":"2026-01-22T09:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.393890 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4ae4bac915c177448079f710c8f3f5cb8ee9368
318699d1500a8e2ec2075fb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://454af35caa860192c078a3fb4028bf2c4544607849290f3bd427688821757b2f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:06:42Z\\\",\\\"message\\\":\\\"ing service etcd on namespace openshift-etcd for network=default : 1.153957ms\\\\nI0122 09:06:42.648545 6404 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager/controller-manager]} name:Service_openshift-controller-manager/controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0122 09:06:42.648515 6404 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-ingress-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"d8772e82-b0a4-4596-87d3-3d517c13344b\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]stri\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ae4bac915c177448079f710c8f3f5cb8ee9368318699d1500a8e2ec2075fb5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:07:14Z\\\",\\\"message\\\":\\\"bj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-9g4j8\\\\nI0122 09:07:14.652033 6830 services_controller.go:451] Built service openshift-machine-api/machine-api-controllers cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-controllers_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-controllers\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.167\\\\\\\", Port:8441, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.167\\\\\\\", Port:8442, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, 
services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.167\\\\\\\", Port:8444, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0122 09:07:14.652209 6830 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0122 09:07:14.652215 6830 ovn.go:134] Ensuring zo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/
run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-274vf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:15Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.400669 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n2kj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66a7f3f9-ab88-4b1a-a28a-900480fa9651\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3a88defded9a5435941e2608c8a29baee840bc0ce48454a3731f344299347cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47mj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n2kj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:15Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.407935 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95b8b8c1-880c-4910-8387-3ff87b534859\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1a6925ee11245a252465e1a76bf8246c142164097c9c35f3467ab3d1650bc32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78e58b4688b2704d8a02a27ef452d900419711bd51a2d64b9c05de7d3a02ffbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d5bfb65ec0c94b865b39227fa43cb243e05b615b8b6c8b2ce289357eb5488b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b13581fc07c90a3dba27e04a1079d8a678635bf14c0d5463cd7b337f0f40a4e\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b13581fc07c90a3dba27e04a1079d8a678635bf14c0d5463cd7b337f0f40a4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:15Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.414839 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f776b1bc70646779cf35092ded02c00aa3b990c873de09d21e1f0d76258b0b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:15Z is after 2025-08-24T17:21:41Z" Jan 22 
09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.422620 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gwnx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"153efd1a-2c09-4c49-94e1-2307bbf1e659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71a7748f45c9a14885785ca825c32a5036a887c7a54b36cadd1a89e57608111f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nklsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce971b4fbeb72e1a32c0adab8e46cb8bce2ff9736cd875f63e105bf39dcc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nklsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gwnx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:15Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.435435 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1d28a-4ee8-45bb-9a86-3687d8db2105\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684f8ca14b2eabcbf9280085d495af9a6f1fa76fb187f15b8fa5659178efdc93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64382d1748f1eaeb9f2f09d243b35c56f9fd70b9a9765ddb64cb0c4866ed493b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://590bb419e3791ec3a267b9a5aa4022ee74ba5e302b047e85211563e135c4cf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b14bb77b0cf24bacdb8f5edae5d02ccea019b12feb315e6d6c4887feb1a9354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d10794bea5b8ea94b9247775e0c424fa9d481267c663d4bcae16eb4e7eb729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:15Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.450837 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:15Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.459842 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfqgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2555861-d1bb-4f21-be4a-165ed9212932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2e4b2026355f189cbe6a2d70999613aa1b5868f0b38c25e834e42eda1b41088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ececfc4b2e4f2a120e7a3307235d606db00733e1a5631b6e19b5312afc8e8ff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:07:04Z\\\",\\\"message\\\":\\\"2026-01-22T09:06:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2fe27a58-2281-4698-9763-8c4890d2a18e\\\\n2026-01-22T09:06:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2fe27a58-2281-4698-9763-8c4890d2a18e to /host/opt/cni/bin/\\\\n2026-01-22T09:06:19Z [verbose] multus-daemon started\\\\n2026-01-22T09:06:19Z [verbose] Readiness Indicator file check\\\\n2026-01-22T09:07:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jt7cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfqgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:15Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.469097 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b44c40-e6d4-4902-98e9-a259269d8bf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 09:06:13.075349 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 09:06:13.075574 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:06:13.077171 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744166093/tls.crt::/tmp/serving-cert-1744166093/tls.key\\\\\\\"\\\\nI0122 09:06:13.324400 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:06:13.330700 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:06:13.330724 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:06:13.330758 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:06:13.330771 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:06:13.337838 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:06:13.337866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:06:13.337879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:06:13.337883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:06:13.337886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:06:13.338151 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:06:13.339610 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:15Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.477805 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:15Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.484933 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.484961 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.484970 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.484983 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.484993 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:15Z","lastTransitionTime":"2026-01-22T09:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.486756 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:15Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.493734 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xhhs7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42dc1a6-d0c4-43e4-b9d9-b40c1f910400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64d7f1d53f9a3d9616b76ea12902134c5e0a3a30c9a61113675074caae70d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcfzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xhhs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:15Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.502863 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9g4j8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://279ca4748f2c8229a6647c23a414668a7612c0c863b52ef33c5b3dd026de74ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9g4j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:15Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.511355 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84068a6b-e189-419b-87f5-f31428f6eafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde8418c9f22553519ec0a9ba1ddcdd553f64aafb44fa002e924499701236bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbb067d04c1683ffd06d01aab41bb5988fdca44b8bd16012d35c42795d0d8011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-txvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:15Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.518060 4811 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-bhj4l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4b38a0-0c7a-4693-9f92-40fefd6bc9b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mphcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mphcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bhj4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:15Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.587410 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.587450 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 
09:07:15.587460 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.587478 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.587492 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:15Z","lastTransitionTime":"2026-01-22T09:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.689045 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.689072 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.689082 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.689093 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.689101 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:15Z","lastTransitionTime":"2026-01-22T09:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.790421 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.790448 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.790458 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.790472 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.790483 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:15Z","lastTransitionTime":"2026-01-22T09:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.892601 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.892650 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.892659 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.892671 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.892680 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:15Z","lastTransitionTime":"2026-01-22T09:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.991797 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.991872 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.991957 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:07:15 crc kubenswrapper[4811]: E0122 09:07:15.992029 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:07:15 crc kubenswrapper[4811]: E0122 09:07:15.992145 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:07:15 crc kubenswrapper[4811]: E0122 09:07:15.992369 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.994024 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.994055 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.994067 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.994082 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:15 crc kubenswrapper[4811]: I0122 09:07:15.994093 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:15Z","lastTransitionTime":"2026-01-22T09:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.006580 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1d28a-4ee8-45bb-9a86-3687d8db2105\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684f8ca14b2eabcbf9280085d495af9a6f1fa76fb187f15b8fa5659178efdc93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64382d1748f1eaeb9f2f09d243b35c56f9fd70b9a9765ddb64cb0c4866ed493b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://590bb419e3791ec3a267b9a5aa4022ee74ba5e302b047e85211563e135c4cf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b14bb77b0cf24bacdb8f5edae5d02ccea019b12feb315e6d6c4887feb1a9354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d10794bea5b8ea94b9247775e0c424fa9d481267c663d4bcae16eb4e7eb729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.010810 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 21:35:34.625740329 +0000 UTC Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.015503 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.024673 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfqgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2555861-d1bb-4f21-be4a-165ed9212932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2e4b2026355f189cbe6a2d70999613aa1b5868f0b38c25e834e42eda1b41088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ececfc4b2e4f2a120e7a3307235d606db00733e1a5631b6e19b5312afc8e8ff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:07:04Z\\\",\\\"message\\\":\\\"2026-01-22T09:06:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2fe27a58-2281-4698-9763-8c4890d2a18e\\\\n2026-01-22T09:06:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2fe27a58-2281-4698-9763-8c4890d2a18e to /host/opt/cni/bin/\\\\n2026-01-22T09:06:19Z [verbose] multus-daemon started\\\\n2026-01-22T09:06:19Z [verbose] Readiness Indicator file check\\\\n2026-01-22T09:07:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jt7cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfqgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.032532 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84068a6b-e189-419b-87f5-f31428f6eafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde8418c9f22553519ec0a9ba1ddcdd553f64aafb44fa002e924499701236bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbb067d04c1683ffd06d01aab41bb5988fdca44b8bd16012d35c42795d0d8011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-txvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.039616 4811 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-bhj4l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4b38a0-0c7a-4693-9f92-40fefd6bc9b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mphcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mphcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bhj4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.048664 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b44c40-e6d4-4902-98e9-a259269d8bf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 09:06:13.075349 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 09:06:13.075574 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:06:13.077171 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744166093/tls.crt::/tmp/serving-cert-1744166093/tls.key\\\\\\\"\\\\nI0122 09:06:13.324400 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:06:13.330700 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:06:13.330724 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:06:13.330758 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:06:13.330771 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:06:13.337838 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:06:13.337866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:06:13.337879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:06:13.337883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:06:13.337886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:06:13.338151 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:06:13.339610 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.058700 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.069553 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.076352 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xhhs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42dc1a6-d0c4-43e4-b9d9-b40c1f910400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64d7f1d53f9a3d9616b76ea12902134c5e0a3a30c9a61113675074caae70d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcfzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xhhs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T09:07:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.086439 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9g4j8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://279ca4748f2c8229a6647c23a414668a7612c0c863b52ef33c5b3dd026de74ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbf
dab3fdc5c2386c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-
copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9g4j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.093620 4811 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-image-registry/node-ca-n2kj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66a7f3f9-ab88-4b1a-a28a-900480fa9651\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3a88defded9a5435941e2608c8a29baee840bc0ce48454a3731f344299347cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47mj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n2kj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.095725 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.095762 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.095782 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.095798 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.095809 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:16Z","lastTransitionTime":"2026-01-22T09:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.101362 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"119726d3-5d86-422f-bfc6-69c5914dbdb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95195b2b1e3673feadcfaadaaba692abe8e0e9a6b2c8fb776c38187616e59c7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2e1da6db1e9abd0c7f5dafb9d1cabf171e05168586b5bfcee84df0a7408e847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2e1da6db1e9abd0c7f5dafb9d1cabf171e05168586b5bfcee84df0a7408e847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.110107 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5dbe63-08fa-4157-9c52-003944e50d7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3bf6cdac4751814ecd71637d9e80cbdda1c0e99714275fe77cfa858b3823b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4419625f66ae957b2d9b146fc8b085a4a95c3f47860d6b4847f8fe71a6c4c86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7830976d23e073a099c3c6af1fa2392d9504ad9d41ddf2ee63c16632c22f833a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45fa5d37dcd1dcae1a7488bb855e4bd7c7dc39a1199fe80b82f83604be089f2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.118844 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a14bfcc7d6d2ab5e1ad0c45999b2e73332e750ed65d9299e188505dafb26d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.127886 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae2e22607dec646b15563dc62f66655e7a5c2b7ad809406c907e5f687f80105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e290e203fab8f291146855a960477ceb9c33d965d15b4722c668dc2b3fded3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.140516 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4ae4bac915c177448079f710c8f3f5cb8ee9368318699d1500a8e2ec2075fb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://454af35caa860192c078a3fb4028bf2c4544607849290f3bd427688821757b2f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:06:42Z\\\",\\\"message\\\":\\\"ing service etcd on namespace openshift-etcd for network=default : 1.153957ms\\\\nI0122 09:06:42.648545 6404 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager/controller-manager]} name:Service_openshift-controller-manager/controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0122 09:06:42.648515 6404 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-ingress-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"d8772e82-b0a4-4596-87d3-3d517c13344b\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]stri\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ae4bac915c177448079f710c8f3f5cb8ee9368318699d1500a8e2ec2075fb5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:07:14Z\\\",\\\"message\\\":\\\"bj_retry.go:365] 
Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-9g4j8\\\\nI0122 09:07:14.652033 6830 services_controller.go:451] Built service openshift-machine-api/machine-api-controllers cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-controllers_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-controllers\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.167\\\\\\\", Port:8441, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.167\\\\\\\", Port:8442, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.167\\\\\\\", Port:8444, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0122 09:07:14.652209 6830 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0122 09:07:14.652215 6830 ovn.go:134] Ensuring zo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-274vf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.148506 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95b8b8c1-880c-4910-8387-3ff87b534859\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1a6925ee11245a252465e1a76bf8246c142164097c9c35f3467ab3d1650bc32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78e58b4688b2704d8a02a27ef452d900419711bd51a2d64b9c05de7d3a02ffbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d5bfb65ec0c94b865b39227fa43cb243e05b615b8b6c8b2ce289357eb5488b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b13581fc07c90a3dba27e04a1079d8a678635bf14c0d5463cd7b337f0f40a4e\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b13581fc07c90a3dba27e04a1079d8a678635bf14c0d5463cd7b337f0f40a4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.155887 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f776b1bc70646779cf35092ded02c00aa3b990c873de09d21e1f0d76258b0b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:16Z is after 2025-08-24T17:21:41Z" Jan 22 
09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.163689 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gwnx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"153efd1a-2c09-4c49-94e1-2307bbf1e659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71a7748f45c9a14885785ca825c32a5036a887c7a54b36cadd1a89e57608111f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nklsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce971b4fbeb72e1a32c0adab8e46cb8bce2ff9736cd875f63e105bf39dcc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nklsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gwnx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.198095 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.198119 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.198127 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.198140 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.198148 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:16Z","lastTransitionTime":"2026-01-22T09:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.299678 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.299732 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.299741 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.299763 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.299780 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:16Z","lastTransitionTime":"2026-01-22T09:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.341131 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-274vf_1cd0f0db-de53-47c0-9b45-2ce8b37392a3/ovnkube-controller/3.log" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.343611 4811 scope.go:117] "RemoveContainer" containerID="c4ae4bac915c177448079f710c8f3f5cb8ee9368318699d1500a8e2ec2075fb5" Jan 22 09:07:16 crc kubenswrapper[4811]: E0122 09:07:16.343776 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-274vf_openshift-ovn-kubernetes(1cd0f0db-de53-47c0-9b45-2ce8b37392a3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.351234 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"119726d3-5d86-422f-bfc6-69c5914dbdb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95195b2b1e3673feadcfaadaaba692abe8e0e9a6b2c8fb776c38187616e59c7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2e1da6db1e9abd0c7f5dafb9d1cabf171e05168586b5bfcee84df0a7408e847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2e1da6db1e9abd0c7f5dafb9d1cabf171e05168586b5bfcee84df0a7408e847\\\",\\\"exitCode\\\":0,\
\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.360087 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5dbe63-08fa-4157-9c52-003944e50d7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af3bf6cdac4751814ecd71637d9e80cbdda1c0e99714275fe77cfa858b3823b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4419625f66ae957b2d9b146fc8b085a4a95c3f47860d6b4847f8fe71a6c4c86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://783097
6d23e073a099c3c6af1fa2392d9504ad9d41ddf2ee63c16632c22f833a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45fa5d37dcd1dcae1a7488bb855e4bd7c7dc39a1199fe80b82f83604be089f2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.368765 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a14bfcc7d6d2ab5e1ad0c45999b2e73332e750ed65d9299e188505dafb26d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.376767 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae2e22607dec646b15563dc62f66655e7a5c2b7ad809406c907e5f687f80105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e290e203fab8f291146855a960477ceb9c33d965d15b4722c668dc2b3fded3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.388280 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4ae4bac915c177448079f710c8f3f5cb8ee9368318699d1500a8e2ec2075fb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ae4bac915c177448079f710c8f3f5cb8ee9368318699d1500a8e2ec2075fb5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:07:14Z\\\",\\\"message\\\":\\\"bj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-9g4j8\\\\nI0122 09:07:14.652033 6830 services_controller.go:451] Built service openshift-machine-api/machine-api-controllers cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-controllers_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-controllers\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.167\\\\\\\", Port:8441, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.167\\\\\\\", Port:8442, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.167\\\\\\\", Port:8444, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0122 09:07:14.652209 6830 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0122 09:07:14.652215 6830 ovn.go:134] Ensuring zo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:07:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-274vf_openshift-ovn-kubernetes(1cd0f0db-de53-47c0-9b45-2ce8b37392a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwxdh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-274vf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.395742 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n2kj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66a7f3f9-ab88-4b1a-a28a-900480fa9651\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3a88defded9a5435941e2608c8a29baee840bc0ce48454a3731f344299347cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47mj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n2kj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.401067 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.401183 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.401256 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.401331 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.401387 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:16Z","lastTransitionTime":"2026-01-22T09:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.403650 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95b8b8c1-880c-4910-8387-3ff87b534859\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1a6925ee11245a252465e1a76bf8246c142164097c9c35f3467ab3d1650bc32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78e58b4688b2704d8a02a27ef452d900419711bd51a2d64b9c05de7d3a02ffbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d5bfb65ec0c94b865b39227fa43cb243e05b615b8b6c8b2ce289357eb5488b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b13581fc07c90a3dba27e04a1079d8a678635bf14c0d5463cd7b337f0f40a4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b13581fc07c90a3dba27e04a1079d8a678635bf14c0d5463cd7b337f0f40a4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.411217 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f776b1bc70646779cf35092ded02c00aa3b990c873de09d21e1f0d76258b0b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.418264 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gwnx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"153efd1a-2c09-4c49-94e1-2307bbf1e659\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71a7748f45c9a14885785ca825c32a5036a887c7a54b36cadd1a89e57608111f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nklsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddce971b4fbeb72e1a32c0adab8e46cb8bce2ff9736cd875f63e105bf39dcc27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nklsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gwnx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.430886 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cf1d28a-4ee8-45bb-9a86-3687d8db2105\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684f8ca14b2eabcbf9280085d495af9a6f1fa76fb187f15b8fa5659178efdc93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64382d1748f1eaeb9f2f09d243b35c56f9fd70b9a9765ddb64cb0c4866ed493b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://590bb419e3791ec3a267b9a5aa4022ee74ba5e302b047e85211563e135c4cf31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa
3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b14bb77b0cf24bacdb8f5edae5d02ccea019b12feb315e6d6c4887feb1a9354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d10794bea5b8ea94b9247775e0c424fa9d481267c663d4bcae16eb4e7eb729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c7d347a923430e4eb2af48033ea3fc7715bedad38dd1b644c7b29a1ab617fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731473
1ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba74d0a51550a0637b01b701bbaa33c5efe84fb88a69b7294d1125f1f400b7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a2da74ed29f44340dd1a4778f15ba78ba4f673eb1b34b676e35c7d7c84af31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.439042 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.447385 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfqgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2555861-d1bb-4f21-be4a-165ed9212932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2e4b2026355f189cbe6a2d70999613aa1b5868f0b38c25e834e42eda1b41088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ececfc4b2e4f2a120e7a3307235d606db00733e1a5631b6e19b5312afc8e8ff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T09:07:04Z\\\",\\\"message\\\":\\\"2026-01-22T09:06:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2fe27a58-2281-4698-9763-8c4890d2a18e\\\\n2026-01-22T09:06:19+00:00 [cnibincopy] 
Successfully moved files in /host/opt/cni/bin/upgrade_2fe27a58-2281-4698-9763-8c4890d2a18e to /host/opt/cni/bin/\\\\n2026-01-22T09:06:19Z [verbose] multus-daemon started\\\\n2026-01-22T09:06:19Z [verbose] Readiness Indicator file check\\\\n2026-01-22T09:07:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:07:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jt7cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfqgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.455650 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b44c40-e6d4-4902-98e9-a259269d8bf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 09:06:13.075349 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 09:06:13.075574 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 09:06:13.077171 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744166093/tls.crt::/tmp/serving-cert-1744166093/tls.key\\\\\\\"\\\\nI0122 09:06:13.324400 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 09:06:13.330700 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 09:06:13.330724 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 09:06:13.330758 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 09:06:13.330771 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 09:06:13.337838 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 09:06:13.337866 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337872 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 09:06:13.337876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 09:06:13.337879 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 09:06:13.337883 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 09:06:13.337886 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 09:06:13.338151 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 09:06:13.339610 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:05:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:05:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:05:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:05:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.463010 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.472000 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.478997 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xhhs7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f42dc1a6-d0c4-43e4-b9d9-b40c1f910400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a64d7f1d53f9a3d9616b76ea12902134c5e0a3a30c9a61113675074caae70d58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcfzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xhhs7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T09:07:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.490251 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9g4j8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d23c9c9-89ca-4db5-99dc-1e5b9f80be38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://279ca4748f2c8229a6647c23a414668a7612c0c863b52ef33c5b3dd026de74ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01aaa7db78471fea3a39a7dfc2be7926746a5c38a6a92dc3d33ebab7b8bf3cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbf
dab3fdc5c2386c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c4b6317408e75e2cc7947a3ff91f9268141cd5000f951dcbfdab3fdc5c2386c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9e774edddd1e144cf32c598245081727747e9a4e3c04ee339ff63e330ceb5cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66cb2961790d3d876438b07dabf72c50ed19c9e946e21dcdf90f098a38e02c84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-
copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://016900d5e167b9638dd095fc155f63e33c55037edb90af28ebef58c91b86fe7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4face38e0653080b6d84c1e1a126c96d043eb68c6e19f76dbd8def8bc96e9d40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T09:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T09:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwl4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9g4j8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.498761 4811 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84068a6b-e189-419b-87f5-f31428f6eafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cde8418c9f22553519ec0a9ba1ddcdd553f64aafb44fa002e924499701236bca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbb067d04c1683ffd06d01aab41bb5988fdca44b8bd16012d35c42795d0d8011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T09:06:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thzhp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-txvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:16Z is after 2025-08-24T17:21:41Z" Jan 
22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.503551 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.503575 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.503587 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.503600 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.503608 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:16Z","lastTransitionTime":"2026-01-22T09:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.506278 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bhj4l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de4b38a0-0c7a-4693-9f92-40fefd6bc9b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T09:06:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mphcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mphcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T09:06:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bhj4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:16Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.605954 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.605991 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.606002 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.606017 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.606028 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:16Z","lastTransitionTime":"2026-01-22T09:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.707966 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.708002 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.708010 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.708023 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.708032 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:16Z","lastTransitionTime":"2026-01-22T09:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.809897 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.809932 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.809942 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.809958 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.809967 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:16Z","lastTransitionTime":"2026-01-22T09:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.911492 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.911701 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.911787 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.911867 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.911931 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:16Z","lastTransitionTime":"2026-01-22T09:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:07:16 crc kubenswrapper[4811]: I0122 09:07:16.991385 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l"
Jan 22 09:07:16 crc kubenswrapper[4811]: E0122 09:07:16.991567 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.011944 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 03:31:39.858937523 +0000 UTC
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.014044 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.014104 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.014113 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.014123 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.014130 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:17Z","lastTransitionTime":"2026-01-22T09:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.115919 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.115950 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.115960 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.115971 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.115978 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:17Z","lastTransitionTime":"2026-01-22T09:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.217646 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.217673 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.217681 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.217690 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.217698 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:17Z","lastTransitionTime":"2026-01-22T09:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.319834 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.319879 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.319889 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.319903 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.319913 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:17Z","lastTransitionTime":"2026-01-22T09:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.428027 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.428062 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.428080 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.428094 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.428103 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:17Z","lastTransitionTime":"2026-01-22T09:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.530669 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.530728 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.530740 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.530756 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.530768 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:17Z","lastTransitionTime":"2026-01-22T09:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.633364 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.633414 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.633426 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.633440 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.633451 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:17Z","lastTransitionTime":"2026-01-22T09:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.735376 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.735489 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.735562 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.735657 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.735724 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:17Z","lastTransitionTime":"2026-01-22T09:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.745326 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.745388 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.745401 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.745421 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.745433 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:17Z","lastTransitionTime":"2026-01-22T09:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.755718 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.755793 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 09:07:17 crc kubenswrapper[4811]: E0122 09:07:17.755895 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 22 09:07:17 crc kubenswrapper[4811]: E0122 09:07:17.755915 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 22 09:07:17 crc kubenswrapper[4811]: E0122 09:07:17.755926 4811 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 22 09:07:17 crc kubenswrapper[4811]: E0122 09:07:17.755966 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 09:08:21.755954748 +0000 UTC m=+146.078141871 (durationBeforeRetry 1m4s).
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:07:17 crc kubenswrapper[4811]: E0122 09:07:17.755895 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 09:07:17 crc kubenswrapper[4811]: E0122 09:07:17.755994 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 09:07:17 crc kubenswrapper[4811]: E0122 09:07:17.756004 4811 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:07:17 crc kubenswrapper[4811]: E0122 09:07:17.756036 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 09:08:21.756028116 +0000 UTC m=+146.078215240 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 09:07:17 crc kubenswrapper[4811]: E0122 09:07:17.756064 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:07:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:07:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:07:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:07:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0066ca33-f035-4af3-9028-0da78d54d55e\\\",\\\"systemUUID\\\":\\\"463fdb35-dd0c-4804-a85e-31cd33c59ce4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"cru
n\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:17Z is after 2025-08-24T17:21:41Z" Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.759926 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.759968 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.759978 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.759991 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.760001 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:17Z","lastTransitionTime":"2026-01-22T09:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:17 crc kubenswrapper[4811]: E0122 09:07:17.767799 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:07:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:07:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:07:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:07:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0066ca33-f035-4af3-9028-0da78d54d55e\\\",\\\"systemUUID\\\":\\\"463fdb35-dd0c-4804-a85e-31cd33c59ce4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:17Z is after 
2025-08-24T17:21:41Z" Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.770333 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.770359 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.770368 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.770379 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.770385 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:17Z","lastTransitionTime":"2026-01-22T09:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:17 crc kubenswrapper[4811]: E0122 09:07:17.778111 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:07:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:07:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:07:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T09:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T09:07:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0066ca33-f035-4af3-9028-0da78d54d55e\\\",\\\"systemUUID\\\":\\\"463fdb35-dd0c-4804-a85e-31cd33c59ce4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:17Z is after 
2025-08-24T17:21:41Z"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.780569 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.780595 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.780603 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.780615 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.780641 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:17Z","lastTransitionTime":"2026-01-22T09:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:07:17 crc kubenswrapper[4811]: E0122 09:07:17.787836 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{…}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:17Z is after 2025-08-24T17:21:41Z" [status patch payload identical to the previous attempt; elided]
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.790176 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.790258 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.790330 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.790397 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.790451 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:17Z","lastTransitionTime":"2026-01-22T09:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:07:17 crc kubenswrapper[4811]: E0122 09:07:17.797795 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{…}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T09:07:17Z is after 2025-08-24T17:21:41Z" [status patch payload identical to the previous attempt; elided]
Jan 22 09:07:17 crc kubenswrapper[4811]: E0122 09:07:17.798033 4811 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
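The retries above (and the give-up at 09:07:17.798033) all fail for the same reason, and it is not the CNI message inside the payload: each node-status PATCH is intercepted by the "node.network-node-identity.openshift.io" admission webhook at https://127.0.0.1:9743, whose serving certificate expired 2025-08-24T17:21:41Z while the node clock reads 2026-01-22. A quick check from the host, as a sketch (address and port taken from the log line; the third-party cryptography package is assumed to be installed):

    import ssl
    from datetime import datetime, timezone
    from cryptography import x509  # third-party package, assumed installed

    # Fetch the webhook's serving certificate without verifying it (it is expired).
    pem = ssl.get_server_certificate(("127.0.0.1", 9743))
    cert = x509.load_pem_x509_certificate(pem.encode())
    print("notAfter:", cert.not_valid_after_utc)  # cryptography >= 42; older versions expose not_valid_after
    print("expired:", cert.not_valid_after_utc < datetime.now(timezone.utc))

Until that certificate is renewed (or the host clock corrected, if it is the clock that is wrong), no amount of retrying these status patches can succeed.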
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.837810 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.837845 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.837854 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.837865 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.837873 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:17Z","lastTransitionTime":"2026-01-22T09:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.939559 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.939815 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.939909 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.939978 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.940055 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:17Z","lastTransitionTime":"2026-01-22T09:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.957228 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 09:07:17 crc kubenswrapper[4811]: E0122 09:07:17.957330 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:08:21.957310512 +0000 UTC m=+146.279497636 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
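The TearDown failure above is a registration problem rather than a mount problem: the kubevirt.io.hostpath-provisioner CSI driver has not (yet) registered with the kubelet in this boot. Drivers normally announce themselves through a socket dropped into the kubelet's plugin-registration directory, so a first check could look like this (a sketch; /var/lib/kubelet/plugins_registry is the kubelet's default path, and the socket name shown is the usual registrar convention, not taken from this log):

    import os

    # CSI drivers register with the kubelet via sockets in this directory.
    REG_DIR = "/var/lib/kubelet/plugins_registry"
    for name in sorted(os.listdir(REG_DIR)):
        print(name)  # expect e.g. kubevirt.io.hostpath-provisioner-reg.sock once the driver pod runs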
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.957372 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.957496 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 09:07:17 crc kubenswrapper[4811]: E0122 09:07:17.957524 4811 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 22 09:07:17 crc kubenswrapper[4811]: E0122 09:07:17.957582 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 09:08:21.957569803 +0000 UTC m=+146.279756926 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 22 09:07:17 crc kubenswrapper[4811]: E0122 09:07:17.957602 4811 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 22 09:07:17 crc kubenswrapper[4811]: E0122 09:07:17.957711 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 09:08:21.957694919 +0000 UTC m=+146.279882043 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.991323 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.991386 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 09:07:17 crc kubenswrapper[4811]: I0122 09:07:17.991403 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 09:07:17 crc kubenswrapper[4811]: E0122 09:07:17.992080 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 09:07:17 crc kubenswrapper[4811]: E0122 09:07:17.992122 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 09:07:17 crc kubenswrapper[4811]: E0122 09:07:17.992139 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.012354 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 08:43:25.428718574 +0000 UTC
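Three distinct failure classes are interleaved in this window: the expired webhook certificate (node-status patches), objects reported as "not registered" with the kubelet (the secret and configmap mounts, which is unsurprising while the node has never gone Ready in this boot), and pods that cannot get a sandbox because no CNI configuration exists in /etc/kubernetes/cni/net.d/. To see how often each class recurs across the whole journal, a rough tally (a sketch; the systemd unit name "kubelet" is assumed):

    import re
    import subprocess

    # Dump the kubelet journal and count occurrences of each failure signature.
    out = subprocess.run(["journalctl", "-u", "kubelet", "--no-pager"],
                         capture_output=True, text=True).stdout
    for pat in ("failed calling webhook",
                "not registered",
                "no CNI configuration file",
                "not found in the list of registered CSI drivers"):
        print(f"{len(re.findall(pat, out)):6d}  {pat}")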
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.041654 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.041764 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.041857 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.041932 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.041993 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:18Z","lastTransitionTime":"2026-01-22T09:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.144197 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.144263 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.144276 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.144294 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.144305 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:18Z","lastTransitionTime":"2026-01-22T09:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.246069 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.246098 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.246106 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.246118 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.246128 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:18Z","lastTransitionTime":"2026-01-22T09:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.347403 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.347450 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.347458 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.347469 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.347477 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:18Z","lastTransitionTime":"2026-01-22T09:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.448913 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.448944 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.448952 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.448964 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.448971 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:18Z","lastTransitionTime":"2026-01-22T09:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.550320 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.550345 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.550352 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.550362 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.550370 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:18Z","lastTransitionTime":"2026-01-22T09:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.651765 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.651796 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.651805 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.651816 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.651825 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:18Z","lastTransitionTime":"2026-01-22T09:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.753567 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.753591 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.753598 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.753610 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.753617 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:18Z","lastTransitionTime":"2026-01-22T09:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.855343 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.855386 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.855395 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.855404 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.855411 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:18Z","lastTransitionTime":"2026-01-22T09:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.957025 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.957145 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.957275 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.957346 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.957407 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:18Z","lastTransitionTime":"2026-01-22T09:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
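From here on the same five-entry block (four recorded node events plus the setters.go:603 Ready=False transition) repeats with nothing new in it; only the timestamps advance. The cadence is roughly one evaluation every 100 ms, which the timestamps above confirm directly (a sketch):

    from datetime import datetime

    # "Node became not ready" timestamps from the consecutive blocks above.
    stamps = ["09:07:18.041993", "09:07:18.144305", "09:07:18.246128",
              "09:07:18.347477", "09:07:18.448971", "09:07:18.550370"]
    ts = [datetime.strptime(s, "%H:%M:%S.%f") for s in stamps]
    for a, b in zip(ts, ts[1:]):
        print(f"{(b - a).total_seconds():.3f}s")  # ~0.101 s between evaluations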
Jan 22 09:07:18 crc kubenswrapper[4811]: I0122 09:07:18.991275 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l"
Jan 22 09:07:18 crc kubenswrapper[4811]: E0122 09:07:18.991488 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4"
Jan 22 09:07:19 crc kubenswrapper[4811]: I0122 09:07:19.013677 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 14:52:01.116771316 +0000 UTC
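One detail worth separating from the failures: the kubelet-serving certificate itself is still valid (expiration 2026-02-24), but the rotation deadline the certificate manager computes (2025-12-23 in the earlier entry, 2025-11-26 here) already lies in the past on 2026-01-22, so a rotation attempt is due immediately on every pass; the deadline differs between entries because it is re-drawn with jitter. Roughly (a sketch; the 0.7-1.0 jitter band and the one-year issue date are assumptions, not read from this log):

    import random
    from datetime import datetime, timedelta, timezone

    not_after = datetime(2026, 2, 24, 5, 53, 3, tzinfo=timezone.utc)  # from the log line
    not_before = not_after - timedelta(days=365)                      # assumed one-year validity
    # Rotation is scheduled at a jittered point late in the validity window.
    deadline = not_before + (not_after - not_before) * random.uniform(0.7, 1.0)
    now = datetime(2026, 1, 22, 9, 7, 19, tzinfo=timezone.utc)        # node clock per the log
    print("rotation deadline:", deadline, "already past:", deadline < now)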
Has your network provider started?"} Jan 22 09:07:19 crc kubenswrapper[4811]: I0122 09:07:19.262126 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:19 crc kubenswrapper[4811]: I0122 09:07:19.262167 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:19 crc kubenswrapper[4811]: I0122 09:07:19.262174 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:19 crc kubenswrapper[4811]: I0122 09:07:19.262183 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:19 crc kubenswrapper[4811]: I0122 09:07:19.262189 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:19Z","lastTransitionTime":"2026-01-22T09:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:19 crc kubenswrapper[4811]: I0122 09:07:19.363909 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:19 crc kubenswrapper[4811]: I0122 09:07:19.363931 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:19 crc kubenswrapper[4811]: I0122 09:07:19.363938 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:19 crc kubenswrapper[4811]: I0122 09:07:19.363947 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:19 crc kubenswrapper[4811]: I0122 09:07:19.363954 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:19Z","lastTransitionTime":"2026-01-22T09:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:19 crc kubenswrapper[4811]: I0122 09:07:19.465417 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:19 crc kubenswrapper[4811]: I0122 09:07:19.465440 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:19 crc kubenswrapper[4811]: I0122 09:07:19.465448 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:19 crc kubenswrapper[4811]: I0122 09:07:19.465458 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:19 crc kubenswrapper[4811]: I0122 09:07:19.465466 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:19Z","lastTransitionTime":"2026-01-22T09:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:19 crc kubenswrapper[4811]: I0122 09:07:19.567437 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:19 crc kubenswrapper[4811]: I0122 09:07:19.567462 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:19 crc kubenswrapper[4811]: I0122 09:07:19.567470 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:19 crc kubenswrapper[4811]: I0122 09:07:19.567479 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:19 crc kubenswrapper[4811]: I0122 09:07:19.567487 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:19Z","lastTransitionTime":"2026-01-22T09:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:19 crc kubenswrapper[4811]: I0122 09:07:19.669137 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:19 crc kubenswrapper[4811]: I0122 09:07:19.669236 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:19 crc kubenswrapper[4811]: I0122 09:07:19.669303 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:19 crc kubenswrapper[4811]: I0122 09:07:19.669358 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:19 crc kubenswrapper[4811]: I0122 09:07:19.669410 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:19Z","lastTransitionTime":"2026-01-22T09:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:19 crc kubenswrapper[4811]: I0122 09:07:19.771218 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:19 crc kubenswrapper[4811]: I0122 09:07:19.771341 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:19 crc kubenswrapper[4811]: I0122 09:07:19.771394 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:19 crc kubenswrapper[4811]: I0122 09:07:19.771447 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:19 crc kubenswrapper[4811]: I0122 09:07:19.771504 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:19Z","lastTransitionTime":"2026-01-22T09:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:19 crc kubenswrapper[4811]: I0122 09:07:19.872729 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:19 crc kubenswrapper[4811]: I0122 09:07:19.872757 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:19 crc kubenswrapper[4811]: I0122 09:07:19.872765 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:19 crc kubenswrapper[4811]: I0122 09:07:19.872800 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:19 crc kubenswrapper[4811]: I0122 09:07:19.872810 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:19Z","lastTransitionTime":"2026-01-22T09:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:19 crc kubenswrapper[4811]: I0122 09:07:19.974698 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:19 crc kubenswrapper[4811]: I0122 09:07:19.974719 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:19 crc kubenswrapper[4811]: I0122 09:07:19.974726 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:19 crc kubenswrapper[4811]: I0122 09:07:19.974737 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:19 crc kubenswrapper[4811]: I0122 09:07:19.974744 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:19Z","lastTransitionTime":"2026-01-22T09:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:19 crc kubenswrapper[4811]: I0122 09:07:19.990980 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:07:19 crc kubenswrapper[4811]: I0122 09:07:19.990993 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:07:19 crc kubenswrapper[4811]: E0122 09:07:19.991053 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:07:19 crc kubenswrapper[4811]: I0122 09:07:19.990980 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:07:19 crc kubenswrapper[4811]: E0122 09:07:19.991123 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:07:19 crc kubenswrapper[4811]: E0122 09:07:19.991152 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.014407 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 17:03:22.862033909 +0000 UTC Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.076784 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.076807 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.076815 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.076823 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.076830 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:20Z","lastTransitionTime":"2026-01-22T09:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.178432 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.178453 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.178460 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.178468 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.178475 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:20Z","lastTransitionTime":"2026-01-22T09:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.279604 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.279646 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.279656 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.279666 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.279673 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:20Z","lastTransitionTime":"2026-01-22T09:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.381328 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.381351 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.381358 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.381366 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.381373 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:20Z","lastTransitionTime":"2026-01-22T09:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.483429 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.483454 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.483470 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.483483 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.483492 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:20Z","lastTransitionTime":"2026-01-22T09:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.585222 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.585244 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.585252 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.585261 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.585279 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:20Z","lastTransitionTime":"2026-01-22T09:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.687053 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.687079 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.687086 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.687095 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.687102 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:20Z","lastTransitionTime":"2026-01-22T09:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.788894 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.788918 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.788926 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.788934 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.788941 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:20Z","lastTransitionTime":"2026-01-22T09:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.890368 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.890392 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.890399 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.890407 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.890414 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:20Z","lastTransitionTime":"2026-01-22T09:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.991381 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:07:20 crc kubenswrapper[4811]: E0122 09:07:20.991467 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4" Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.992330 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.992356 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.992365 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.992374 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:20 crc kubenswrapper[4811]: I0122 09:07:20.992382 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:20Z","lastTransitionTime":"2026-01-22T09:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:21 crc kubenswrapper[4811]: I0122 09:07:21.014661 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 09:51:56.696138814 +0000 UTC Jan 22 09:07:21 crc kubenswrapper[4811]: I0122 09:07:21.093614 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:21 crc kubenswrapper[4811]: I0122 09:07:21.093654 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:21 crc kubenswrapper[4811]: I0122 09:07:21.093661 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:21 crc kubenswrapper[4811]: I0122 09:07:21.093670 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:21 crc kubenswrapper[4811]: I0122 09:07:21.093677 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:21Z","lastTransitionTime":"2026-01-22T09:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:21 crc kubenswrapper[4811]: I0122 09:07:21.195496 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:21 crc kubenswrapper[4811]: I0122 09:07:21.195521 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:21 crc kubenswrapper[4811]: I0122 09:07:21.195527 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:21 crc kubenswrapper[4811]: I0122 09:07:21.195536 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:21 crc kubenswrapper[4811]: I0122 09:07:21.195543 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:21Z","lastTransitionTime":"2026-01-22T09:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:21 crc kubenswrapper[4811]: I0122 09:07:21.296832 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:21 crc kubenswrapper[4811]: I0122 09:07:21.296852 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:21 crc kubenswrapper[4811]: I0122 09:07:21.296869 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:21 crc kubenswrapper[4811]: I0122 09:07:21.296878 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:21 crc kubenswrapper[4811]: I0122 09:07:21.296885 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:21Z","lastTransitionTime":"2026-01-22T09:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:21 crc kubenswrapper[4811]: I0122 09:07:21.398122 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:21 crc kubenswrapper[4811]: I0122 09:07:21.398144 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:21 crc kubenswrapper[4811]: I0122 09:07:21.398150 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:21 crc kubenswrapper[4811]: I0122 09:07:21.398159 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:21 crc kubenswrapper[4811]: I0122 09:07:21.398166 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:21Z","lastTransitionTime":"2026-01-22T09:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:21 crc kubenswrapper[4811]: I0122 09:07:21.499563 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:21 crc kubenswrapper[4811]: I0122 09:07:21.499591 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:21 crc kubenswrapper[4811]: I0122 09:07:21.499600 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:21 crc kubenswrapper[4811]: I0122 09:07:21.499610 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:21 crc kubenswrapper[4811]: I0122 09:07:21.499617 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:21Z","lastTransitionTime":"2026-01-22T09:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:21 crc kubenswrapper[4811]: I0122 09:07:21.600658 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:21 crc kubenswrapper[4811]: I0122 09:07:21.600706 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:21 crc kubenswrapper[4811]: I0122 09:07:21.600714 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:21 crc kubenswrapper[4811]: I0122 09:07:21.600728 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:21 crc kubenswrapper[4811]: I0122 09:07:21.600737 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:21Z","lastTransitionTime":"2026-01-22T09:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:21 crc kubenswrapper[4811]: I0122 09:07:21.701897 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:21 crc kubenswrapper[4811]: I0122 09:07:21.701920 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:21 crc kubenswrapper[4811]: I0122 09:07:21.701928 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:21 crc kubenswrapper[4811]: I0122 09:07:21.701938 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:21 crc kubenswrapper[4811]: I0122 09:07:21.701945 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:21Z","lastTransitionTime":"2026-01-22T09:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:21 crc kubenswrapper[4811]: I0122 09:07:21.803511 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:21 crc kubenswrapper[4811]: I0122 09:07:21.803590 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:21 crc kubenswrapper[4811]: I0122 09:07:21.803605 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:21 crc kubenswrapper[4811]: I0122 09:07:21.803651 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:21 crc kubenswrapper[4811]: I0122 09:07:21.803668 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:21Z","lastTransitionTime":"2026-01-22T09:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:21 crc kubenswrapper[4811]: I0122 09:07:21.905802 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:21 crc kubenswrapper[4811]: I0122 09:07:21.905837 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:21 crc kubenswrapper[4811]: I0122 09:07:21.905845 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:21 crc kubenswrapper[4811]: I0122 09:07:21.905857 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:21 crc kubenswrapper[4811]: I0122 09:07:21.905875 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:21Z","lastTransitionTime":"2026-01-22T09:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:21 crc kubenswrapper[4811]: I0122 09:07:21.991714 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:07:21 crc kubenswrapper[4811]: I0122 09:07:21.991714 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:07:21 crc kubenswrapper[4811]: E0122 09:07:21.991826 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:07:21 crc kubenswrapper[4811]: E0122 09:07:21.991893 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:07:21 crc kubenswrapper[4811]: I0122 09:07:21.991807 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:07:21 crc kubenswrapper[4811]: E0122 09:07:21.992086 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.007191 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.007214 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.007222 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.007233 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.007241 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:22Z","lastTransitionTime":"2026-01-22T09:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.015679 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 04:53:02.767978586 +0000 UTC Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.109338 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.109361 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.109369 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.109387 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.109396 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:22Z","lastTransitionTime":"2026-01-22T09:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.211104 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.211128 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.211136 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.211145 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.211170 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:22Z","lastTransitionTime":"2026-01-22T09:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.312987 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.313004 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.313012 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.313021 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.313028 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:22Z","lastTransitionTime":"2026-01-22T09:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.414282 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.414324 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.414337 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.414350 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.414361 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:22Z","lastTransitionTime":"2026-01-22T09:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.515761 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.515799 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.515809 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.515819 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.515828 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:22Z","lastTransitionTime":"2026-01-22T09:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.617057 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.617086 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.617096 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.617106 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.617114 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:22Z","lastTransitionTime":"2026-01-22T09:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.719040 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.719077 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.719085 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.719095 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.719101 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:22Z","lastTransitionTime":"2026-01-22T09:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.820920 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.821134 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.821144 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.821155 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.821164 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:22Z","lastTransitionTime":"2026-01-22T09:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.922892 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.922916 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.922925 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.922935 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.922942 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:22Z","lastTransitionTime":"2026-01-22T09:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:22 crc kubenswrapper[4811]: I0122 09:07:22.991752 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:07:22 crc kubenswrapper[4811]: E0122 09:07:22.991961 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4" Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.016290 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 11:42:48.347136286 +0000 UTC Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.023955 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.023978 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.023986 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.023995 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.024002 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:23Z","lastTransitionTime":"2026-01-22T09:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.125096 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.125125 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.125135 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.125165 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.125175 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:23Z","lastTransitionTime":"2026-01-22T09:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.226801 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.226824 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.226832 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.226841 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.226847 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:23Z","lastTransitionTime":"2026-01-22T09:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.328588 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.328615 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.328651 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.328663 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.328671 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:23Z","lastTransitionTime":"2026-01-22T09:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.429808 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.429829 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.429836 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.429843 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.429850 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:23Z","lastTransitionTime":"2026-01-22T09:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.531565 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.531584 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.531591 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.531599 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.531605 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:23Z","lastTransitionTime":"2026-01-22T09:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.633591 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.633610 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.633616 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.633645 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.633653 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:23Z","lastTransitionTime":"2026-01-22T09:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.735317 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.735336 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.735360 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.735369 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.735375 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:23Z","lastTransitionTime":"2026-01-22T09:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.837172 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.837190 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.837196 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.837206 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.837212 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:23Z","lastTransitionTime":"2026-01-22T09:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.939028 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.939048 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.939055 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.939064 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.939070 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:23Z","lastTransitionTime":"2026-01-22T09:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.991744 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.991793 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:07:23 crc kubenswrapper[4811]: I0122 09:07:23.991793 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:07:23 crc kubenswrapper[4811]: E0122 09:07:23.991827 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:07:23 crc kubenswrapper[4811]: E0122 09:07:23.991889 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:07:23 crc kubenswrapper[4811]: E0122 09:07:23.991944 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.016615 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 13:33:26.875920726 +0000 UTC Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.040524 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.040545 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.040552 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.040561 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.040568 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:24Z","lastTransitionTime":"2026-01-22T09:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.142068 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.142103 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.142110 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.142120 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.142126 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:24Z","lastTransitionTime":"2026-01-22T09:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.243279 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.243301 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.243309 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.243319 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.243328 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:24Z","lastTransitionTime":"2026-01-22T09:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.344518 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.344609 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.344696 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.344761 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.344817 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:24Z","lastTransitionTime":"2026-01-22T09:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.446692 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.446723 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.446744 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.446753 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.446761 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:24Z","lastTransitionTime":"2026-01-22T09:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.548225 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.548281 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.548290 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.548297 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.548305 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:24Z","lastTransitionTime":"2026-01-22T09:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.649797 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.649822 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.649837 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.649865 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.649873 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:24Z","lastTransitionTime":"2026-01-22T09:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.751251 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.751271 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.751278 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.751286 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.751292 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:24Z","lastTransitionTime":"2026-01-22T09:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.852417 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.852559 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.852619 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.852705 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.852773 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:24Z","lastTransitionTime":"2026-01-22T09:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.954079 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.954214 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.954300 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.954358 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.954434 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:24Z","lastTransitionTime":"2026-01-22T09:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:24 crc kubenswrapper[4811]: I0122 09:07:24.991500 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:07:24 crc kubenswrapper[4811]: E0122 09:07:24.991720 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4" Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.016998 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 18:55:13.982763139 +0000 UTC Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.056194 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.056215 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.056223 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.056231 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.056245 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:25Z","lastTransitionTime":"2026-01-22T09:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.157750 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.157773 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.157780 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.157789 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.157796 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:25Z","lastTransitionTime":"2026-01-22T09:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.259250 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.259282 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.259290 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.259298 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.259304 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:25Z","lastTransitionTime":"2026-01-22T09:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.360136 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.360227 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.360286 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.360340 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.360396 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:25Z","lastTransitionTime":"2026-01-22T09:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.462223 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.462248 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.462256 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.462266 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.462277 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:25Z","lastTransitionTime":"2026-01-22T09:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.563443 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.563465 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.563473 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.563481 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.563489 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:25Z","lastTransitionTime":"2026-01-22T09:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.665264 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.665288 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.665297 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.665308 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.665315 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:25Z","lastTransitionTime":"2026-01-22T09:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.766817 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.766845 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.766853 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.766866 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.766874 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:25Z","lastTransitionTime":"2026-01-22T09:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.868260 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.868285 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.868300 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.868309 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.868317 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:25Z","lastTransitionTime":"2026-01-22T09:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.969608 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.969739 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.969840 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.969919 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.969994 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:25Z","lastTransitionTime":"2026-01-22T09:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.991125 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.991141 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:07:25 crc kubenswrapper[4811]: I0122 09:07:25.991164 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:07:25 crc kubenswrapper[4811]: E0122 09:07:25.991203 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:07:25 crc kubenswrapper[4811]: E0122 09:07:25.991256 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:07:25 crc kubenswrapper[4811]: E0122 09:07:25.991314 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.012277 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=37.012261797 podStartE2EDuration="37.012261797s" podCreationTimestamp="2026-01-22 09:06:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:07:26.011952171 +0000 UTC m=+90.334139294" watchObservedRunningTime="2026-01-22 09:07:26.012261797 +0000 UTC m=+90.334448920" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.012569 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gwnx7" podStartSLOduration=68.012564889 podStartE2EDuration="1m8.012564889s" podCreationTimestamp="2026-01-22 09:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:07:26.004700759 +0000 UTC m=+90.326887902" watchObservedRunningTime="2026-01-22 09:07:26.012564889 +0000 UTC m=+90.334752013" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.017072 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 15:10:00.42201896 +0000 UTC Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.043385 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=68.043370252 podStartE2EDuration="1m8.043370252s" podCreationTimestamp="2026-01-22 09:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:07:26.04300905 +0000 UTC m=+90.365196183" watchObservedRunningTime="2026-01-22 09:07:26.043370252 +0000 UTC m=+90.365557376" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.043588 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-kfqgt" podStartSLOduration=69.043582274 podStartE2EDuration="1m9.043582274s" podCreationTimestamp="2026-01-22 09:06:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:07:26.027067635 +0000 UTC m=+90.349254759" 
watchObservedRunningTime="2026-01-22 09:07:26.043582274 +0000 UTC m=+90.365769396" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.071045 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.071079 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.071088 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.071099 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.071106 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:26Z","lastTransitionTime":"2026-01-22T09:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.075796 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-xhhs7" podStartSLOduration=69.075784988 podStartE2EDuration="1m9.075784988s" podCreationTimestamp="2026-01-22 09:06:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:07:26.075432983 +0000 UTC m=+90.397620105" watchObservedRunningTime="2026-01-22 09:07:26.075784988 +0000 UTC m=+90.397972101" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.097644 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-9g4j8" podStartSLOduration=69.097613106 podStartE2EDuration="1m9.097613106s" podCreationTimestamp="2026-01-22 09:06:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:07:26.087670027 +0000 UTC m=+90.409857150" watchObservedRunningTime="2026-01-22 09:07:26.097613106 +0000 UTC m=+90.419800230" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.105669 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podStartSLOduration=69.105657929 podStartE2EDuration="1m9.105657929s" podCreationTimestamp="2026-01-22 09:06:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:07:26.098447653 +0000 UTC m=+90.420634776" watchObservedRunningTime="2026-01-22 09:07:26.105657929 +0000 UTC m=+90.427845052" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.126698 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=72.126684052 podStartE2EDuration="1m12.126684052s" podCreationTimestamp="2026-01-22 09:06:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:07:26.117859185 +0000 UTC m=+90.440046298" watchObservedRunningTime="2026-01-22 09:07:26.126684052 
+0000 UTC m=+90.448871175" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.169965 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-n2kj4" podStartSLOduration=69.169952528 podStartE2EDuration="1m9.169952528s" podCreationTimestamp="2026-01-22 09:06:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:07:26.169682507 +0000 UTC m=+90.491869630" watchObservedRunningTime="2026-01-22 09:07:26.169952528 +0000 UTC m=+90.492139651" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.172470 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.172495 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.172503 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.172513 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.172522 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:26Z","lastTransitionTime":"2026-01-22T09:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.177649 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=17.177640074 podStartE2EDuration="17.177640074s" podCreationTimestamp="2026-01-22 09:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:07:26.177001187 +0000 UTC m=+90.499188310" watchObservedRunningTime="2026-01-22 09:07:26.177640074 +0000 UTC m=+90.499827197" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.186709 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=66.186700697 podStartE2EDuration="1m6.186700697s" podCreationTimestamp="2026-01-22 09:06:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:07:26.185908029 +0000 UTC m=+90.508095152" watchObservedRunningTime="2026-01-22 09:07:26.186700697 +0000 UTC m=+90.508887820" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.274041 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.274074 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.274083 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.274095 4811 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.274104 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:26Z","lastTransitionTime":"2026-01-22T09:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.376191 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.376220 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.376228 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.376239 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.376249 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:26Z","lastTransitionTime":"2026-01-22T09:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.478318 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.478479 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.478563 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.478655 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.478722 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:26Z","lastTransitionTime":"2026-01-22T09:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.580940 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.580966 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.580974 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.580984 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.580991 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:26Z","lastTransitionTime":"2026-01-22T09:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.682926 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.682985 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.682995 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.683011 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.683021 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:26Z","lastTransitionTime":"2026-01-22T09:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.784563 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.784618 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.784644 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.784659 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.784670 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:26Z","lastTransitionTime":"2026-01-22T09:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.886435 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.886476 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.886486 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.886501 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.886511 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:26Z","lastTransitionTime":"2026-01-22T09:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.988065 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.988096 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.988104 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.988119 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.988128 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:26Z","lastTransitionTime":"2026-01-22T09:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.991663 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:07:26 crc kubenswrapper[4811]: E0122 09:07:26.991761 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4" Jan 22 09:07:26 crc kubenswrapper[4811]: I0122 09:07:26.992578 4811 scope.go:117] "RemoveContainer" containerID="c4ae4bac915c177448079f710c8f3f5cb8ee9368318699d1500a8e2ec2075fb5" Jan 22 09:07:26 crc kubenswrapper[4811]: E0122 09:07:26.992879 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-274vf_openshift-ovn-kubernetes(1cd0f0db-de53-47c0-9b45-2ce8b37392a3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" Jan 22 09:07:27 crc kubenswrapper[4811]: I0122 09:07:27.017583 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 15:01:54.927987845 +0000 UTC Jan 22 09:07:27 crc kubenswrapper[4811]: I0122 09:07:27.090184 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:27 crc kubenswrapper[4811]: I0122 09:07:27.090232 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:27 crc kubenswrapper[4811]: I0122 09:07:27.090242 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:27 crc kubenswrapper[4811]: I0122 09:07:27.090253 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:27 crc kubenswrapper[4811]: I0122 09:07:27.090262 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:27Z","lastTransitionTime":"2026-01-22T09:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:27 crc kubenswrapper[4811]: I0122 09:07:27.191696 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:27 crc kubenswrapper[4811]: I0122 09:07:27.191723 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:27 crc kubenswrapper[4811]: I0122 09:07:27.191732 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:27 crc kubenswrapper[4811]: I0122 09:07:27.191742 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:27 crc kubenswrapper[4811]: I0122 09:07:27.191750 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:27Z","lastTransitionTime":"2026-01-22T09:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:27 crc kubenswrapper[4811]: I0122 09:07:27.293014 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:27 crc kubenswrapper[4811]: I0122 09:07:27.293066 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:27 crc kubenswrapper[4811]: I0122 09:07:27.293076 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:27 crc kubenswrapper[4811]: I0122 09:07:27.293087 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:27 crc kubenswrapper[4811]: I0122 09:07:27.293095 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:27Z","lastTransitionTime":"2026-01-22T09:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:27 crc kubenswrapper[4811]: I0122 09:07:27.394400 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:27 crc kubenswrapper[4811]: I0122 09:07:27.394423 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:27 crc kubenswrapper[4811]: I0122 09:07:27.394433 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:27 crc kubenswrapper[4811]: I0122 09:07:27.394443 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:27 crc kubenswrapper[4811]: I0122 09:07:27.394452 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:27Z","lastTransitionTime":"2026-01-22T09:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:27 crc kubenswrapper[4811]: I0122 09:07:27.496518 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:27 crc kubenswrapper[4811]: I0122 09:07:27.496784 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:27 crc kubenswrapper[4811]: I0122 09:07:27.496849 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:27 crc kubenswrapper[4811]: I0122 09:07:27.496944 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:27 crc kubenswrapper[4811]: I0122 09:07:27.497000 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:27Z","lastTransitionTime":"2026-01-22T09:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:27 crc kubenswrapper[4811]: I0122 09:07:27.598925 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:27 crc kubenswrapper[4811]: I0122 09:07:27.598963 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:27 crc kubenswrapper[4811]: I0122 09:07:27.598971 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:27 crc kubenswrapper[4811]: I0122 09:07:27.598984 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:27 crc kubenswrapper[4811]: I0122 09:07:27.598994 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:27Z","lastTransitionTime":"2026-01-22T09:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:27 crc kubenswrapper[4811]: I0122 09:07:27.700531 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:27 crc kubenswrapper[4811]: I0122 09:07:27.700567 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:27 crc kubenswrapper[4811]: I0122 09:07:27.700576 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:27 crc kubenswrapper[4811]: I0122 09:07:27.700590 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:27 crc kubenswrapper[4811]: I0122 09:07:27.700599 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:27Z","lastTransitionTime":"2026-01-22T09:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:27 crc kubenswrapper[4811]: I0122 09:07:27.802347 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:27 crc kubenswrapper[4811]: I0122 09:07:27.802378 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:27 crc kubenswrapper[4811]: I0122 09:07:27.802386 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:27 crc kubenswrapper[4811]: I0122 09:07:27.802424 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:27 crc kubenswrapper[4811]: I0122 09:07:27.802433 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:27Z","lastTransitionTime":"2026-01-22T09:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:27 crc kubenswrapper[4811]: I0122 09:07:27.904142 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:27 crc kubenswrapper[4811]: I0122 09:07:27.904164 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:27 crc kubenswrapper[4811]: I0122 09:07:27.904172 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:27 crc kubenswrapper[4811]: I0122 09:07:27.904181 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:27 crc kubenswrapper[4811]: I0122 09:07:27.904187 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:27Z","lastTransitionTime":"2026-01-22T09:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:27 crc kubenswrapper[4811]: I0122 09:07:27.991970 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:07:27 crc kubenswrapper[4811]: I0122 09:07:27.991976 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:07:27 crc kubenswrapper[4811]: I0122 09:07:27.992055 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:07:27 crc kubenswrapper[4811]: E0122 09:07:27.992270 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:07:27 crc kubenswrapper[4811]: E0122 09:07:27.992336 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:07:27 crc kubenswrapper[4811]: E0122 09:07:27.992431 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:07:28 crc kubenswrapper[4811]: I0122 09:07:28.006113 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:28 crc kubenswrapper[4811]: I0122 09:07:28.006143 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:28 crc kubenswrapper[4811]: I0122 09:07:28.006151 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:28 crc kubenswrapper[4811]: I0122 09:07:28.006161 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:28 crc kubenswrapper[4811]: I0122 09:07:28.006187 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:28Z","lastTransitionTime":"2026-01-22T09:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:28 crc kubenswrapper[4811]: I0122 09:07:28.018300 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 09:00:06.099186819 +0000 UTC Jan 22 09:07:28 crc kubenswrapper[4811]: I0122 09:07:28.107738 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:28 crc kubenswrapper[4811]: I0122 09:07:28.107759 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:28 crc kubenswrapper[4811]: I0122 09:07:28.107767 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:28 crc kubenswrapper[4811]: I0122 09:07:28.107778 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:28 crc kubenswrapper[4811]: I0122 09:07:28.107787 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:28Z","lastTransitionTime":"2026-01-22T09:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 09:07:28 crc kubenswrapper[4811]: I0122 09:07:28.134959 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 09:07:28 crc kubenswrapper[4811]: I0122 09:07:28.134988 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 09:07:28 crc kubenswrapper[4811]: I0122 09:07:28.134997 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 09:07:28 crc kubenswrapper[4811]: I0122 09:07:28.135009 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 09:07:28 crc kubenswrapper[4811]: I0122 09:07:28.135016 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T09:07:28Z","lastTransitionTime":"2026-01-22T09:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 09:07:28 crc kubenswrapper[4811]: I0122 09:07:28.163854 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-s9hrh"] Jan 22 09:07:28 crc kubenswrapper[4811]: I0122 09:07:28.164185 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s9hrh" Jan 22 09:07:28 crc kubenswrapper[4811]: I0122 09:07:28.165791 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 22 09:07:28 crc kubenswrapper[4811]: I0122 09:07:28.166001 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 22 09:07:28 crc kubenswrapper[4811]: I0122 09:07:28.166242 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 22 09:07:28 crc kubenswrapper[4811]: I0122 09:07:28.166591 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 22 09:07:28 crc kubenswrapper[4811]: I0122 09:07:28.236359 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/bcd99a19-fddc-484e-b47f-70174e2aac95-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-s9hrh\" (UID: \"bcd99a19-fddc-484e-b47f-70174e2aac95\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s9hrh" Jan 22 09:07:28 crc kubenswrapper[4811]: I0122 09:07:28.236405 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bcd99a19-fddc-484e-b47f-70174e2aac95-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-s9hrh\" (UID: \"bcd99a19-fddc-484e-b47f-70174e2aac95\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s9hrh" Jan 22 09:07:28 crc kubenswrapper[4811]: I0122 09:07:28.236433 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/bcd99a19-fddc-484e-b47f-70174e2aac95-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-s9hrh\" (UID: \"bcd99a19-fddc-484e-b47f-70174e2aac95\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s9hrh" Jan 22 09:07:28 crc kubenswrapper[4811]: I0122 09:07:28.236516 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bcd99a19-fddc-484e-b47f-70174e2aac95-service-ca\") pod \"cluster-version-operator-5c965bbfc6-s9hrh\" (UID: \"bcd99a19-fddc-484e-b47f-70174e2aac95\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s9hrh" Jan 22 09:07:28 crc kubenswrapper[4811]: I0122 09:07:28.236544 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/bcd99a19-fddc-484e-b47f-70174e2aac95-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-s9hrh\" (UID: \"bcd99a19-fddc-484e-b47f-70174e2aac95\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s9hrh" Jan 22 09:07:28 crc kubenswrapper[4811]: I0122 09:07:28.337692 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bcd99a19-fddc-484e-b47f-70174e2aac95-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-s9hrh\" (UID: \"bcd99a19-fddc-484e-b47f-70174e2aac95\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s9hrh" Jan 22 09:07:28 crc kubenswrapper[4811]: I0122 09:07:28.337727 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bcd99a19-fddc-484e-b47f-70174e2aac95-service-ca\") pod \"cluster-version-operator-5c965bbfc6-s9hrh\" (UID: \"bcd99a19-fddc-484e-b47f-70174e2aac95\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s9hrh" Jan 22 09:07:28 crc kubenswrapper[4811]: I0122 09:07:28.337744 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/bcd99a19-fddc-484e-b47f-70174e2aac95-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-s9hrh\" (UID: \"bcd99a19-fddc-484e-b47f-70174e2aac95\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s9hrh" Jan 22 09:07:28 crc kubenswrapper[4811]: I0122 09:07:28.337773 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/bcd99a19-fddc-484e-b47f-70174e2aac95-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-s9hrh\" (UID: \"bcd99a19-fddc-484e-b47f-70174e2aac95\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s9hrh" Jan 22 09:07:28 crc kubenswrapper[4811]: I0122 09:07:28.337807 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bcd99a19-fddc-484e-b47f-70174e2aac95-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-s9hrh\" (UID: \"bcd99a19-fddc-484e-b47f-70174e2aac95\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s9hrh" Jan 22 09:07:28 crc kubenswrapper[4811]: I0122 09:07:28.337973 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/bcd99a19-fddc-484e-b47f-70174e2aac95-etc-ssl-certs\") pod 
\"cluster-version-operator-5c965bbfc6-s9hrh\" (UID: \"bcd99a19-fddc-484e-b47f-70174e2aac95\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s9hrh" Jan 22 09:07:28 crc kubenswrapper[4811]: I0122 09:07:28.338030 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/bcd99a19-fddc-484e-b47f-70174e2aac95-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-s9hrh\" (UID: \"bcd99a19-fddc-484e-b47f-70174e2aac95\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s9hrh" Jan 22 09:07:28 crc kubenswrapper[4811]: I0122 09:07:28.338461 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bcd99a19-fddc-484e-b47f-70174e2aac95-service-ca\") pod \"cluster-version-operator-5c965bbfc6-s9hrh\" (UID: \"bcd99a19-fddc-484e-b47f-70174e2aac95\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s9hrh" Jan 22 09:07:28 crc kubenswrapper[4811]: I0122 09:07:28.342356 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bcd99a19-fddc-484e-b47f-70174e2aac95-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-s9hrh\" (UID: \"bcd99a19-fddc-484e-b47f-70174e2aac95\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s9hrh" Jan 22 09:07:28 crc kubenswrapper[4811]: I0122 09:07:28.349923 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bcd99a19-fddc-484e-b47f-70174e2aac95-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-s9hrh\" (UID: \"bcd99a19-fddc-484e-b47f-70174e2aac95\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s9hrh" Jan 22 09:07:28 crc kubenswrapper[4811]: I0122 09:07:28.473839 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s9hrh" Jan 22 09:07:28 crc kubenswrapper[4811]: I0122 09:07:28.991692 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:07:28 crc kubenswrapper[4811]: E0122 09:07:28.991779 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4" Jan 22 09:07:29 crc kubenswrapper[4811]: I0122 09:07:29.018934 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 00:41:47.206750831 +0000 UTC Jan 22 09:07:29 crc kubenswrapper[4811]: I0122 09:07:29.018989 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 22 09:07:29 crc kubenswrapper[4811]: I0122 09:07:29.025968 4811 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 22 09:07:29 crc kubenswrapper[4811]: I0122 09:07:29.370089 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s9hrh" event={"ID":"bcd99a19-fddc-484e-b47f-70174e2aac95","Type":"ContainerStarted","Data":"81c17599b228a8e4b7cbdceb83d4e4f35d003a752c91cded4b6a96a80c16e1a3"} Jan 22 09:07:29 crc kubenswrapper[4811]: I0122 09:07:29.370131 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s9hrh" event={"ID":"bcd99a19-fddc-484e-b47f-70174e2aac95","Type":"ContainerStarted","Data":"3df9df3a8a26049f7725baf592b22c4126b69caf6f070d9b3bd0229aee73afec"} Jan 22 09:07:29 crc kubenswrapper[4811]: I0122 09:07:29.380236 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s9hrh" podStartSLOduration=72.380226736 podStartE2EDuration="1m12.380226736s" podCreationTimestamp="2026-01-22 09:06:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:07:29.379730667 +0000 UTC m=+93.701917791" watchObservedRunningTime="2026-01-22 09:07:29.380226736 +0000 UTC m=+93.702413858" Jan 22 09:07:29 crc kubenswrapper[4811]: I0122 09:07:29.991995 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:07:29 crc kubenswrapper[4811]: I0122 09:07:29.992075 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:07:29 crc kubenswrapper[4811]: E0122 09:07:29.992095 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:07:29 crc kubenswrapper[4811]: I0122 09:07:29.991995 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:07:29 crc kubenswrapper[4811]: E0122 09:07:29.992171 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:07:29 crc kubenswrapper[4811]: E0122 09:07:29.992245 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:07:30 crc kubenswrapper[4811]: I0122 09:07:30.991500 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:07:30 crc kubenswrapper[4811]: E0122 09:07:30.991589 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4" Jan 22 09:07:31 crc kubenswrapper[4811]: I0122 09:07:31.991195 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:07:31 crc kubenswrapper[4811]: I0122 09:07:31.991193 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:07:31 crc kubenswrapper[4811]: E0122 09:07:31.991285 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:07:31 crc kubenswrapper[4811]: E0122 09:07:31.991381 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:07:31 crc kubenswrapper[4811]: I0122 09:07:31.991448 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:07:31 crc kubenswrapper[4811]: E0122 09:07:31.991512 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:07:32 crc kubenswrapper[4811]: I0122 09:07:32.991500 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:07:32 crc kubenswrapper[4811]: E0122 09:07:32.991606 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4" Jan 22 09:07:33 crc kubenswrapper[4811]: I0122 09:07:33.991086 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:07:33 crc kubenswrapper[4811]: E0122 09:07:33.991202 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:07:33 crc kubenswrapper[4811]: I0122 09:07:33.991090 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:07:33 crc kubenswrapper[4811]: I0122 09:07:33.991248 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:07:33 crc kubenswrapper[4811]: E0122 09:07:33.991273 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:07:33 crc kubenswrapper[4811]: E0122 09:07:33.991383 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:07:34 crc kubenswrapper[4811]: I0122 09:07:34.991762 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:07:34 crc kubenswrapper[4811]: E0122 09:07:34.991966 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4" Jan 22 09:07:35 crc kubenswrapper[4811]: I0122 09:07:35.294409 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de4b38a0-0c7a-4693-9f92-40fefd6bc9b4-metrics-certs\") pod \"network-metrics-daemon-bhj4l\" (UID: \"de4b38a0-0c7a-4693-9f92-40fefd6bc9b4\") " pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:07:35 crc kubenswrapper[4811]: E0122 09:07:35.294599 4811 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 09:07:35 crc kubenswrapper[4811]: E0122 09:07:35.294717 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de4b38a0-0c7a-4693-9f92-40fefd6bc9b4-metrics-certs podName:de4b38a0-0c7a-4693-9f92-40fefd6bc9b4 nodeName:}" failed. No retries permitted until 2026-01-22 09:08:39.294693191 +0000 UTC m=+163.616880314 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/de4b38a0-0c7a-4693-9f92-40fefd6bc9b4-metrics-certs") pod "network-metrics-daemon-bhj4l" (UID: "de4b38a0-0c7a-4693-9f92-40fefd6bc9b4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 09:07:35 crc kubenswrapper[4811]: I0122 09:07:35.991670 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:07:35 crc kubenswrapper[4811]: I0122 09:07:35.992857 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:07:35 crc kubenswrapper[4811]: I0122 09:07:35.992983 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:07:35 crc kubenswrapper[4811]: E0122 09:07:35.993068 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:07:35 crc kubenswrapper[4811]: E0122 09:07:35.993169 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:07:35 crc kubenswrapper[4811]: E0122 09:07:35.993488 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:07:36 crc kubenswrapper[4811]: I0122 09:07:36.991989 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:07:36 crc kubenswrapper[4811]: E0122 09:07:36.992268 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4" Jan 22 09:07:37 crc kubenswrapper[4811]: I0122 09:07:37.991460 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:07:37 crc kubenswrapper[4811]: I0122 09:07:37.991489 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:07:37 crc kubenswrapper[4811]: E0122 09:07:37.991548 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:07:37 crc kubenswrapper[4811]: I0122 09:07:37.991598 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:07:37 crc kubenswrapper[4811]: E0122 09:07:37.991697 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:07:37 crc kubenswrapper[4811]: E0122 09:07:37.991726 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:07:38 crc kubenswrapper[4811]: I0122 09:07:38.991154 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:07:38 crc kubenswrapper[4811]: E0122 09:07:38.991261 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4" Jan 22 09:07:39 crc kubenswrapper[4811]: I0122 09:07:39.991983 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:07:39 crc kubenswrapper[4811]: I0122 09:07:39.992034 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:07:39 crc kubenswrapper[4811]: E0122 09:07:39.992087 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:07:39 crc kubenswrapper[4811]: I0122 09:07:39.992119 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:07:39 crc kubenswrapper[4811]: E0122 09:07:39.992244 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:07:39 crc kubenswrapper[4811]: E0122 09:07:39.992319 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:07:40 crc kubenswrapper[4811]: I0122 09:07:40.991692 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:07:40 crc kubenswrapper[4811]: E0122 09:07:40.991788 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4" Jan 22 09:07:40 crc kubenswrapper[4811]: I0122 09:07:40.992320 4811 scope.go:117] "RemoveContainer" containerID="c4ae4bac915c177448079f710c8f3f5cb8ee9368318699d1500a8e2ec2075fb5" Jan 22 09:07:40 crc kubenswrapper[4811]: E0122 09:07:40.992467 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-274vf_openshift-ovn-kubernetes(1cd0f0db-de53-47c0-9b45-2ce8b37392a3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" Jan 22 09:07:41 crc kubenswrapper[4811]: I0122 09:07:41.991520 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:07:41 crc kubenswrapper[4811]: I0122 09:07:41.991611 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:07:41 crc kubenswrapper[4811]: E0122 09:07:41.991659 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:07:41 crc kubenswrapper[4811]: E0122 09:07:41.991837 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:07:41 crc kubenswrapper[4811]: I0122 09:07:41.991889 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:07:41 crc kubenswrapper[4811]: E0122 09:07:41.992037 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:07:42 crc kubenswrapper[4811]: I0122 09:07:42.991875 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:07:42 crc kubenswrapper[4811]: E0122 09:07:42.992113 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4" Jan 22 09:07:43 crc kubenswrapper[4811]: I0122 09:07:43.991958 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:07:43 crc kubenswrapper[4811]: I0122 09:07:43.992012 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:07:43 crc kubenswrapper[4811]: I0122 09:07:43.991961 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:07:43 crc kubenswrapper[4811]: E0122 09:07:43.992068 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:07:43 crc kubenswrapper[4811]: E0122 09:07:43.992164 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:07:43 crc kubenswrapper[4811]: E0122 09:07:43.992234 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:07:44 crc kubenswrapper[4811]: I0122 09:07:44.991660 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:07:44 crc kubenswrapper[4811]: E0122 09:07:44.991750 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4" Jan 22 09:07:45 crc kubenswrapper[4811]: I0122 09:07:45.991188 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:07:45 crc kubenswrapper[4811]: I0122 09:07:45.991254 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:07:45 crc kubenswrapper[4811]: E0122 09:07:45.991968 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:07:45 crc kubenswrapper[4811]: I0122 09:07:45.992010 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:07:45 crc kubenswrapper[4811]: E0122 09:07:45.992112 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:07:45 crc kubenswrapper[4811]: E0122 09:07:45.992160 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:07:46 crc kubenswrapper[4811]: I0122 09:07:46.991878 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:07:46 crc kubenswrapper[4811]: E0122 09:07:46.991985 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4" Jan 22 09:07:47 crc kubenswrapper[4811]: I0122 09:07:47.991323 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:07:47 crc kubenswrapper[4811]: I0122 09:07:47.992376 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:07:47 crc kubenswrapper[4811]: I0122 09:07:47.992687 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:07:47 crc kubenswrapper[4811]: E0122 09:07:47.992815 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:07:47 crc kubenswrapper[4811]: E0122 09:07:47.992930 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:07:47 crc kubenswrapper[4811]: E0122 09:07:47.993064 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:07:48 crc kubenswrapper[4811]: I0122 09:07:48.991863 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:07:48 crc kubenswrapper[4811]: E0122 09:07:48.991965 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4" Jan 22 09:07:49 crc kubenswrapper[4811]: I0122 09:07:49.991062 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:07:49 crc kubenswrapper[4811]: I0122 09:07:49.991089 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:07:49 crc kubenswrapper[4811]: E0122 09:07:49.991185 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:07:49 crc kubenswrapper[4811]: I0122 09:07:49.991213 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:07:49 crc kubenswrapper[4811]: E0122 09:07:49.991320 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:07:49 crc kubenswrapper[4811]: E0122 09:07:49.991419 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:07:50 crc kubenswrapper[4811]: I0122 09:07:50.991106 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:07:50 crc kubenswrapper[4811]: E0122 09:07:50.991314 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4" Jan 22 09:07:51 crc kubenswrapper[4811]: I0122 09:07:51.423697 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kfqgt_f2555861-d1bb-4f21-be4a-165ed9212932/kube-multus/1.log" Jan 22 09:07:51 crc kubenswrapper[4811]: I0122 09:07:51.424194 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kfqgt_f2555861-d1bb-4f21-be4a-165ed9212932/kube-multus/0.log" Jan 22 09:07:51 crc kubenswrapper[4811]: I0122 09:07:51.424242 4811 generic.go:334] "Generic (PLEG): container finished" podID="f2555861-d1bb-4f21-be4a-165ed9212932" containerID="c2e4b2026355f189cbe6a2d70999613aa1b5868f0b38c25e834e42eda1b41088" exitCode=1 Jan 22 09:07:51 crc kubenswrapper[4811]: I0122 09:07:51.424275 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kfqgt" event={"ID":"f2555861-d1bb-4f21-be4a-165ed9212932","Type":"ContainerDied","Data":"c2e4b2026355f189cbe6a2d70999613aa1b5868f0b38c25e834e42eda1b41088"} Jan 22 09:07:51 crc kubenswrapper[4811]: I0122 09:07:51.424311 4811 scope.go:117] "RemoveContainer" containerID="6ececfc4b2e4f2a120e7a3307235d606db00733e1a5631b6e19b5312afc8e8ff" Jan 22 09:07:51 crc kubenswrapper[4811]: I0122 09:07:51.424844 4811 scope.go:117] "RemoveContainer" containerID="c2e4b2026355f189cbe6a2d70999613aa1b5868f0b38c25e834e42eda1b41088" Jan 22 09:07:51 crc kubenswrapper[4811]: E0122 09:07:51.425020 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-kfqgt_openshift-multus(f2555861-d1bb-4f21-be4a-165ed9212932)\"" pod="openshift-multus/multus-kfqgt" podUID="f2555861-d1bb-4f21-be4a-165ed9212932" Jan 22 09:07:51 crc kubenswrapper[4811]: I0122 09:07:51.991375 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:07:51 crc kubenswrapper[4811]: I0122 09:07:51.991408 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:07:51 crc kubenswrapper[4811]: E0122 09:07:51.991470 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:07:51 crc kubenswrapper[4811]: I0122 09:07:51.991591 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:07:51 crc kubenswrapper[4811]: E0122 09:07:51.991815 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:07:51 crc kubenswrapper[4811]: E0122 09:07:51.991826 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:07:52 crc kubenswrapper[4811]: I0122 09:07:52.430299 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kfqgt_f2555861-d1bb-4f21-be4a-165ed9212932/kube-multus/1.log" Jan 22 09:07:52 crc kubenswrapper[4811]: I0122 09:07:52.991902 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:07:52 crc kubenswrapper[4811]: E0122 09:07:52.992007 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4" Jan 22 09:07:53 crc kubenswrapper[4811]: I0122 09:07:53.991221 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:07:53 crc kubenswrapper[4811]: E0122 09:07:53.991321 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:07:53 crc kubenswrapper[4811]: I0122 09:07:53.991370 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:07:53 crc kubenswrapper[4811]: I0122 09:07:53.991432 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:07:53 crc kubenswrapper[4811]: E0122 09:07:53.991528 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:07:53 crc kubenswrapper[4811]: E0122 09:07:53.991731 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:07:53 crc kubenswrapper[4811]: I0122 09:07:53.992400 4811 scope.go:117] "RemoveContainer" containerID="c4ae4bac915c177448079f710c8f3f5cb8ee9368318699d1500a8e2ec2075fb5" Jan 22 09:07:53 crc kubenswrapper[4811]: E0122 09:07:53.992596 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-274vf_openshift-ovn-kubernetes(1cd0f0db-de53-47c0-9b45-2ce8b37392a3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" Jan 22 09:07:54 crc kubenswrapper[4811]: I0122 09:07:54.991481 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:07:54 crc kubenswrapper[4811]: E0122 09:07:54.991580 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4" Jan 22 09:07:55 crc kubenswrapper[4811]: I0122 09:07:55.991482 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:07:55 crc kubenswrapper[4811]: E0122 09:07:55.998232 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:07:55 crc kubenswrapper[4811]: I0122 09:07:55.998261 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:07:55 crc kubenswrapper[4811]: I0122 09:07:55.998358 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:07:55 crc kubenswrapper[4811]: E0122 09:07:55.998447 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:07:55 crc kubenswrapper[4811]: E0122 09:07:55.998708 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:07:56 crc kubenswrapper[4811]: E0122 09:07:56.007736 4811 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 22 09:07:56 crc kubenswrapper[4811]: E0122 09:07:56.064834 4811 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 22 09:07:56 crc kubenswrapper[4811]: I0122 09:07:56.991234 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:07:56 crc kubenswrapper[4811]: E0122 09:07:56.991362 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4" Jan 22 09:07:57 crc kubenswrapper[4811]: I0122 09:07:57.991505 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:07:57 crc kubenswrapper[4811]: I0122 09:07:57.991515 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:07:57 crc kubenswrapper[4811]: I0122 09:07:57.991515 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:07:57 crc kubenswrapper[4811]: E0122 09:07:57.992701 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:07:57 crc kubenswrapper[4811]: E0122 09:07:57.992795 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:07:57 crc kubenswrapper[4811]: E0122 09:07:57.992952 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:07:58 crc kubenswrapper[4811]: I0122 09:07:58.991457 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:07:58 crc kubenswrapper[4811]: E0122 09:07:58.991594 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4" Jan 22 09:07:59 crc kubenswrapper[4811]: I0122 09:07:59.991764 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:07:59 crc kubenswrapper[4811]: I0122 09:07:59.991785 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:07:59 crc kubenswrapper[4811]: E0122 09:07:59.991881 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:07:59 crc kubenswrapper[4811]: E0122 09:07:59.991944 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:07:59 crc kubenswrapper[4811]: I0122 09:07:59.991973 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:07:59 crc kubenswrapper[4811]: E0122 09:07:59.992013 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:08:00 crc kubenswrapper[4811]: I0122 09:08:00.991254 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:08:00 crc kubenswrapper[4811]: E0122 09:08:00.991365 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4" Jan 22 09:08:01 crc kubenswrapper[4811]: E0122 09:08:01.065477 4811 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Jan 22 09:08:01 crc kubenswrapper[4811]: I0122 09:08:01.991450 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:08:01 crc kubenswrapper[4811]: I0122 09:08:01.991450 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:08:01 crc kubenswrapper[4811]: E0122 09:08:01.991603 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:08:01 crc kubenswrapper[4811]: E0122 09:08:01.991550 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:08:01 crc kubenswrapper[4811]: I0122 09:08:01.991461 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:08:01 crc kubenswrapper[4811]: E0122 09:08:01.991689 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:08:02 crc kubenswrapper[4811]: I0122 09:08:02.991678 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:08:02 crc kubenswrapper[4811]: E0122 09:08:02.991781 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4" Jan 22 09:08:03 crc kubenswrapper[4811]: I0122 09:08:03.991655 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:08:03 crc kubenswrapper[4811]: E0122 09:08:03.991753 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:08:03 crc kubenswrapper[4811]: I0122 09:08:03.991668 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:08:03 crc kubenswrapper[4811]: E0122 09:08:03.991825 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:08:03 crc kubenswrapper[4811]: I0122 09:08:03.991671 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:08:03 crc kubenswrapper[4811]: E0122 09:08:03.991889 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:08:04 crc kubenswrapper[4811]: I0122 09:08:04.991727 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:08:04 crc kubenswrapper[4811]: I0122 09:08:04.991966 4811 scope.go:117] "RemoveContainer" containerID="c2e4b2026355f189cbe6a2d70999613aa1b5868f0b38c25e834e42eda1b41088" Jan 22 09:08:04 crc kubenswrapper[4811]: E0122 09:08:04.993501 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4" Jan 22 09:08:04 crc kubenswrapper[4811]: I0122 09:08:04.993961 4811 scope.go:117] "RemoveContainer" containerID="c4ae4bac915c177448079f710c8f3f5cb8ee9368318699d1500a8e2ec2075fb5" Jan 22 09:08:05 crc kubenswrapper[4811]: I0122 09:08:05.466177 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kfqgt_f2555861-d1bb-4f21-be4a-165ed9212932/kube-multus/1.log" Jan 22 09:08:05 crc kubenswrapper[4811]: I0122 09:08:05.466272 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kfqgt" event={"ID":"f2555861-d1bb-4f21-be4a-165ed9212932","Type":"ContainerStarted","Data":"387bd2033a6e44d48e86120e4dee106a7b97b0d54769314cb0c0424e36c0d88e"} Jan 22 09:08:05 crc kubenswrapper[4811]: I0122 09:08:05.468542 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-274vf_1cd0f0db-de53-47c0-9b45-2ce8b37392a3/ovnkube-controller/3.log" Jan 22 09:08:05 crc kubenswrapper[4811]: I0122 09:08:05.470309 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" event={"ID":"1cd0f0db-de53-47c0-9b45-2ce8b37392a3","Type":"ContainerStarted","Data":"5a45c9cc109c4284fd86db1bd72102f9fc2778b85d49eb651d9debff48557f02"} Jan 22 09:08:05 crc kubenswrapper[4811]: I0122 09:08:05.470935 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:08:05 crc kubenswrapper[4811]: I0122 09:08:05.671931 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" podStartSLOduration=107.671896101 podStartE2EDuration="1m47.671896101s" podCreationTimestamp="2026-01-22 09:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:08:05.504166109 +0000 UTC m=+129.826353232" watchObservedRunningTime="2026-01-22 09:08:05.671896101 +0000 UTC m=+129.994083223" Jan 22 09:08:05 crc kubenswrapper[4811]: I0122 09:08:05.675566 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-bhj4l"] Jan 22 09:08:05 crc kubenswrapper[4811]: I0122 09:08:05.675685 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:08:05 crc kubenswrapper[4811]: E0122 09:08:05.675785 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4" Jan 22 09:08:05 crc kubenswrapper[4811]: I0122 09:08:05.992079 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:08:05 crc kubenswrapper[4811]: I0122 09:08:05.992107 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:08:05 crc kubenswrapper[4811]: I0122 09:08:05.992135 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:08:05 crc kubenswrapper[4811]: E0122 09:08:05.992806 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:08:05 crc kubenswrapper[4811]: E0122 09:08:05.992864 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:08:05 crc kubenswrapper[4811]: E0122 09:08:05.992973 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:08:06 crc kubenswrapper[4811]: E0122 09:08:06.065889 4811 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 22 09:08:07 crc kubenswrapper[4811]: I0122 09:08:07.991103 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:08:07 crc kubenswrapper[4811]: I0122 09:08:07.991144 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:08:07 crc kubenswrapper[4811]: I0122 09:08:07.991201 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:08:07 crc kubenswrapper[4811]: E0122 09:08:07.991246 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:08:07 crc kubenswrapper[4811]: I0122 09:08:07.991310 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:08:07 crc kubenswrapper[4811]: E0122 09:08:07.991437 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:08:07 crc kubenswrapper[4811]: E0122 09:08:07.991502 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4" Jan 22 09:08:07 crc kubenswrapper[4811]: E0122 09:08:07.991578 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:08:09 crc kubenswrapper[4811]: I0122 09:08:09.991236 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:08:09 crc kubenswrapper[4811]: I0122 09:08:09.991273 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:08:09 crc kubenswrapper[4811]: E0122 09:08:09.991411 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 09:08:09 crc kubenswrapper[4811]: I0122 09:08:09.991458 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:08:09 crc kubenswrapper[4811]: I0122 09:08:09.991587 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:08:09 crc kubenswrapper[4811]: E0122 09:08:09.991584 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhj4l" podUID="de4b38a0-0c7a-4693-9f92-40fefd6bc9b4" Jan 22 09:08:09 crc kubenswrapper[4811]: E0122 09:08:09.991725 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 09:08:09 crc kubenswrapper[4811]: E0122 09:08:09.991786 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 09:08:11 crc kubenswrapper[4811]: I0122 09:08:11.991300 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:08:11 crc kubenswrapper[4811]: I0122 09:08:11.991367 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:08:11 crc kubenswrapper[4811]: I0122 09:08:11.991482 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:08:11 crc kubenswrapper[4811]: I0122 09:08:11.991480 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:08:11 crc kubenswrapper[4811]: I0122 09:08:11.994045 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 22 09:08:11 crc kubenswrapper[4811]: I0122 09:08:11.994229 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 22 09:08:11 crc kubenswrapper[4811]: I0122 09:08:11.994296 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 22 09:08:11 crc kubenswrapper[4811]: I0122 09:08:11.994306 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 22 09:08:11 crc kubenswrapper[4811]: I0122 09:08:11.994400 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 22 09:08:11 crc kubenswrapper[4811]: I0122 09:08:11.994772 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.630215 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.655455 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-md7dt"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.655850 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-md7dt" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.656142 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-hs4pm"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.656494 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-hs4pm" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.658058 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-rx42r"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.658399 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-rx42r" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.658498 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.659736 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-jhptg"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.660491 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jjjjp"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.661208 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6l9qr"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.661856 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6l9qr" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.662457 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-jhptg" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.662748 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-jjjjp" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.663428 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gmwq7"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.664839 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" Jan 22 09:08:19 crc kubenswrapper[4811]: W0122 09:08:19.666687 4811 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-images": failed to list *v1.ConfigMap: configmaps "machine-api-operator-images" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Jan 22 09:08:19 crc kubenswrapper[4811]: E0122 09:08:19.666720 4811 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-images\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"machine-api-operator-images\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 09:08:19 crc kubenswrapper[4811]: W0122 09:08:19.670135 4811 reflector.go:561] object-"openshift-console"/"default-dockercfg-chnjx": failed to list *v1.Secret: secrets "default-dockercfg-chnjx" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-console": no relationship found between node 'crc' and this object Jan 22 09:08:19 crc kubenswrapper[4811]: E0122 09:08:19.670200 4811 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"default-dockercfg-chnjx\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"default-dockercfg-chnjx\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-console\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 09:08:19 crc kubenswrapper[4811]: W0122 09:08:19.670719 4811 reflector.go:561] object-"openshift-machine-api"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Jan 22 09:08:19 crc kubenswrapper[4811]: E0122 09:08:19.670745 4811 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 09:08:19 crc kubenswrapper[4811]: W0122 09:08:19.671142 4811 reflector.go:561] object-"openshift-machine-api"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Jan 22 09:08:19 crc kubenswrapper[4811]: E0122 09:08:19.671172 4811 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and 
this object" logger="UnhandledError" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.671186 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 22 09:08:19 crc kubenswrapper[4811]: W0122 09:08:19.671234 4811 reflector.go:561] object-"openshift-console"/"console-dockercfg-f62pw": failed to list *v1.Secret: secrets "console-dockercfg-f62pw" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-console": no relationship found between node 'crc' and this object Jan 22 09:08:19 crc kubenswrapper[4811]: E0122 09:08:19.671247 4811 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"console-dockercfg-f62pw\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"console-dockercfg-f62pw\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-console\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 09:08:19 crc kubenswrapper[4811]: W0122 09:08:19.671272 4811 reflector.go:561] object-"openshift-console"/"trusted-ca-bundle": failed to list *v1.ConfigMap: configmaps "trusted-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-console": no relationship found between node 'crc' and this object Jan 22 09:08:19 crc kubenswrapper[4811]: E0122 09:08:19.671282 4811 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"trusted-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-console\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 09:08:19 crc kubenswrapper[4811]: W0122 09:08:19.671330 4811 reflector.go:561] object-"openshift-console"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-console": no relationship found between node 'crc' and this object Jan 22 09:08:19 crc kubenswrapper[4811]: W0122 09:08:19.671338 4811 reflector.go:561] object-"openshift-cluster-samples-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-samples-operator": no relationship found between node 'crc' and this object Jan 22 09:08:19 crc kubenswrapper[4811]: E0122 09:08:19.671343 4811 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-console\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 09:08:19 crc kubenswrapper[4811]: E0122 09:08:19.671361 4811 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource 
\"configmaps\" in API group \"\" in the namespace \"openshift-cluster-samples-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 09:08:19 crc kubenswrapper[4811]: W0122 09:08:19.671424 4811 reflector.go:561] object-"openshift-authentication"/"v4-0-config-user-template-login": failed to list *v1.Secret: secrets "v4-0-config-user-template-login" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Jan 22 09:08:19 crc kubenswrapper[4811]: E0122 09:08:19.671436 4811 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-user-template-login\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 09:08:19 crc kubenswrapper[4811]: W0122 09:08:19.671484 4811 reflector.go:561] object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-samples-operator": no relationship found between node 'crc' and this object Jan 22 09:08:19 crc kubenswrapper[4811]: E0122 09:08:19.671497 4811 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-samples-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 09:08:19 crc kubenswrapper[4811]: W0122 09:08:19.671541 4811 reflector.go:561] object-"openshift-console"/"console-oauth-config": failed to list *v1.Secret: secrets "console-oauth-config" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-console": no relationship found between node 'crc' and this object Jan 22 09:08:19 crc kubenswrapper[4811]: E0122 09:08:19.671551 4811 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"console-oauth-config\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"console-oauth-config\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-console\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 09:08:19 crc kubenswrapper[4811]: W0122 09:08:19.671554 4811 reflector.go:561] object-"openshift-console"/"console-config": failed to list *v1.ConfigMap: configmaps "console-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-console": no relationship found between node 'crc' and this object Jan 22 09:08:19 crc kubenswrapper[4811]: E0122 09:08:19.671573 4811 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"console-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"console-config\" is forbidden: User \"system:node:crc\" cannot list resource 
\"configmaps\" in API group \"\" in the namespace \"openshift-console\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.671642 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.671670 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.671761 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 22 09:08:19 crc kubenswrapper[4811]: W0122 09:08:19.671791 4811 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-tls": failed to list *v1.Secret: secrets "machine-api-operator-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Jan 22 09:08:19 crc kubenswrapper[4811]: W0122 09:08:19.671810 4811 reflector.go:561] object-"openshift-console"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-console": no relationship found between node 'crc' and this object Jan 22 09:08:19 crc kubenswrapper[4811]: E0122 09:08:19.671825 4811 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-console\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 09:08:19 crc kubenswrapper[4811]: E0122 09:08:19.671807 4811 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-api-operator-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 09:08:19 crc kubenswrapper[4811]: W0122 09:08:19.671847 4811 reflector.go:561] object-"openshift-apiserver"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 22 09:08:19 crc kubenswrapper[4811]: E0122 09:08:19.671858 4811 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 09:08:19 crc kubenswrapper[4811]: W0122 09:08:19.671860 4811 reflector.go:561] object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w": failed to list *v1.Secret: secrets "cluster-samples-operator-dockercfg-xpp9w" is forbidden: User "system:node:crc" cannot list resource 
"secrets" in API group "" in the namespace "openshift-cluster-samples-operator": no relationship found between node 'crc' and this object Jan 22 09:08:19 crc kubenswrapper[4811]: E0122 09:08:19.671877 4811 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-xpp9w\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cluster-samples-operator-dockercfg-xpp9w\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-cluster-samples-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 09:08:19 crc kubenswrapper[4811]: W0122 09:08:19.671864 4811 reflector.go:561] object-"openshift-machine-api"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Jan 22 09:08:19 crc kubenswrapper[4811]: E0122 09:08:19.671896 4811 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 09:08:19 crc kubenswrapper[4811]: W0122 09:08:19.671935 4811 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-router-certs": failed to list *v1.Secret: secrets "v4-0-config-system-router-certs" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Jan 22 09:08:19 crc kubenswrapper[4811]: E0122 09:08:19.671944 4811 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-system-router-certs\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 09:08:19 crc kubenswrapper[4811]: W0122 09:08:19.671977 4811 reflector.go:561] object-"openshift-apiserver"/"etcd-client": failed to list *v1.Secret: secrets "etcd-client" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 22 09:08:19 crc kubenswrapper[4811]: E0122 09:08:19.671992 4811 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"etcd-client\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 09:08:19 crc kubenswrapper[4811]: W0122 09:08:19.672025 4811 reflector.go:561] object-"openshift-cluster-samples-operator"/"samples-operator-tls": failed to list *v1.Secret: secrets "samples-operator-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the 
namespace "openshift-cluster-samples-operator": no relationship found between node 'crc' and this object Jan 22 09:08:19 crc kubenswrapper[4811]: E0122 09:08:19.672034 4811 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"samples-operator-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-cluster-samples-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 09:08:19 crc kubenswrapper[4811]: W0122 09:08:19.672041 4811 reflector.go:561] object-"openshift-console"/"service-ca": failed to list *v1.ConfigMap: configmaps "service-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-console": no relationship found between node 'crc' and this object Jan 22 09:08:19 crc kubenswrapper[4811]: E0122 09:08:19.672054 4811 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"service-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"service-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-console\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 09:08:19 crc kubenswrapper[4811]: W0122 09:08:19.672067 4811 reflector.go:561] object-"openshift-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 22 09:08:19 crc kubenswrapper[4811]: E0122 09:08:19.672080 4811 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.672174 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.672268 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4wjd4"] Jan 22 09:08:19 crc kubenswrapper[4811]: W0122 09:08:19.672556 4811 reflector.go:561] object-"openshift-console"/"console-serving-cert": failed to list *v1.Secret: secrets "console-serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-console": no relationship found between node 'crc' and this object Jan 22 09:08:19 crc kubenswrapper[4811]: E0122 09:08:19.672691 4811 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"console-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"console-serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-console\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 
09:08:19.672604 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4wjd4" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.672839 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8fflv"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.673351 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8fflv" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.673548 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-776jv"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.673978 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-776jv" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.674078 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-knkwh"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.674385 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-knkwh" Jan 22 09:08:19 crc kubenswrapper[4811]: W0122 09:08:19.676290 4811 reflector.go:561] object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff": failed to list *v1.Secret: secrets "openshift-apiserver-sa-dockercfg-djjff" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 22 09:08:19 crc kubenswrapper[4811]: W0122 09:08:19.676321 4811 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7": failed to list *v1.Secret: secrets "machine-api-operator-dockercfg-mfbb7" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Jan 22 09:08:19 crc kubenswrapper[4811]: E0122 09:08:19.676345 4811 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-mfbb7\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-api-operator-dockercfg-mfbb7\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 09:08:19 crc kubenswrapper[4811]: E0122 09:08:19.676318 4811 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-djjff\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-apiserver-sa-dockercfg-djjff\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 09:08:19 crc kubenswrapper[4811]: W0122 09:08:19.676483 4811 reflector.go:561] object-"openshift-console"/"oauth-serving-cert": failed to list *v1.ConfigMap: configmaps "oauth-serving-cert" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-console": no 
relationship found between node 'crc' and this object Jan 22 09:08:19 crc kubenswrapper[4811]: E0122 09:08:19.676504 4811 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"oauth-serving-cert\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"oauth-serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-console\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.676551 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 22 09:08:19 crc kubenswrapper[4811]: W0122 09:08:19.676728 4811 reflector.go:561] object-"openshift-apiserver"/"audit-1": failed to list *v1.ConfigMap: configmaps "audit-1" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 22 09:08:19 crc kubenswrapper[4811]: E0122 09:08:19.676752 4811 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"audit-1\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"audit-1\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.677077 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-bssrl"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.677453 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bssrl" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.677676 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-2mphl"] Jan 22 09:08:19 crc kubenswrapper[4811]: W0122 09:08:19.677754 4811 reflector.go:561] object-"openshift-apiserver"/"image-import-ca": failed to list *v1.ConfigMap: configmaps "image-import-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 22 09:08:19 crc kubenswrapper[4811]: E0122 09:08:19.677779 4811 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"image-import-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"image-import-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.677968 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-2mphl" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.678443 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-trtbx"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.679025 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vx8k5"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.679429 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vx8k5" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.679678 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-sxpvs"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.679973 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-sxpvs" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.680076 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-trtbx" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.682471 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nczwv"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.682763 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-4ftvq"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.683076 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4ftvq" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.683205 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.683243 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:19 crc kubenswrapper[4811]: W0122 09:08:19.683870 4811 reflector.go:561] object-"openshift-apiserver"/"etcd-serving-ca": failed to list *v1.ConfigMap: configmaps "etcd-serving-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 22 09:08:19 crc kubenswrapper[4811]: E0122 09:08:19.683950 4811 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-serving-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"etcd-serving-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 09:08:19 crc kubenswrapper[4811]: W0122 09:08:19.684057 4811 reflector.go:561] object-"openshift-apiserver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 22 09:08:19 crc kubenswrapper[4811]: E0122 09:08:19.684088 4811 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.684210 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.685882 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2hm84"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.686241 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-2hm84" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.704360 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j6dqt"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.705070 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j6dqt" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.705536 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.705773 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.722228 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-md7dt"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.722422 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.722715 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.722786 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.722865 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.722890 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.722949 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.722964 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.722999 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.723038 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.723066 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.723121 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.723173 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.723207 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.723269 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.723287 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 
22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.723351 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.723432 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.723454 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.723547 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.723590 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.723668 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.723361 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.723747 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.723771 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.723796 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.723844 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.723871 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.723895 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.723938 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.723850 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.724060 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.724124 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.724139 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.724088 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 
09:08:19.724089 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.724306 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.724918 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nljqw"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.725483 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nljqw" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.729593 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.730300 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.730436 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.730572 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.731841 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.732162 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.732291 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.732398 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.732513 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.733227 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.733720 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.734086 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-rx42r"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.737442 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wvhkk"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.738205 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wvhkk" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.739953 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.740084 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.740214 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.740314 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.742883 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.743027 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.745182 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.745600 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.746978 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.747093 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jjjjp"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.747740 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-2jrk9"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.748484 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2jrk9" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.749133 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.750359 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.750835 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j9b4b"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.751397 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j9b4b" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.751830 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.754387 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/189dea5f-ae62-4140-bd03-14a548c51684-etcd-ca\") pod \"etcd-operator-b45778765-sxpvs\" (UID: \"189dea5f-ae62-4140-bd03-14a548c51684\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sxpvs" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.754414 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db487354-8574-45b6-b639-4d4afc8a7698-trusted-ca\") pod \"console-operator-58897d9998-2mphl\" (UID: \"db487354-8574-45b6-b639-4d4afc8a7698\") " pod="openshift-console-operator/console-operator-58897d9998-2mphl" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.754439 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/799c1556-92e6-43b8-a620-c7211a2ce813-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-vx8k5\" (UID: \"799c1556-92e6-43b8-a620-c7211a2ce813\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vx8k5" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.754455 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rpfg\" (UniqueName: \"kubernetes.io/projected/05a61535-90ca-4127-991a-0b3f1c110f5a-kube-api-access-6rpfg\") pod \"kube-storage-version-migrator-operator-b67b599dd-nljqw\" (UID: \"05a61535-90ca-4127-991a-0b3f1c110f5a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nljqw" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.754485 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8a9d91fa-d887-4128-af43-cfe3cad79784-images\") pod \"machine-api-operator-5694c8668f-rx42r\" (UID: \"8a9d91fa-d887-4128-af43-cfe3cad79784\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rx42r" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.754501 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80529f80-c8f8-4bc5-83a6-eb19a23401f0-config\") pod \"openshift-apiserver-operator-796bbdcf4f-knkwh\" (UID: \"80529f80-c8f8-4bc5-83a6-eb19a23401f0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-knkwh" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.754516 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5pj4\" (UniqueName: \"kubernetes.io/projected/80529f80-c8f8-4bc5-83a6-eb19a23401f0-kube-api-access-b5pj4\") pod \"openshift-apiserver-operator-796bbdcf4f-knkwh\" (UID: \"80529f80-c8f8-4bc5-83a6-eb19a23401f0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-knkwh" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.754536 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c9b6feb2-2c7a-41ea-b160-f461f05057d4-metrics-tls\") pod \"dns-operator-744455d44c-2hm84\" (UID: \"c9b6feb2-2c7a-41ea-b160-f461f05057d4\") " pod="openshift-dns-operator/dns-operator-744455d44c-2hm84" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.754560 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c910b62e-8a7e-4a3d-b8b0-f90384ec999f-etcd-client\") pod \"apiserver-7bbb656c7d-776jv\" (UID: \"c910b62e-8a7e-4a3d-b8b0-f90384ec999f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-776jv" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.754573 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmndl\" (UniqueName: \"kubernetes.io/projected/c910b62e-8a7e-4a3d-b8b0-f90384ec999f-kube-api-access-bmndl\") pod \"apiserver-7bbb656c7d-776jv\" (UID: \"c910b62e-8a7e-4a3d-b8b0-f90384ec999f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-776jv" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.754589 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/13db87c4-7297-4265-879d-07ad09539aba-available-featuregates\") pod \"openshift-config-operator-7777fb866f-bssrl\" (UID: \"13db87c4-7297-4265-879d-07ad09539aba\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bssrl" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.754603 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/799c1556-92e6-43b8-a620-c7211a2ce813-serving-cert\") pod \"controller-manager-879f6c89f-vx8k5\" (UID: \"799c1556-92e6-43b8-a620-c7211a2ce813\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vx8k5" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.754616 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p795n\" (UniqueName: \"kubernetes.io/projected/f58eb1d8-bb02-4af7-857c-138518c5bbf2-kube-api-access-p795n\") pod \"console-f9d7485db-jhptg\" (UID: \"f58eb1d8-bb02-4af7-857c-138518c5bbf2\") " pod="openshift-console/console-f9d7485db-jhptg" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.754652 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/189dea5f-ae62-4140-bd03-14a548c51684-etcd-service-ca\") pod \"etcd-operator-b45778765-sxpvs\" (UID: \"189dea5f-ae62-4140-bd03-14a548c51684\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sxpvs" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.754676 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7285d1e4-79ce-4ada-b15f-b1df68271703-config\") pod \"route-controller-manager-6576b87f9c-8fflv\" (UID: \"7285d1e4-79ce-4ada-b15f-b1df68271703\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8fflv" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.754703 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jmhk\" 
(UniqueName: \"kubernetes.io/projected/13db87c4-7297-4265-879d-07ad09539aba-kube-api-access-7jmhk\") pod \"openshift-config-operator-7777fb866f-bssrl\" (UID: \"13db87c4-7297-4265-879d-07ad09539aba\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bssrl" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.754718 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db487354-8574-45b6-b639-4d4afc8a7698-config\") pod \"console-operator-58897d9998-2mphl\" (UID: \"db487354-8574-45b6-b639-4d4afc8a7698\") " pod="openshift-console-operator/console-operator-58897d9998-2mphl" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.754731 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f58eb1d8-bb02-4af7-857c-138518c5bbf2-oauth-serving-cert\") pod \"console-f9d7485db-jhptg\" (UID: \"f58eb1d8-bb02-4af7-857c-138518c5bbf2\") " pod="openshift-console/console-f9d7485db-jhptg" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.754748 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c0eff054-47bc-4d81-acb9-06ef98b170fa-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-4wjd4\" (UID: \"c0eff054-47bc-4d81-acb9-06ef98b170fa\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4wjd4" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.754763 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80529f80-c8f8-4bc5-83a6-eb19a23401f0-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-knkwh\" (UID: \"80529f80-c8f8-4bc5-83a6-eb19a23401f0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-knkwh" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.754779 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/189dea5f-ae62-4140-bd03-14a548c51684-serving-cert\") pod \"etcd-operator-b45778765-sxpvs\" (UID: \"189dea5f-ae62-4140-bd03-14a548c51684\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sxpvs" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.754803 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hlwc\" (UniqueName: \"kubernetes.io/projected/0d71c966-5dcc-4f11-b21e-8c60ba5b7b57-kube-api-access-4hlwc\") pod \"cluster-samples-operator-665b6dd947-6l9qr\" (UID: \"0d71c966-5dcc-4f11-b21e-8c60ba5b7b57\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6l9qr" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.754818 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c910b62e-8a7e-4a3d-b8b0-f90384ec999f-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-776jv\" (UID: \"c910b62e-8a7e-4a3d-b8b0-f90384ec999f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-776jv" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.754835 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/ed39dbae-8bd1-43b9-aec3-4fb84807d65d-metrics-tls\") pod \"ingress-operator-5b745b69d9-4ftvq\" (UID: \"ed39dbae-8bd1-43b9-aec3-4fb84807d65d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4ftvq" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.754853 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msb22\" (UniqueName: \"kubernetes.io/projected/189dea5f-ae62-4140-bd03-14a548c51684-kube-api-access-msb22\") pod \"etcd-operator-b45778765-sxpvs\" (UID: \"189dea5f-ae62-4140-bd03-14a548c51684\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sxpvs" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.754870 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c910b62e-8a7e-4a3d-b8b0-f90384ec999f-encryption-config\") pod \"apiserver-7bbb656c7d-776jv\" (UID: \"c910b62e-8a7e-4a3d-b8b0-f90384ec999f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-776jv" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.754889 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/189dea5f-ae62-4140-bd03-14a548c51684-etcd-client\") pod \"etcd-operator-b45778765-sxpvs\" (UID: \"189dea5f-ae62-4140-bd03-14a548c51684\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sxpvs" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.754906 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c0eff054-47bc-4d81-acb9-06ef98b170fa-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-4wjd4\" (UID: \"c0eff054-47bc-4d81-acb9-06ef98b170fa\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4wjd4" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.754924 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb448\" (UniqueName: \"kubernetes.io/projected/ab671388-f736-4a76-a421-ba3413830807-kube-api-access-hb448\") pod \"authentication-operator-69f744f599-md7dt\" (UID: \"ab671388-f736-4a76-a421-ba3413830807\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-md7dt" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.754943 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0d71c966-5dcc-4f11-b21e-8c60ba5b7b57-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6l9qr\" (UID: \"0d71c966-5dcc-4f11-b21e-8c60ba5b7b57\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6l9qr" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.754958 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c910b62e-8a7e-4a3d-b8b0-f90384ec999f-audit-dir\") pod \"apiserver-7bbb656c7d-776jv\" (UID: \"c910b62e-8a7e-4a3d-b8b0-f90384ec999f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-776jv" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.754974 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 
09:08:19.754983 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab671388-f736-4a76-a421-ba3413830807-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-md7dt\" (UID: \"ab671388-f736-4a76-a421-ba3413830807\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-md7dt" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.754999 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05a61535-90ca-4127-991a-0b3f1c110f5a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-nljqw\" (UID: \"05a61535-90ca-4127-991a-0b3f1c110f5a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nljqw" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.755015 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05a61535-90ca-4127-991a-0b3f1c110f5a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-nljqw\" (UID: \"05a61535-90ca-4127-991a-0b3f1c110f5a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nljqw" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.755067 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab671388-f736-4a76-a421-ba3413830807-serving-cert\") pod \"authentication-operator-69f744f599-md7dt\" (UID: \"ab671388-f736-4a76-a421-ba3413830807\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-md7dt" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.755088 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a9d91fa-d887-4128-af43-cfe3cad79784-config\") pod \"machine-api-operator-5694c8668f-rx42r\" (UID: \"8a9d91fa-d887-4128-af43-cfe3cad79784\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rx42r" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.755104 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7285d1e4-79ce-4ada-b15f-b1df68271703-client-ca\") pod \"route-controller-manager-6576b87f9c-8fflv\" (UID: \"7285d1e4-79ce-4ada-b15f-b1df68271703\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8fflv" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.755127 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnz52\" (UniqueName: \"kubernetes.io/projected/7285d1e4-79ce-4ada-b15f-b1df68271703-kube-api-access-lnz52\") pod \"route-controller-manager-6576b87f9c-8fflv\" (UID: \"7285d1e4-79ce-4ada-b15f-b1df68271703\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8fflv" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.755205 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c0eff054-47bc-4d81-acb9-06ef98b170fa-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-4wjd4\" (UID: \"c0eff054-47bc-4d81-acb9-06ef98b170fa\") 
" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4wjd4" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.755225 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wchm\" (UniqueName: \"kubernetes.io/projected/ed39dbae-8bd1-43b9-aec3-4fb84807d65d-kube-api-access-2wchm\") pod \"ingress-operator-5b745b69d9-4ftvq\" (UID: \"ed39dbae-8bd1-43b9-aec3-4fb84807d65d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4ftvq" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.755243 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzrjc\" (UniqueName: \"kubernetes.io/projected/8a9d91fa-d887-4128-af43-cfe3cad79784-kube-api-access-rzrjc\") pod \"machine-api-operator-5694c8668f-rx42r\" (UID: \"8a9d91fa-d887-4128-af43-cfe3cad79784\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rx42r" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.755267 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f58eb1d8-bb02-4af7-857c-138518c5bbf2-console-config\") pod \"console-f9d7485db-jhptg\" (UID: \"f58eb1d8-bb02-4af7-857c-138518c5bbf2\") " pod="openshift-console/console-f9d7485db-jhptg" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.755312 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/799c1556-92e6-43b8-a620-c7211a2ce813-client-ca\") pod \"controller-manager-879f6c89f-vx8k5\" (UID: \"799c1556-92e6-43b8-a620-c7211a2ce813\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vx8k5" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.755337 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c910b62e-8a7e-4a3d-b8b0-f90384ec999f-audit-policies\") pod \"apiserver-7bbb656c7d-776jv\" (UID: \"c910b62e-8a7e-4a3d-b8b0-f90384ec999f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-776jv" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.755355 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db487354-8574-45b6-b639-4d4afc8a7698-serving-cert\") pod \"console-operator-58897d9998-2mphl\" (UID: \"db487354-8574-45b6-b639-4d4afc8a7698\") " pod="openshift-console-operator/console-operator-58897d9998-2mphl" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.755380 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ed39dbae-8bd1-43b9-aec3-4fb84807d65d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-4ftvq\" (UID: \"ed39dbae-8bd1-43b9-aec3-4fb84807d65d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4ftvq" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.755403 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f58eb1d8-bb02-4af7-857c-138518c5bbf2-service-ca\") pod \"console-f9d7485db-jhptg\" (UID: \"f58eb1d8-bb02-4af7-857c-138518c5bbf2\") " pod="openshift-console/console-f9d7485db-jhptg" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 
09:08:19.755441 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c910b62e-8a7e-4a3d-b8b0-f90384ec999f-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-776jv\" (UID: \"c910b62e-8a7e-4a3d-b8b0-f90384ec999f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-776jv" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.755461 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ed39dbae-8bd1-43b9-aec3-4fb84807d65d-trusted-ca\") pod \"ingress-operator-5b745b69d9-4ftvq\" (UID: \"ed39dbae-8bd1-43b9-aec3-4fb84807d65d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4ftvq" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.755488 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c910b62e-8a7e-4a3d-b8b0-f90384ec999f-serving-cert\") pod \"apiserver-7bbb656c7d-776jv\" (UID: \"c910b62e-8a7e-4a3d-b8b0-f90384ec999f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-776jv" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.755505 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13db87c4-7297-4265-879d-07ad09539aba-serving-cert\") pod \"openshift-config-operator-7777fb866f-bssrl\" (UID: \"13db87c4-7297-4265-879d-07ad09539aba\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bssrl" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.755522 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn9mc\" (UniqueName: \"kubernetes.io/projected/117d7039-2cd9-4ee9-9272-923cd05c3565-kube-api-access-bn9mc\") pod \"downloads-7954f5f757-hs4pm\" (UID: \"117d7039-2cd9-4ee9-9272-923cd05c3565\") " pod="openshift-console/downloads-7954f5f757-hs4pm" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.755543 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29m59\" (UniqueName: \"kubernetes.io/projected/db487354-8574-45b6-b639-4d4afc8a7698-kube-api-access-29m59\") pod \"console-operator-58897d9998-2mphl\" (UID: \"db487354-8574-45b6-b639-4d4afc8a7698\") " pod="openshift-console-operator/console-operator-58897d9998-2mphl" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.755559 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f58eb1d8-bb02-4af7-857c-138518c5bbf2-console-oauth-config\") pod \"console-f9d7485db-jhptg\" (UID: \"f58eb1d8-bb02-4af7-857c-138518c5bbf2\") " pod="openshift-console/console-f9d7485db-jhptg" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.755577 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/189dea5f-ae62-4140-bd03-14a548c51684-config\") pod \"etcd-operator-b45778765-sxpvs\" (UID: \"189dea5f-ae62-4140-bd03-14a548c51684\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sxpvs" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.755604 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/f58eb1d8-bb02-4af7-857c-138518c5bbf2-console-serving-cert\") pod \"console-f9d7485db-jhptg\" (UID: \"f58eb1d8-bb02-4af7-857c-138518c5bbf2\") " pod="openshift-console/console-f9d7485db-jhptg" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.755645 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slbc2\" (UniqueName: \"kubernetes.io/projected/c9b6feb2-2c7a-41ea-b160-f461f05057d4-kube-api-access-slbc2\") pod \"dns-operator-744455d44c-2hm84\" (UID: \"c9b6feb2-2c7a-41ea-b160-f461f05057d4\") " pod="openshift-dns-operator/dns-operator-744455d44c-2hm84" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.755664 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab671388-f736-4a76-a421-ba3413830807-config\") pod \"authentication-operator-69f744f599-md7dt\" (UID: \"ab671388-f736-4a76-a421-ba3413830807\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-md7dt" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.755681 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab671388-f736-4a76-a421-ba3413830807-service-ca-bundle\") pod \"authentication-operator-69f744f599-md7dt\" (UID: \"ab671388-f736-4a76-a421-ba3413830807\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-md7dt" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.755713 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/799c1556-92e6-43b8-a620-c7211a2ce813-config\") pod \"controller-manager-879f6c89f-vx8k5\" (UID: \"799c1556-92e6-43b8-a620-c7211a2ce813\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vx8k5" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.755731 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr89x\" (UniqueName: \"kubernetes.io/projected/799c1556-92e6-43b8-a620-c7211a2ce813-kube-api-access-kr89x\") pod \"controller-manager-879f6c89f-vx8k5\" (UID: \"799c1556-92e6-43b8-a620-c7211a2ce813\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vx8k5" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.755749 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7285d1e4-79ce-4ada-b15f-b1df68271703-serving-cert\") pod \"route-controller-manager-6576b87f9c-8fflv\" (UID: \"7285d1e4-79ce-4ada-b15f-b1df68271703\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8fflv" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.755800 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-742d7\" (UniqueName: \"kubernetes.io/projected/c0eff054-47bc-4d81-acb9-06ef98b170fa-kube-api-access-742d7\") pod \"cluster-image-registry-operator-dc59b4c8b-4wjd4\" (UID: \"c0eff054-47bc-4d81-acb9-06ef98b170fa\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4wjd4" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.755820 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8a9d91fa-d887-4128-af43-cfe3cad79784-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-rx42r\" (UID: \"8a9d91fa-d887-4128-af43-cfe3cad79784\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rx42r" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.755838 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f58eb1d8-bb02-4af7-857c-138518c5bbf2-trusted-ca-bundle\") pod \"console-f9d7485db-jhptg\" (UID: \"f58eb1d8-bb02-4af7-857c-138518c5bbf2\") " pod="openshift-console/console-f9d7485db-jhptg" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.768145 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kj6fk"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.770389 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-l8phl"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.771831 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-l8phl" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.772167 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kj6fk" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.782163 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-78zjg"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.783178 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-78zjg" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.783380 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.783888 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.789392 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.798906 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.799867 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qnxwq"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.800670 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qnxwq" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.801610 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-zwkzn"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.802517 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9pxnj"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.802959 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-fcb4z"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.803236 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9pxnj" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.803368 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zwkzn" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.803684 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fcb4z" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.803911 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zzx6v"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.804260 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zzx6v" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.804463 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wcwwm"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.804837 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wcwwm" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.805368 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lzdbl"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.805859 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lzdbl" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.806863 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gmwq7"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.810556 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6l9qr"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.812323 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4wjd4"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.813049 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-6jdmv"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.813682 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-6jdmv" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.814979 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-sxpvs"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.816606 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5w8tw"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.817197 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vx8k5"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.817262 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5w8tw" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.817862 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nczwv"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.819656 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2hm84"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.820569 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-776jv"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.821592 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-5w7fh"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.822056 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5w7fh" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.822555 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484540-kbb7d"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.822971 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484540-kbb7d" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.823120 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.824138 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8fflv"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.825146 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-9624f"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.825524 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-9624f" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.826177 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wvhkk"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.827689 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-2mphl"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.828784 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-hs4pm"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.829813 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j9b4b"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.830019 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-knkwh"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.830936 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-jhptg"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.834081 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zzx6v"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.835188 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nljqw"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.836699 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kj6fk"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.837763 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.838104 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qnxwq"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.839472 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lzdbl"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.840610 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-2jrk9"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.841773 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-bssrl"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.842692 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-fcb4z"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.843660 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9pxnj"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.844836 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-4ftvq"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.845471 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j6dqt"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.846141 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-78zjg"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.846902 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-zwkzn"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.848224 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484540-kbb7d"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.853649 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-6jdmv"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.855071 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-8fxth"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.857884 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jmhk\" (UniqueName: \"kubernetes.io/projected/13db87c4-7297-4265-879d-07ad09539aba-kube-api-access-7jmhk\") pod \"openshift-config-operator-7777fb866f-bssrl\" (UID: \"13db87c4-7297-4265-879d-07ad09539aba\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bssrl" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.857956 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db487354-8574-45b6-b639-4d4afc8a7698-config\") pod \"console-operator-58897d9998-2mphl\" (UID: \"db487354-8574-45b6-b639-4d4afc8a7698\") " pod="openshift-console-operator/console-operator-58897d9998-2mphl" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.857981 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f58eb1d8-bb02-4af7-857c-138518c5bbf2-oauth-serving-cert\") pod \"console-f9d7485db-jhptg\" (UID: \"f58eb1d8-bb02-4af7-857c-138518c5bbf2\") " pod="openshift-console/console-f9d7485db-jhptg" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.858139 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c0eff054-47bc-4d81-acb9-06ef98b170fa-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-4wjd4\" (UID: \"c0eff054-47bc-4d81-acb9-06ef98b170fa\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4wjd4" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.858279 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80529f80-c8f8-4bc5-83a6-eb19a23401f0-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-knkwh\" (UID: \"80529f80-c8f8-4bc5-83a6-eb19a23401f0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-knkwh" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.858308 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/189dea5f-ae62-4140-bd03-14a548c51684-serving-cert\") pod \"etcd-operator-b45778765-sxpvs\" (UID: \"189dea5f-ae62-4140-bd03-14a548c51684\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-sxpvs" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.858388 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hlwc\" (UniqueName: \"kubernetes.io/projected/0d71c966-5dcc-4f11-b21e-8c60ba5b7b57-kube-api-access-4hlwc\") pod \"cluster-samples-operator-665b6dd947-6l9qr\" (UID: \"0d71c966-5dcc-4f11-b21e-8c60ba5b7b57\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6l9qr" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.858409 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c910b62e-8a7e-4a3d-b8b0-f90384ec999f-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-776jv\" (UID: \"c910b62e-8a7e-4a3d-b8b0-f90384ec999f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-776jv" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.858430 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msb22\" (UniqueName: \"kubernetes.io/projected/189dea5f-ae62-4140-bd03-14a548c51684-kube-api-access-msb22\") pod \"etcd-operator-b45778765-sxpvs\" (UID: \"189dea5f-ae62-4140-bd03-14a548c51684\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sxpvs" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.858517 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ed39dbae-8bd1-43b9-aec3-4fb84807d65d-metrics-tls\") pod \"ingress-operator-5b745b69d9-4ftvq\" (UID: \"ed39dbae-8bd1-43b9-aec3-4fb84807d65d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4ftvq" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.858570 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c910b62e-8a7e-4a3d-b8b0-f90384ec999f-encryption-config\") pod \"apiserver-7bbb656c7d-776jv\" (UID: \"c910b62e-8a7e-4a3d-b8b0-f90384ec999f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-776jv" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.858608 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/189dea5f-ae62-4140-bd03-14a548c51684-etcd-client\") pod \"etcd-operator-b45778765-sxpvs\" (UID: \"189dea5f-ae62-4140-bd03-14a548c51684\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sxpvs" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.858650 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c0eff054-47bc-4d81-acb9-06ef98b170fa-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-4wjd4\" (UID: \"c0eff054-47bc-4d81-acb9-06ef98b170fa\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4wjd4" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.858676 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0d71c966-5dcc-4f11-b21e-8c60ba5b7b57-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6l9qr\" (UID: \"0d71c966-5dcc-4f11-b21e-8c60ba5b7b57\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6l9qr" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.858732 4811 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-hb448\" (UniqueName: \"kubernetes.io/projected/ab671388-f736-4a76-a421-ba3413830807-kube-api-access-hb448\") pod \"authentication-operator-69f744f599-md7dt\" (UID: \"ab671388-f736-4a76-a421-ba3413830807\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-md7dt" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.858822 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c910b62e-8a7e-4a3d-b8b0-f90384ec999f-audit-dir\") pod \"apiserver-7bbb656c7d-776jv\" (UID: \"c910b62e-8a7e-4a3d-b8b0-f90384ec999f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-776jv" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.858848 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab671388-f736-4a76-a421-ba3413830807-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-md7dt\" (UID: \"ab671388-f736-4a76-a421-ba3413830807\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-md7dt" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.858985 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05a61535-90ca-4127-991a-0b3f1c110f5a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-nljqw\" (UID: \"05a61535-90ca-4127-991a-0b3f1c110f5a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nljqw" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.859031 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05a61535-90ca-4127-991a-0b3f1c110f5a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-nljqw\" (UID: \"05a61535-90ca-4127-991a-0b3f1c110f5a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nljqw" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.859135 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7285d1e4-79ce-4ada-b15f-b1df68271703-client-ca\") pod \"route-controller-manager-6576b87f9c-8fflv\" (UID: \"7285d1e4-79ce-4ada-b15f-b1df68271703\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8fflv" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.859167 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnz52\" (UniqueName: \"kubernetes.io/projected/7285d1e4-79ce-4ada-b15f-b1df68271703-kube-api-access-lnz52\") pod \"route-controller-manager-6576b87f9c-8fflv\" (UID: \"7285d1e4-79ce-4ada-b15f-b1df68271703\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8fflv" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.859215 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab671388-f736-4a76-a421-ba3413830807-serving-cert\") pod \"authentication-operator-69f744f599-md7dt\" (UID: \"ab671388-f736-4a76-a421-ba3413830807\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-md7dt" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.859292 4811 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a9d91fa-d887-4128-af43-cfe3cad79784-config\") pod \"machine-api-operator-5694c8668f-rx42r\" (UID: \"8a9d91fa-d887-4128-af43-cfe3cad79784\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rx42r" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.859339 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-8fxth" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.859352 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzrjc\" (UniqueName: \"kubernetes.io/projected/8a9d91fa-d887-4128-af43-cfe3cad79784-kube-api-access-rzrjc\") pod \"machine-api-operator-5694c8668f-rx42r\" (UID: \"8a9d91fa-d887-4128-af43-cfe3cad79784\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rx42r" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.859401 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f58eb1d8-bb02-4af7-857c-138518c5bbf2-console-config\") pod \"console-f9d7485db-jhptg\" (UID: \"f58eb1d8-bb02-4af7-857c-138518c5bbf2\") " pod="openshift-console/console-f9d7485db-jhptg" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.859421 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c0eff054-47bc-4d81-acb9-06ef98b170fa-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-4wjd4\" (UID: \"c0eff054-47bc-4d81-acb9-06ef98b170fa\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4wjd4" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.859485 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wchm\" (UniqueName: \"kubernetes.io/projected/ed39dbae-8bd1-43b9-aec3-4fb84807d65d-kube-api-access-2wchm\") pod \"ingress-operator-5b745b69d9-4ftvq\" (UID: \"ed39dbae-8bd1-43b9-aec3-4fb84807d65d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4ftvq" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.859524 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/799c1556-92e6-43b8-a620-c7211a2ce813-client-ca\") pod \"controller-manager-879f6c89f-vx8k5\" (UID: \"799c1556-92e6-43b8-a620-c7211a2ce813\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vx8k5" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.859551 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db487354-8574-45b6-b639-4d4afc8a7698-serving-cert\") pod \"console-operator-58897d9998-2mphl\" (UID: \"db487354-8574-45b6-b639-4d4afc8a7698\") " pod="openshift-console-operator/console-operator-58897d9998-2mphl" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.859686 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ed39dbae-8bd1-43b9-aec3-4fb84807d65d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-4ftvq\" (UID: \"ed39dbae-8bd1-43b9-aec3-4fb84807d65d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4ftvq" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.859820 4811 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f58eb1d8-bb02-4af7-857c-138518c5bbf2-service-ca\") pod \"console-f9d7485db-jhptg\" (UID: \"f58eb1d8-bb02-4af7-857c-138518c5bbf2\") " pod="openshift-console/console-f9d7485db-jhptg" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.859917 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c910b62e-8a7e-4a3d-b8b0-f90384ec999f-audit-policies\") pod \"apiserver-7bbb656c7d-776jv\" (UID: \"c910b62e-8a7e-4a3d-b8b0-f90384ec999f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-776jv" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.859923 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-v2nnf"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.859944 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c910b62e-8a7e-4a3d-b8b0-f90384ec999f-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-776jv\" (UID: \"c910b62e-8a7e-4a3d-b8b0-f90384ec999f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-776jv" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.859993 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ed39dbae-8bd1-43b9-aec3-4fb84807d65d-trusted-ca\") pod \"ingress-operator-5b745b69d9-4ftvq\" (UID: \"ed39dbae-8bd1-43b9-aec3-4fb84807d65d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4ftvq" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.860025 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c910b62e-8a7e-4a3d-b8b0-f90384ec999f-serving-cert\") pod \"apiserver-7bbb656c7d-776jv\" (UID: \"c910b62e-8a7e-4a3d-b8b0-f90384ec999f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-776jv" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.860043 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13db87c4-7297-4265-879d-07ad09539aba-serving-cert\") pod \"openshift-config-operator-7777fb866f-bssrl\" (UID: \"13db87c4-7297-4265-879d-07ad09539aba\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bssrl" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.860089 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bn9mc\" (UniqueName: \"kubernetes.io/projected/117d7039-2cd9-4ee9-9272-923cd05c3565-kube-api-access-bn9mc\") pod \"downloads-7954f5f757-hs4pm\" (UID: \"117d7039-2cd9-4ee9-9272-923cd05c3565\") " pod="openshift-console/downloads-7954f5f757-hs4pm" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.860164 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29m59\" (UniqueName: \"kubernetes.io/projected/db487354-8574-45b6-b639-4d4afc8a7698-kube-api-access-29m59\") pod \"console-operator-58897d9998-2mphl\" (UID: \"db487354-8574-45b6-b639-4d4afc8a7698\") " pod="openshift-console-operator/console-operator-58897d9998-2mphl" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.860223 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f58eb1d8-bb02-4af7-857c-138518c5bbf2-console-serving-cert\") pod \"console-f9d7485db-jhptg\" (UID: \"f58eb1d8-bb02-4af7-857c-138518c5bbf2\") " pod="openshift-console/console-f9d7485db-jhptg" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.860268 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f58eb1d8-bb02-4af7-857c-138518c5bbf2-console-oauth-config\") pod \"console-f9d7485db-jhptg\" (UID: \"f58eb1d8-bb02-4af7-857c-138518c5bbf2\") " pod="openshift-console/console-f9d7485db-jhptg" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.860289 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/189dea5f-ae62-4140-bd03-14a548c51684-config\") pod \"etcd-operator-b45778765-sxpvs\" (UID: \"189dea5f-ae62-4140-bd03-14a548c51684\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sxpvs" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.860352 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slbc2\" (UniqueName: \"kubernetes.io/projected/c9b6feb2-2c7a-41ea-b160-f461f05057d4-kube-api-access-slbc2\") pod \"dns-operator-744455d44c-2hm84\" (UID: \"c9b6feb2-2c7a-41ea-b160-f461f05057d4\") " pod="openshift-dns-operator/dns-operator-744455d44c-2hm84" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.860435 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab671388-f736-4a76-a421-ba3413830807-config\") pod \"authentication-operator-69f744f599-md7dt\" (UID: \"ab671388-f736-4a76-a421-ba3413830807\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-md7dt" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.860461 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab671388-f736-4a76-a421-ba3413830807-service-ca-bundle\") pod \"authentication-operator-69f744f599-md7dt\" (UID: \"ab671388-f736-4a76-a421-ba3413830807\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-md7dt" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.860547 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/799c1556-92e6-43b8-a620-c7211a2ce813-config\") pod \"controller-manager-879f6c89f-vx8k5\" (UID: \"799c1556-92e6-43b8-a620-c7211a2ce813\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vx8k5" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.861855 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db487354-8574-45b6-b639-4d4afc8a7698-config\") pod \"console-operator-58897d9998-2mphl\" (UID: \"db487354-8574-45b6-b639-4d4afc8a7698\") " pod="openshift-console-operator/console-operator-58897d9998-2mphl" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.860603 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr89x\" (UniqueName: \"kubernetes.io/projected/799c1556-92e6-43b8-a620-c7211a2ce813-kube-api-access-kr89x\") pod \"controller-manager-879f6c89f-vx8k5\" (UID: \"799c1556-92e6-43b8-a620-c7211a2ce813\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vx8k5" Jan 22 09:08:19 crc kubenswrapper[4811]: 
I0122 09:08:19.861962 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7285d1e4-79ce-4ada-b15f-b1df68271703-serving-cert\") pod \"route-controller-manager-6576b87f9c-8fflv\" (UID: \"7285d1e4-79ce-4ada-b15f-b1df68271703\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8fflv" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.862028 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-742d7\" (UniqueName: \"kubernetes.io/projected/c0eff054-47bc-4d81-acb9-06ef98b170fa-kube-api-access-742d7\") pod \"cluster-image-registry-operator-dc59b4c8b-4wjd4\" (UID: \"c0eff054-47bc-4d81-acb9-06ef98b170fa\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4wjd4" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.862115 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8a9d91fa-d887-4128-af43-cfe3cad79784-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-rx42r\" (UID: \"8a9d91fa-d887-4128-af43-cfe3cad79784\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rx42r" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.862139 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f58eb1d8-bb02-4af7-857c-138518c5bbf2-trusted-ca-bundle\") pod \"console-f9d7485db-jhptg\" (UID: \"f58eb1d8-bb02-4af7-857c-138518c5bbf2\") " pod="openshift-console/console-f9d7485db-jhptg" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.864117 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c910b62e-8a7e-4a3d-b8b0-f90384ec999f-audit-policies\") pod \"apiserver-7bbb656c7d-776jv\" (UID: \"c910b62e-8a7e-4a3d-b8b0-f90384ec999f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-776jv" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.862446 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/189dea5f-ae62-4140-bd03-14a548c51684-etcd-ca\") pod \"etcd-operator-b45778765-sxpvs\" (UID: \"189dea5f-ae62-4140-bd03-14a548c51684\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sxpvs" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.864246 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab671388-f736-4a76-a421-ba3413830807-service-ca-bundle\") pod \"authentication-operator-69f744f599-md7dt\" (UID: \"ab671388-f736-4a76-a421-ba3413830807\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-md7dt" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.864263 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db487354-8574-45b6-b639-4d4afc8a7698-trusted-ca\") pod \"console-operator-58897d9998-2mphl\" (UID: \"db487354-8574-45b6-b639-4d4afc8a7698\") " pod="openshift-console-operator/console-operator-58897d9998-2mphl" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.864289 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/799c1556-92e6-43b8-a620-c7211a2ce813-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-vx8k5\" (UID: \"799c1556-92e6-43b8-a620-c7211a2ce813\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vx8k5" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.864306 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rpfg\" (UniqueName: \"kubernetes.io/projected/05a61535-90ca-4127-991a-0b3f1c110f5a-kube-api-access-6rpfg\") pod \"kube-storage-version-migrator-operator-b67b599dd-nljqw\" (UID: \"05a61535-90ca-4127-991a-0b3f1c110f5a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nljqw" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.864337 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8a9d91fa-d887-4128-af43-cfe3cad79784-images\") pod \"machine-api-operator-5694c8668f-rx42r\" (UID: \"8a9d91fa-d887-4128-af43-cfe3cad79784\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rx42r" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.864377 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80529f80-c8f8-4bc5-83a6-eb19a23401f0-config\") pod \"openshift-apiserver-operator-796bbdcf4f-knkwh\" (UID: \"80529f80-c8f8-4bc5-83a6-eb19a23401f0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-knkwh" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.864393 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5pj4\" (UniqueName: \"kubernetes.io/projected/80529f80-c8f8-4bc5-83a6-eb19a23401f0-kube-api-access-b5pj4\") pod \"openshift-apiserver-operator-796bbdcf4f-knkwh\" (UID: \"80529f80-c8f8-4bc5-83a6-eb19a23401f0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-knkwh" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.864436 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c9b6feb2-2c7a-41ea-b160-f461f05057d4-metrics-tls\") pod \"dns-operator-744455d44c-2hm84\" (UID: \"c9b6feb2-2c7a-41ea-b160-f461f05057d4\") " pod="openshift-dns-operator/dns-operator-744455d44c-2hm84" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.864480 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmndl\" (UniqueName: \"kubernetes.io/projected/c910b62e-8a7e-4a3d-b8b0-f90384ec999f-kube-api-access-bmndl\") pod \"apiserver-7bbb656c7d-776jv\" (UID: \"c910b62e-8a7e-4a3d-b8b0-f90384ec999f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-776jv" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.864499 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/13db87c4-7297-4265-879d-07ad09539aba-available-featuregates\") pod \"openshift-config-operator-7777fb866f-bssrl\" (UID: \"13db87c4-7297-4265-879d-07ad09539aba\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bssrl" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.864536 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/799c1556-92e6-43b8-a620-c7211a2ce813-serving-cert\") pod 
\"controller-manager-879f6c89f-vx8k5\" (UID: \"799c1556-92e6-43b8-a620-c7211a2ce813\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vx8k5" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.864554 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c910b62e-8a7e-4a3d-b8b0-f90384ec999f-etcd-client\") pod \"apiserver-7bbb656c7d-776jv\" (UID: \"c910b62e-8a7e-4a3d-b8b0-f90384ec999f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-776jv" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.864716 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/189dea5f-ae62-4140-bd03-14a548c51684-etcd-ca\") pod \"etcd-operator-b45778765-sxpvs\" (UID: \"189dea5f-ae62-4140-bd03-14a548c51684\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sxpvs" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.864571 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/189dea5f-ae62-4140-bd03-14a548c51684-etcd-service-ca\") pod \"etcd-operator-b45778765-sxpvs\" (UID: \"189dea5f-ae62-4140-bd03-14a548c51684\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sxpvs" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.864926 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p795n\" (UniqueName: \"kubernetes.io/projected/f58eb1d8-bb02-4af7-857c-138518c5bbf2-kube-api-access-p795n\") pod \"console-f9d7485db-jhptg\" (UID: \"f58eb1d8-bb02-4af7-857c-138518c5bbf2\") " pod="openshift-console/console-f9d7485db-jhptg" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.864961 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7285d1e4-79ce-4ada-b15f-b1df68271703-config\") pod \"route-controller-manager-6576b87f9c-8fflv\" (UID: \"7285d1e4-79ce-4ada-b15f-b1df68271703\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8fflv" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.866315 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7285d1e4-79ce-4ada-b15f-b1df68271703-config\") pod \"route-controller-manager-6576b87f9c-8fflv\" (UID: \"7285d1e4-79ce-4ada-b15f-b1df68271703\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8fflv" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.866358 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db487354-8574-45b6-b639-4d4afc8a7698-trusted-ca\") pod \"console-operator-58897d9998-2mphl\" (UID: \"db487354-8574-45b6-b639-4d4afc8a7698\") " pod="openshift-console-operator/console-operator-58897d9998-2mphl" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.866677 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/189dea5f-ae62-4140-bd03-14a548c51684-etcd-service-ca\") pod \"etcd-operator-b45778765-sxpvs\" (UID: \"189dea5f-ae62-4140-bd03-14a548c51684\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sxpvs" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.866682 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" 
(UniqueName: \"kubernetes.io/empty-dir/13db87c4-7297-4265-879d-07ad09539aba-available-featuregates\") pod \"openshift-config-operator-7777fb866f-bssrl\" (UID: \"13db87c4-7297-4265-879d-07ad09539aba\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bssrl" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.866801 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab671388-f736-4a76-a421-ba3413830807-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-md7dt\" (UID: \"ab671388-f736-4a76-a421-ba3413830807\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-md7dt" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.867258 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c910b62e-8a7e-4a3d-b8b0-f90384ec999f-audit-dir\") pod \"apiserver-7bbb656c7d-776jv\" (UID: \"c910b62e-8a7e-4a3d-b8b0-f90384ec999f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-776jv" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.867495 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/799c1556-92e6-43b8-a620-c7211a2ce813-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-vx8k5\" (UID: \"799c1556-92e6-43b8-a620-c7211a2ce813\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vx8k5" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.867640 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab671388-f736-4a76-a421-ba3413830807-config\") pod \"authentication-operator-69f744f599-md7dt\" (UID: \"ab671388-f736-4a76-a421-ba3413830807\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-md7dt" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.868872 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80529f80-c8f8-4bc5-83a6-eb19a23401f0-config\") pod \"openshift-apiserver-operator-796bbdcf4f-knkwh\" (UID: \"80529f80-c8f8-4bc5-83a6-eb19a23401f0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-knkwh" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.870066 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/799c1556-92e6-43b8-a620-c7211a2ce813-serving-cert\") pod \"controller-manager-879f6c89f-vx8k5\" (UID: \"799c1556-92e6-43b8-a620-c7211a2ce813\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vx8k5" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.870391 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7285d1e4-79ce-4ada-b15f-b1df68271703-client-ca\") pod \"route-controller-manager-6576b87f9c-8fflv\" (UID: \"7285d1e4-79ce-4ada-b15f-b1df68271703\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8fflv" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.870863 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/189dea5f-ae62-4140-bd03-14a548c51684-serving-cert\") pod \"etcd-operator-b45778765-sxpvs\" (UID: \"189dea5f-ae62-4140-bd03-14a548c51684\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-sxpvs" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.870920 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/189dea5f-ae62-4140-bd03-14a548c51684-config\") pod \"etcd-operator-b45778765-sxpvs\" (UID: \"189dea5f-ae62-4140-bd03-14a548c51684\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sxpvs" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.871535 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80529f80-c8f8-4bc5-83a6-eb19a23401f0-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-knkwh\" (UID: \"80529f80-c8f8-4bc5-83a6-eb19a23401f0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-knkwh" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.871837 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c910b62e-8a7e-4a3d-b8b0-f90384ec999f-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-776jv\" (UID: \"c910b62e-8a7e-4a3d-b8b0-f90384ec999f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-776jv" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.873061 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c910b62e-8a7e-4a3d-b8b0-f90384ec999f-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-776jv\" (UID: \"c910b62e-8a7e-4a3d-b8b0-f90384ec999f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-776jv" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.873608 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db487354-8574-45b6-b639-4d4afc8a7698-serving-cert\") pod \"console-operator-58897d9998-2mphl\" (UID: \"db487354-8574-45b6-b639-4d4afc8a7698\") " pod="openshift-console-operator/console-operator-58897d9998-2mphl" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.874201 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-9624f"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.874620 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-v2nnf"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.874785 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5w8tw"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.874873 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wcwwm"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.875243 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-5w7fh"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.874289 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-v2nnf" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.874282 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/189dea5f-ae62-4140-bd03-14a548c51684-etcd-client\") pod \"etcd-operator-b45778765-sxpvs\" (UID: \"189dea5f-ae62-4140-bd03-14a548c51684\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sxpvs" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.877201 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.878038 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab671388-f736-4a76-a421-ba3413830807-serving-cert\") pod \"authentication-operator-69f744f599-md7dt\" (UID: \"ab671388-f736-4a76-a421-ba3413830807\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-md7dt" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.878494 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/799c1556-92e6-43b8-a620-c7211a2ce813-config\") pod \"controller-manager-879f6c89f-vx8k5\" (UID: \"799c1556-92e6-43b8-a620-c7211a2ce813\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vx8k5" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.879095 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7285d1e4-79ce-4ada-b15f-b1df68271703-serving-cert\") pod \"route-controller-manager-6576b87f9c-8fflv\" (UID: \"7285d1e4-79ce-4ada-b15f-b1df68271703\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8fflv" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.880884 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/799c1556-92e6-43b8-a620-c7211a2ce813-client-ca\") pod \"controller-manager-879f6c89f-vx8k5\" (UID: \"799c1556-92e6-43b8-a620-c7211a2ce813\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vx8k5" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.880944 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c910b62e-8a7e-4a3d-b8b0-f90384ec999f-encryption-config\") pod \"apiserver-7bbb656c7d-776jv\" (UID: \"c910b62e-8a7e-4a3d-b8b0-f90384ec999f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-776jv" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.881885 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.881936 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13db87c4-7297-4265-879d-07ad09539aba-serving-cert\") pod \"openshift-config-operator-7777fb866f-bssrl\" (UID: \"13db87c4-7297-4265-879d-07ad09539aba\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bssrl" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.883660 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ed39dbae-8bd1-43b9-aec3-4fb84807d65d-trusted-ca\") pod 
\"ingress-operator-5b745b69d9-4ftvq\" (UID: \"ed39dbae-8bd1-43b9-aec3-4fb84807d65d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4ftvq" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.887877 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c910b62e-8a7e-4a3d-b8b0-f90384ec999f-etcd-client\") pod \"apiserver-7bbb656c7d-776jv\" (UID: \"c910b62e-8a7e-4a3d-b8b0-f90384ec999f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-776jv" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.888008 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c0eff054-47bc-4d81-acb9-06ef98b170fa-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-4wjd4\" (UID: \"c0eff054-47bc-4d81-acb9-06ef98b170fa\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4wjd4" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.888118 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c0eff054-47bc-4d81-acb9-06ef98b170fa-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-4wjd4\" (UID: \"c0eff054-47bc-4d81-acb9-06ef98b170fa\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4wjd4" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.889541 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-vrdwc"] Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.890006 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c910b62e-8a7e-4a3d-b8b0-f90384ec999f-serving-cert\") pod \"apiserver-7bbb656c7d-776jv\" (UID: \"c910b62e-8a7e-4a3d-b8b0-f90384ec999f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-776jv" Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.890535 4811 util.go:30] "No sandbox for pod can be found. 
Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.893181 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-vrdwc"]
Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.901760 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.906763 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ed39dbae-8bd1-43b9-aec3-4fb84807d65d-metrics-tls\") pod \"ingress-operator-5b745b69d9-4ftvq\" (UID: \"ed39dbae-8bd1-43b9-aec3-4fb84807d65d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4ftvq"
Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.917561 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.937122 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.958223 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.977309 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 22 09:08:19 crc kubenswrapper[4811]: I0122 09:08:19.998782 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.017070 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.029899 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c9b6feb2-2c7a-41ea-b160-f461f05057d4-metrics-tls\") pod \"dns-operator-744455d44c-2hm84\" (UID: \"c9b6feb2-2c7a-41ea-b160-f461f05057d4\") " pod="openshift-dns-operator/dns-operator-744455d44c-2hm84"
Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.037216 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.057258 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.078115 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.098040 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.118414 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.137352 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.157560 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.177607 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.197100 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.217467 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.224168 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05a61535-90ca-4127-991a-0b3f1c110f5a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-nljqw\" (UID: \"05a61535-90ca-4127-991a-0b3f1c110f5a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nljqw"
Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.237511 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.246051 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05a61535-90ca-4127-991a-0b3f1c110f5a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-nljqw\" (UID: \"05a61535-90ca-4127-991a-0b3f1c110f5a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nljqw"
Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.257449 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.277389 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.337619 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.357900 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.377970 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.397901 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.417845 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.437644 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.457741 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.477248 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.497340 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.517818 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.536930 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.558085 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.577123 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.597135 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.617170 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.637208 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.656957 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.678007 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.697389 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.717338 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.736900 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.757222 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.777730 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.797107 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.816742 4811 request.go:700] Waited for 1.015747394s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serviceaccount-dockercfg-rq7zk&limit=500&resourceVersion=0
Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.817472 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.837007 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.857676 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 22 09:08:20 crc kubenswrapper[4811]: E0122 09:08:20.863665 4811 secret.go:188] Couldn't get secret openshift-console/console-serving-cert: failed to sync secret cache: timed out waiting for the condition
Jan 22 09:08:20 crc kubenswrapper[4811]: E0122 09:08:20.863721 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f58eb1d8-bb02-4af7-857c-138518c5bbf2-console-serving-cert podName:f58eb1d8-bb02-4af7-857c-138518c5bbf2 nodeName:}" failed. No retries permitted until 2026-01-22 09:08:21.363703937 +0000 UTC m=+145.685891060 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/f58eb1d8-bb02-4af7-857c-138518c5bbf2-console-serving-cert") pod "console-f9d7485db-jhptg" (UID: "f58eb1d8-bb02-4af7-857c-138518c5bbf2") : failed to sync secret cache: timed out waiting for the condition
Jan 22 09:08:20 crc kubenswrapper[4811]: E0122 09:08:20.863835 4811 secret.go:188] Couldn't get secret openshift-console/console-oauth-config: failed to sync secret cache: timed out waiting for the condition
Jan 22 09:08:20 crc kubenswrapper[4811]: E0122 09:08:20.863875 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f58eb1d8-bb02-4af7-857c-138518c5bbf2-console-oauth-config podName:f58eb1d8-bb02-4af7-857c-138518c5bbf2 nodeName:}" failed. No retries permitted until 2026-01-22 09:08:21.363865551 +0000 UTC m=+145.686052674 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/f58eb1d8-bb02-4af7-857c-138518c5bbf2-console-oauth-config") pod "console-f9d7485db-jhptg" (UID: "f58eb1d8-bb02-4af7-857c-138518c5bbf2") : failed to sync secret cache: timed out waiting for the condition
Jan 22 09:08:20 crc kubenswrapper[4811]: E0122 09:08:20.863875 4811 secret.go:188] Couldn't get secret openshift-machine-api/machine-api-operator-tls: failed to sync secret cache: timed out waiting for the condition
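The request.go:700 line is client-go's own token-bucket rate limiter at work, "not priority and fairness" as the message itself notes: the hundreds of per-object GETs issued at startup drain the bucket, and later requests queue for their turn (here for about a second). The limiter is configured on the client's rest.Config; a sketch with illustrative values, since the kubelet's actual kubeAPIQPS/kubeAPIBurst settings may differ:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Token bucket: 50 sustained requests/s, bursts up to 100. When the
	// bucket empties, each request waits and client-go logs the
	// "Waited for ... due to client-side throttling" message seen above.
	cfg.RateLimiter = flowcontrol.NewTokenBucketRateLimiter(50, 100)
	client := kubernetes.NewForConfigOrDie(cfg)
	_ = client // issue requests as usual; throttling is transparent
}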
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/8a9d91fa-d887-4128-af43-cfe3cad79784-machine-api-operator-tls") pod "machine-api-operator-5694c8668f-rx42r" (UID: "8a9d91fa-d887-4128-af43-cfe3cad79784") : failed to sync secret cache: timed out waiting for the condition Jan 22 09:08:20 crc kubenswrapper[4811]: E0122 09:08:20.864018 4811 configmap.go:193] Couldn't get configMap openshift-console/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 22 09:08:20 crc kubenswrapper[4811]: E0122 09:08:20.864105 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f58eb1d8-bb02-4af7-857c-138518c5bbf2-trusted-ca-bundle podName:f58eb1d8-bb02-4af7-857c-138518c5bbf2 nodeName:}" failed. No retries permitted until 2026-01-22 09:08:21.364095145 +0000 UTC m=+145.686282267 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/f58eb1d8-bb02-4af7-857c-138518c5bbf2-trusted-ca-bundle") pod "console-f9d7485db-jhptg" (UID: "f58eb1d8-bb02-4af7-857c-138518c5bbf2") : failed to sync configmap cache: timed out waiting for the condition Jan 22 09:08:20 crc kubenswrapper[4811]: E0122 09:08:20.866476 4811 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition Jan 22 09:08:20 crc kubenswrapper[4811]: E0122 09:08:20.866514 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8a9d91fa-d887-4128-af43-cfe3cad79784-images podName:8a9d91fa-d887-4128-af43-cfe3cad79784 nodeName:}" failed. No retries permitted until 2026-01-22 09:08:21.366504267 +0000 UTC m=+145.688691390 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/8a9d91fa-d887-4128-af43-cfe3cad79784-images") pod "machine-api-operator-5694c8668f-rx42r" (UID: "8a9d91fa-d887-4128-af43-cfe3cad79784") : failed to sync configmap cache: timed out waiting for the condition Jan 22 09:08:20 crc kubenswrapper[4811]: E0122 09:08:20.867606 4811 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Jan 22 09:08:20 crc kubenswrapper[4811]: E0122 09:08:20.868340 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8a9d91fa-d887-4128-af43-cfe3cad79784-config podName:8a9d91fa-d887-4128-af43-cfe3cad79784 nodeName:}" failed. No retries permitted until 2026-01-22 09:08:21.368327646 +0000 UTC m=+145.690514768 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/8a9d91fa-d887-4128-af43-cfe3cad79784-config") pod "machine-api-operator-5694c8668f-rx42r" (UID: "8a9d91fa-d887-4128-af43-cfe3cad79784") : failed to sync configmap cache: timed out waiting for the condition Jan 22 09:08:20 crc kubenswrapper[4811]: E0122 09:08:20.868361 4811 secret.go:188] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: failed to sync secret cache: timed out waiting for the condition Jan 22 09:08:20 crc kubenswrapper[4811]: E0122 09:08:20.868371 4811 configmap.go:193] Couldn't get configMap openshift-console/service-ca: failed to sync configmap cache: timed out waiting for the condition Jan 22 09:08:20 crc kubenswrapper[4811]: E0122 09:08:20.868393 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d71c966-5dcc-4f11-b21e-8c60ba5b7b57-samples-operator-tls podName:0d71c966-5dcc-4f11-b21e-8c60ba5b7b57 nodeName:}" failed. No retries permitted until 2026-01-22 09:08:21.368384623 +0000 UTC m=+145.690571746 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/0d71c966-5dcc-4f11-b21e-8c60ba5b7b57-samples-operator-tls") pod "cluster-samples-operator-665b6dd947-6l9qr" (UID: "0d71c966-5dcc-4f11-b21e-8c60ba5b7b57") : failed to sync secret cache: timed out waiting for the condition Jan 22 09:08:20 crc kubenswrapper[4811]: E0122 09:08:20.868403 4811 configmap.go:193] Couldn't get configMap openshift-console/oauth-serving-cert: failed to sync configmap cache: timed out waiting for the condition Jan 22 09:08:20 crc kubenswrapper[4811]: E0122 09:08:20.868405 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f58eb1d8-bb02-4af7-857c-138518c5bbf2-service-ca podName:f58eb1d8-bb02-4af7-857c-138518c5bbf2 nodeName:}" failed. No retries permitted until 2026-01-22 09:08:21.368399601 +0000 UTC m=+145.690586725 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/f58eb1d8-bb02-4af7-857c-138518c5bbf2-service-ca") pod "console-f9d7485db-jhptg" (UID: "f58eb1d8-bb02-4af7-857c-138518c5bbf2") : failed to sync configmap cache: timed out waiting for the condition Jan 22 09:08:20 crc kubenswrapper[4811]: E0122 09:08:20.868426 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f58eb1d8-bb02-4af7-857c-138518c5bbf2-oauth-serving-cert podName:f58eb1d8-bb02-4af7-857c-138518c5bbf2 nodeName:}" failed. No retries permitted until 2026-01-22 09:08:21.368420561 +0000 UTC m=+145.690607684 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/f58eb1d8-bb02-4af7-857c-138518c5bbf2-oauth-serving-cert") pod "console-f9d7485db-jhptg" (UID: "f58eb1d8-bb02-4af7-857c-138518c5bbf2") : failed to sync configmap cache: timed out waiting for the condition Jan 22 09:08:20 crc kubenswrapper[4811]: E0122 09:08:20.870144 4811 configmap.go:193] Couldn't get configMap openshift-console/console-config: failed to sync configmap cache: timed out waiting for the condition Jan 22 09:08:20 crc kubenswrapper[4811]: E0122 09:08:20.870268 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f58eb1d8-bb02-4af7-857c-138518c5bbf2-console-config podName:f58eb1d8-bb02-4af7-857c-138518c5bbf2 nodeName:}" failed. 
No retries permitted until 2026-01-22 09:08:21.370258507 +0000 UTC m=+145.692445630 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/f58eb1d8-bb02-4af7-857c-138518c5bbf2-console-config") pod "console-f9d7485db-jhptg" (UID: "f58eb1d8-bb02-4af7-857c-138518c5bbf2") : failed to sync configmap cache: timed out waiting for the condition Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.877539 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.897651 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.917908 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.937506 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.957749 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.977688 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 22 09:08:20 crc kubenswrapper[4811]: I0122 09:08:20.998225 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.017670 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.038856 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.057503 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.077894 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.097854 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.122219 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.137452 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.157235 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.177870 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 22 09:08:21 crc 
kubenswrapper[4811]: I0122 09:08:21.197753 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.218304 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.237824 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.257718 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.277194 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.297356 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.317669 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.337804 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.357604 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.377133 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8a9d91fa-d887-4128-af43-cfe3cad79784-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-rx42r\" (UID: \"8a9d91fa-d887-4128-af43-cfe3cad79784\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rx42r" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.377240 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f58eb1d8-bb02-4af7-857c-138518c5bbf2-trusted-ca-bundle\") pod \"console-f9d7485db-jhptg\" (UID: \"f58eb1d8-bb02-4af7-857c-138518c5bbf2\") " pod="openshift-console/console-f9d7485db-jhptg" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.377338 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8a9d91fa-d887-4128-af43-cfe3cad79784-images\") pod \"machine-api-operator-5694c8668f-rx42r\" (UID: \"8a9d91fa-d887-4128-af43-cfe3cad79784\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rx42r" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.377451 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f58eb1d8-bb02-4af7-857c-138518c5bbf2-oauth-serving-cert\") pod \"console-f9d7485db-jhptg\" (UID: \"f58eb1d8-bb02-4af7-857c-138518c5bbf2\") " pod="openshift-console/console-f9d7485db-jhptg" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.377574 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0d71c966-5dcc-4f11-b21e-8c60ba5b7b57-samples-operator-tls\") pod 
\"cluster-samples-operator-665b6dd947-6l9qr\" (UID: \"0d71c966-5dcc-4f11-b21e-8c60ba5b7b57\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6l9qr" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.377683 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a9d91fa-d887-4128-af43-cfe3cad79784-config\") pod \"machine-api-operator-5694c8668f-rx42r\" (UID: \"8a9d91fa-d887-4128-af43-cfe3cad79784\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rx42r" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.377780 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f58eb1d8-bb02-4af7-857c-138518c5bbf2-console-config\") pod \"console-f9d7485db-jhptg\" (UID: \"f58eb1d8-bb02-4af7-857c-138518c5bbf2\") " pod="openshift-console/console-f9d7485db-jhptg" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.377861 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f58eb1d8-bb02-4af7-857c-138518c5bbf2-service-ca\") pod \"console-f9d7485db-jhptg\" (UID: \"f58eb1d8-bb02-4af7-857c-138518c5bbf2\") " pod="openshift-console/console-f9d7485db-jhptg" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.377947 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f58eb1d8-bb02-4af7-857c-138518c5bbf2-console-serving-cert\") pod \"console-f9d7485db-jhptg\" (UID: \"f58eb1d8-bb02-4af7-857c-138518c5bbf2\") " pod="openshift-console/console-f9d7485db-jhptg" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.378024 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f58eb1d8-bb02-4af7-857c-138518c5bbf2-console-oauth-config\") pod \"console-f9d7485db-jhptg\" (UID: \"f58eb1d8-bb02-4af7-857c-138518c5bbf2\") " pod="openshift-console/console-f9d7485db-jhptg" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.378258 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.397217 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.417329 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.437333 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.457978 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.477167 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.497446 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.517543 4811 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.549167 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msb22\" (UniqueName: \"kubernetes.io/projected/189dea5f-ae62-4140-bd03-14a548c51684-kube-api-access-msb22\") pod \"etcd-operator-b45778765-sxpvs\" (UID: \"189dea5f-ae62-4140-bd03-14a548c51684\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sxpvs" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.557887 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.588252 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jmhk\" (UniqueName: \"kubernetes.io/projected/13db87c4-7297-4265-879d-07ad09539aba-kube-api-access-7jmhk\") pod \"openshift-config-operator-7777fb866f-bssrl\" (UID: \"13db87c4-7297-4265-879d-07ad09539aba\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bssrl" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.607916 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr89x\" (UniqueName: \"kubernetes.io/projected/799c1556-92e6-43b8-a620-c7211a2ce813-kube-api-access-kr89x\") pod \"controller-manager-879f6c89f-vx8k5\" (UID: \"799c1556-92e6-43b8-a620-c7211a2ce813\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vx8k5" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.627783 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnz52\" (UniqueName: \"kubernetes.io/projected/7285d1e4-79ce-4ada-b15f-b1df68271703-kube-api-access-lnz52\") pod \"route-controller-manager-6576b87f9c-8fflv\" (UID: \"7285d1e4-79ce-4ada-b15f-b1df68271703\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8fflv" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.630651 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bssrl" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.644540 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vx8k5" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.649351 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wchm\" (UniqueName: \"kubernetes.io/projected/ed39dbae-8bd1-43b9-aec3-4fb84807d65d-kube-api-access-2wchm\") pod \"ingress-operator-5b745b69d9-4ftvq\" (UID: \"ed39dbae-8bd1-43b9-aec3-4fb84807d65d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4ftvq" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.650797 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-sxpvs" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.669376 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c0eff054-47bc-4d81-acb9-06ef98b170fa-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-4wjd4\" (UID: \"c0eff054-47bc-4d81-acb9-06ef98b170fa\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4wjd4" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.692601 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ed39dbae-8bd1-43b9-aec3-4fb84807d65d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-4ftvq\" (UID: \"ed39dbae-8bd1-43b9-aec3-4fb84807d65d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4ftvq" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.703453 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4ftvq" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.712327 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slbc2\" (UniqueName: \"kubernetes.io/projected/c9b6feb2-2c7a-41ea-b160-f461f05057d4-kube-api-access-slbc2\") pod \"dns-operator-744455d44c-2hm84\" (UID: \"c9b6feb2-2c7a-41ea-b160-f461f05057d4\") " pod="openshift-dns-operator/dns-operator-744455d44c-2hm84" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.720745 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-2hm84" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.754615 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5pj4\" (UniqueName: \"kubernetes.io/projected/80529f80-c8f8-4bc5-83a6-eb19a23401f0-kube-api-access-b5pj4\") pod \"openshift-apiserver-operator-796bbdcf4f-knkwh\" (UID: \"80529f80-c8f8-4bc5-83a6-eb19a23401f0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-knkwh" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.770385 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rpfg\" (UniqueName: \"kubernetes.io/projected/05a61535-90ca-4127-991a-0b3f1c110f5a-kube-api-access-6rpfg\") pod \"kube-storage-version-migrator-operator-b67b599dd-nljqw\" (UID: \"05a61535-90ca-4127-991a-0b3f1c110f5a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nljqw" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.789369 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.789411 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:08:21 crc 
kubenswrapper[4811]: I0122 09:08:21.794006 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmndl\" (UniqueName: \"kubernetes.io/projected/c910b62e-8a7e-4a3d-b8b0-f90384ec999f-kube-api-access-bmndl\") pod \"apiserver-7bbb656c7d-776jv\" (UID: \"c910b62e-8a7e-4a3d-b8b0-f90384ec999f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-776jv" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.794096 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.795476 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.818148 4811 request.go:700] Waited for 1.951225293s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/serviceaccounts/cluster-image-registry-operator/token Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.834997 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-742d7\" (UniqueName: \"kubernetes.io/projected/c0eff054-47bc-4d81-acb9-06ef98b170fa-kube-api-access-742d7\") pod \"cluster-image-registry-operator-dc59b4c8b-4wjd4\" (UID: \"c0eff054-47bc-4d81-acb9-06ef98b170fa\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4wjd4" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.842283 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-bssrl"] Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.843002 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-sxpvs"] Jan 22 09:08:21 crc kubenswrapper[4811]: W0122 09:08:21.855251 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod189dea5f_ae62_4140_bd03_14a548c51684.slice/crio-97261d3cdee859ada194871f7f8ca829b4643b800f36c0b5df54383f69bcd4a3 WatchSource:0}: Error finding container 97261d3cdee859ada194871f7f8ca829b4643b800f36c0b5df54383f69bcd4a3: Status 404 returned error can't find the container with id 97261d3cdee859ada194871f7f8ca829b4643b800f36c0b5df54383f69bcd4a3 Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.862052 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vx8k5"] Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.891056 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4wjd4" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.895671 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb448\" (UniqueName: \"kubernetes.io/projected/ab671388-f736-4a76-a421-ba3413830807-kube-api-access-hb448\") pod \"authentication-operator-69f744f599-md7dt\" (UID: \"ab671388-f736-4a76-a421-ba3413830807\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-md7dt" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.901474 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.901614 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8fflv" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.903298 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.911793 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-4ftvq"] Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.917225 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.918311 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-776jv" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.922023 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.923807 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-knkwh" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.936302 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2hm84"] Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.955986 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29m59\" (UniqueName: \"kubernetes.io/projected/db487354-8574-45b6-b639-4d4afc8a7698-kube-api-access-29m59\") pod \"console-operator-58897d9998-2mphl\" (UID: \"db487354-8574-45b6-b639-4d4afc8a7698\") " pod="openshift-console-operator/console-operator-58897d9998-2mphl" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.958052 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 22 09:08:21 crc kubenswrapper[4811]: I0122 09:08:21.978424 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.008523 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.008641 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 22 09:08:22 crc kubenswrapper[4811]: E0122 09:08:22.008735 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:10:24.008716223 +0000 UTC m=+268.330903346 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.009092 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.009180 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.009863 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.020086 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.023834 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.030805 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nljqw" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.037486 4811 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.062798 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.074433 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-md7dt" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.114051 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-gmwq7\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.114082 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlj6w\" (UniqueName: \"kubernetes.io/projected/73533561-14fb-4481-872e-1b47096f9d30-kube-api-access-vlj6w\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.114099 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/708ddef5-479e-44ef-a189-c41123a73bbe-audit-dir\") pod \"oauth-openshift-558db77b4-gmwq7\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.114114 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-gmwq7\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.114131 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/78341d4a-0228-4056-ae13-9619ea5c4c35-auth-proxy-config\") pod \"machine-approver-56656f9798-trtbx\" (UID: \"78341d4a-0228-4056-ae13-9619ea5c4c35\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-trtbx" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.114163 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqs9m\" (UniqueName: \"kubernetes.io/projected/387d7c1a-1589-4377-9566-a45d8f498f38-kube-api-access-xqs9m\") pod \"apiserver-76f77b778f-jjjjp\" (UID: \"387d7c1a-1589-4377-9566-a45d8f498f38\") " pod="openshift-apiserver/apiserver-76f77b778f-jjjjp" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.114178 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-gmwq7\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.114196 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73533561-14fb-4481-872e-1b47096f9d30-trusted-ca\") pod 
\"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.114211 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/73533561-14fb-4481-872e-1b47096f9d30-bound-sa-token\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.114228 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-gmwq7\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.114245 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5dd00b53-587c-4992-87da-f3b59054b7ff-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-j6dqt\" (UID: \"5dd00b53-587c-4992-87da-f3b59054b7ff\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j6dqt" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.114285 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/708ddef5-479e-44ef-a189-c41123a73bbe-audit-policies\") pod \"oauth-openshift-558db77b4-gmwq7\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.114305 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/73533561-14fb-4481-872e-1b47096f9d30-ca-trust-extracted\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.114318 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/78341d4a-0228-4056-ae13-9619ea5c4c35-machine-approver-tls\") pod \"machine-approver-56656f9798-trtbx\" (UID: \"78341d4a-0228-4056-ae13-9619ea5c4c35\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-trtbx" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.114335 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/387d7c1a-1589-4377-9566-a45d8f498f38-audit-dir\") pod \"apiserver-76f77b778f-jjjjp\" (UID: \"387d7c1a-1589-4377-9566-a45d8f498f38\") " pod="openshift-apiserver/apiserver-76f77b778f-jjjjp" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.114353 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/387d7c1a-1589-4377-9566-a45d8f498f38-encryption-config\") pod \"apiserver-76f77b778f-jjjjp\" (UID: 
\"387d7c1a-1589-4377-9566-a45d8f498f38\") " pod="openshift-apiserver/apiserver-76f77b778f-jjjjp" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.114370 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz9vd\" (UniqueName: \"kubernetes.io/projected/5dd00b53-587c-4992-87da-f3b59054b7ff-kube-api-access-wz9vd\") pod \"openshift-controller-manager-operator-756b6f6bc6-j6dqt\" (UID: \"5dd00b53-587c-4992-87da-f3b59054b7ff\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j6dqt" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.114383 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/387d7c1a-1589-4377-9566-a45d8f498f38-image-import-ca\") pod \"apiserver-76f77b778f-jjjjp\" (UID: \"387d7c1a-1589-4377-9566-a45d8f498f38\") " pod="openshift-apiserver/apiserver-76f77b778f-jjjjp" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.114419 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/73533561-14fb-4481-872e-1b47096f9d30-registry-certificates\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.114448 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/387d7c1a-1589-4377-9566-a45d8f498f38-config\") pod \"apiserver-76f77b778f-jjjjp\" (UID: \"387d7c1a-1589-4377-9566-a45d8f498f38\") " pod="openshift-apiserver/apiserver-76f77b778f-jjjjp" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.114464 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/387d7c1a-1589-4377-9566-a45d8f498f38-etcd-serving-ca\") pod \"apiserver-76f77b778f-jjjjp\" (UID: \"387d7c1a-1589-4377-9566-a45d8f498f38\") " pod="openshift-apiserver/apiserver-76f77b778f-jjjjp" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.114481 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/73533561-14fb-4481-872e-1b47096f9d30-installation-pull-secrets\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.114496 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-gmwq7\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.114510 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7z6s\" (UniqueName: \"kubernetes.io/projected/708ddef5-479e-44ef-a189-c41123a73bbe-kube-api-access-g7z6s\") pod \"oauth-openshift-558db77b4-gmwq7\" (UID: 
\"708ddef5-479e-44ef-a189-c41123a73bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.114533 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/387d7c1a-1589-4377-9566-a45d8f498f38-node-pullsecrets\") pod \"apiserver-76f77b778f-jjjjp\" (UID: \"387d7c1a-1589-4377-9566-a45d8f498f38\") " pod="openshift-apiserver/apiserver-76f77b778f-jjjjp" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.114546 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-gmwq7\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.114560 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-gmwq7\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.114574 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/387d7c1a-1589-4377-9566-a45d8f498f38-serving-cert\") pod \"apiserver-76f77b778f-jjjjp\" (UID: \"387d7c1a-1589-4377-9566-a45d8f498f38\") " pod="openshift-apiserver/apiserver-76f77b778f-jjjjp" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.114588 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/387d7c1a-1589-4377-9566-a45d8f498f38-etcd-client\") pod \"apiserver-76f77b778f-jjjjp\" (UID: \"387d7c1a-1589-4377-9566-a45d8f498f38\") " pod="openshift-apiserver/apiserver-76f77b778f-jjjjp" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.117743 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.117787 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-gmwq7\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.117809 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-gmwq7\" (UID: 
\"708ddef5-479e-44ef-a189-c41123a73bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.117825 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-gmwq7\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.117842 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-gmwq7\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.117876 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/387d7c1a-1589-4377-9566-a45d8f498f38-audit\") pod \"apiserver-76f77b778f-jjjjp\" (UID: \"387d7c1a-1589-4377-9566-a45d8f498f38\") " pod="openshift-apiserver/apiserver-76f77b778f-jjjjp" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.117906 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/387d7c1a-1589-4377-9566-a45d8f498f38-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jjjjp\" (UID: \"387d7c1a-1589-4377-9566-a45d8f498f38\") " pod="openshift-apiserver/apiserver-76f77b778f-jjjjp" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.117922 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/73533561-14fb-4481-872e-1b47096f9d30-registry-tls\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.117935 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5dd00b53-587c-4992-87da-f3b59054b7ff-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-j6dqt\" (UID: \"5dd00b53-587c-4992-87da-f3b59054b7ff\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j6dqt" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.117952 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78341d4a-0228-4056-ae13-9619ea5c4c35-config\") pod \"machine-approver-56656f9798-trtbx\" (UID: \"78341d4a-0228-4056-ae13-9619ea5c4c35\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-trtbx" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.117967 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chlnh\" (UniqueName: \"kubernetes.io/projected/78341d4a-0228-4056-ae13-9619ea5c4c35-kube-api-access-chlnh\") pod \"machine-approver-56656f9798-trtbx\" (UID: 
\"78341d4a-0228-4056-ae13-9619ea5c4c35\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-trtbx" Jan 22 09:08:22 crc kubenswrapper[4811]: E0122 09:08:22.118229 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:08:22.618218526 +0000 UTC m=+146.940405649 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nczwv" (UID: "73533561-14fb-4481-872e-1b47096f9d30") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.118440 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.119315 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a9d91fa-d887-4128-af43-cfe3cad79784-config\") pod \"machine-api-operator-5694c8668f-rx42r\" (UID: \"8a9d91fa-d887-4128-af43-cfe3cad79784\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rx42r" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.130796 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8fflv"] Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.137386 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.144949 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f58eb1d8-bb02-4af7-857c-138518c5bbf2-console-serving-cert\") pod \"console-f9d7485db-jhptg\" (UID: \"f58eb1d8-bb02-4af7-857c-138518c5bbf2\") " pod="openshift-console/console-f9d7485db-jhptg" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.159498 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.167268 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4wjd4"] Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.179434 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.201350 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 22 09:08:22 crc kubenswrapper[4811]: W0122 09:08:22.202167 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0eff054_47bc_4d81_acb9_06ef98b170fa.slice/crio-f7f798f6b717b5964fa0214bc18eb95edc8b10995db1f86ab2f2806e664f9234 WatchSource:0}: Error finding container f7f798f6b717b5964fa0214bc18eb95edc8b10995db1f86ab2f2806e664f9234: Status 404 returned error can't find the container with id f7f798f6b717b5964fa0214bc18eb95edc8b10995db1f86ab2f2806e664f9234 Jan 22 09:08:22 crc 
kubenswrapper[4811]: I0122 09:08:22.212661 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.218466 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.218604 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.218751 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/73533561-14fb-4481-872e-1b47096f9d30-registry-tls\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.218779 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5dd00b53-587c-4992-87da-f3b59054b7ff-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-j6dqt\" (UID: \"5dd00b53-587c-4992-87da-f3b59054b7ff\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j6dqt" Jan 22 09:08:22 crc kubenswrapper[4811]: E0122 09:08:22.219143 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:08:22.719117295 +0000 UTC m=+147.041304418 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.219873 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/387d7c1a-1589-4377-9566-a45d8f498f38-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jjjjp\" (UID: \"387d7c1a-1589-4377-9566-a45d8f498f38\") " pod="openshift-apiserver/apiserver-76f77b778f-jjjjp" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.219920 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/80663d92-2281-4a3d-9232-f1fc19873d88-config-volume\") pod \"collect-profiles-29484540-kbb7d\" (UID: \"80663d92-2281-4a3d-9232-f1fc19873d88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484540-kbb7d" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.219941 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78341d4a-0228-4056-ae13-9619ea5c4c35-config\") pod \"machine-approver-56656f9798-trtbx\" (UID: \"78341d4a-0228-4056-ae13-9619ea5c4c35\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-trtbx" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.219972 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chlnh\" (UniqueName: \"kubernetes.io/projected/78341d4a-0228-4056-ae13-9619ea5c4c35-kube-api-access-chlnh\") pod \"machine-approver-56656f9798-trtbx\" (UID: \"78341d4a-0228-4056-ae13-9619ea5c4c35\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-trtbx" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.219991 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghb2p\" (UniqueName: \"kubernetes.io/projected/f382688c-fb9f-4169-b4eb-3466e08dbd7c-kube-api-access-ghb2p\") pod \"machine-config-operator-74547568cd-fcb4z\" (UID: \"f382688c-fb9f-4169-b4eb-3466e08dbd7c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fcb4z" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.220009 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-gmwq7\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.220025 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccxnd\" (UniqueName: \"kubernetes.io/projected/1de7ea7e-b219-47b4-9ba9-ef3688eda036-kube-api-access-ccxnd\") pod \"csi-hostpathplugin-vrdwc\" (UID: \"1de7ea7e-b219-47b4-9ba9-ef3688eda036\") " pod="hostpath-provisioner/csi-hostpathplugin-vrdwc" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.220040 4811 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlj6w\" (UniqueName: \"kubernetes.io/projected/73533561-14fb-4481-872e-1b47096f9d30-kube-api-access-vlj6w\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.220055 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/78341d4a-0228-4056-ae13-9619ea5c4c35-auth-proxy-config\") pod \"machine-approver-56656f9798-trtbx\" (UID: \"78341d4a-0228-4056-ae13-9619ea5c4c35\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-trtbx" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.220072 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/2716a2dc-25e2-4a62-8264-41d299b3cd55-default-certificate\") pod \"router-default-5444994796-l8phl\" (UID: \"2716a2dc-25e2-4a62-8264-41d299b3cd55\") " pod="openshift-ingress/router-default-5444994796-l8phl" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.220085 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/30038e4b-2056-44a4-b3f0-1ec1433a7b4d-certs\") pod \"machine-config-server-8fxth\" (UID: \"30038e4b-2056-44a4-b3f0-1ec1433a7b4d\") " pod="openshift-machine-config-operator/machine-config-server-8fxth" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.220104 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk26t\" (UniqueName: \"kubernetes.io/projected/0803a721-b862-4696-a752-e5af589ced0b-kube-api-access-kk26t\") pod \"service-ca-9c57cc56f-6jdmv\" (UID: \"0803a721-b862-4696-a752-e5af589ced0b\") " pod="openshift-service-ca/service-ca-9c57cc56f-6jdmv" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.220131 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-gmwq7\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.220147 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/1de7ea7e-b219-47b4-9ba9-ef3688eda036-plugins-dir\") pod \"csi-hostpathplugin-vrdwc\" (UID: \"1de7ea7e-b219-47b4-9ba9-ef3688eda036\") " pod="hostpath-provisioner/csi-hostpathplugin-vrdwc" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.220174 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c488ec3a-dfdf-46dd-8f3f-1346232394d3-tmpfs\") pod \"packageserver-d55dfcdfc-lzdbl\" (UID: \"c488ec3a-dfdf-46dd-8f3f-1346232394d3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lzdbl" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.220191 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldr5v\" (UniqueName: 
\"kubernetes.io/projected/8b7094aa-cc4a-49eb-be77-715a4efbc1d0-kube-api-access-ldr5v\") pod \"control-plane-machine-set-operator-78cbb6b69f-9pxnj\" (UID: \"8b7094aa-cc4a-49eb-be77-715a4efbc1d0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9pxnj" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.220207 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0d9fa190-3073-4b2b-a348-f9a8e1f994b2-proxy-tls\") pod \"machine-config-controller-84d6567774-2jrk9\" (UID: \"0d9fa190-3073-4b2b-a348-f9a8e1f994b2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2jrk9" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.220234 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73533561-14fb-4481-872e-1b47096f9d30-trusted-ca\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.220268 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/73533561-14fb-4481-872e-1b47096f9d30-bound-sa-token\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.220286 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2716a2dc-25e2-4a62-8264-41d299b3cd55-service-ca-bundle\") pod \"router-default-5444994796-l8phl\" (UID: \"2716a2dc-25e2-4a62-8264-41d299b3cd55\") " pod="openshift-ingress/router-default-5444994796-l8phl" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.220301 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/30038e4b-2056-44a4-b3f0-1ec1433a7b4d-node-bootstrap-token\") pod \"machine-config-server-8fxth\" (UID: \"30038e4b-2056-44a4-b3f0-1ec1433a7b4d\") " pod="openshift-machine-config-operator/machine-config-server-8fxth" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.220317 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/1de7ea7e-b219-47b4-9ba9-ef3688eda036-mountpoint-dir\") pod \"csi-hostpathplugin-vrdwc\" (UID: \"1de7ea7e-b219-47b4-9ba9-ef3688eda036\") " pod="hostpath-provisioner/csi-hostpathplugin-vrdwc" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.220350 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-gmwq7\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.220369 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e7343a26-0e21-4d61-ad1c-c3c5479a89e1-cert\") pod \"ingress-canary-9624f\" (UID: 
\"e7343a26-0e21-4d61-ad1c-c3c5479a89e1\") " pod="openshift-ingress-canary/ingress-canary-9624f" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.220387 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/535085bd-3682-4133-a5e6-e5f1149e7d24-profile-collector-cert\") pod \"olm-operator-6b444d44fb-wcwwm\" (UID: \"535085bd-3682-4133-a5e6-e5f1149e7d24\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wcwwm" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.220411 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/78341d4a-0228-4056-ae13-9619ea5c4c35-machine-approver-tls\") pod \"machine-approver-56656f9798-trtbx\" (UID: \"78341d4a-0228-4056-ae13-9619ea5c4c35\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-trtbx" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.220427 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f382688c-fb9f-4169-b4eb-3466e08dbd7c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-fcb4z\" (UID: \"f382688c-fb9f-4169-b4eb-3466e08dbd7c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fcb4z" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.220443 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9df8d6f7-ac70-440a-94a9-e4ee69c104a2-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kj6fk\" (UID: \"9df8d6f7-ac70-440a-94a9-e4ee69c104a2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kj6fk" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.220457 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhx9d\" (UniqueName: \"kubernetes.io/projected/e7343a26-0e21-4d61-ad1c-c3c5479a89e1-kube-api-access-nhx9d\") pod \"ingress-canary-9624f\" (UID: \"e7343a26-0e21-4d61-ad1c-c3c5479a89e1\") " pod="openshift-ingress-canary/ingress-canary-9624f" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.220493 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/387d7c1a-1589-4377-9566-a45d8f498f38-encryption-config\") pod \"apiserver-76f77b778f-jjjjp\" (UID: \"387d7c1a-1589-4377-9566-a45d8f498f38\") " pod="openshift-apiserver/apiserver-76f77b778f-jjjjp" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.220507 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mths7\" (UniqueName: \"kubernetes.io/projected/2716a2dc-25e2-4a62-8264-41d299b3cd55-kube-api-access-mths7\") pod \"router-default-5444994796-l8phl\" (UID: \"2716a2dc-25e2-4a62-8264-41d299b3cd55\") " pod="openshift-ingress/router-default-5444994796-l8phl" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.220521 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c488ec3a-dfdf-46dd-8f3f-1346232394d3-webhook-cert\") pod \"packageserver-d55dfcdfc-lzdbl\" (UID: \"c488ec3a-dfdf-46dd-8f3f-1346232394d3\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lzdbl" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.220535 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/88b52253-3840-42b0-aa3c-d8708274dcfa-config-volume\") pod \"dns-default-v2nnf\" (UID: \"88b52253-3840-42b0-aa3c-d8708274dcfa\") " pod="openshift-dns/dns-default-v2nnf" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.220550 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wz9vd\" (UniqueName: \"kubernetes.io/projected/5dd00b53-587c-4992-87da-f3b59054b7ff-kube-api-access-wz9vd\") pod \"openshift-controller-manager-operator-756b6f6bc6-j6dqt\" (UID: \"5dd00b53-587c-4992-87da-f3b59054b7ff\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j6dqt" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.220567 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cngll\" (UniqueName: \"kubernetes.io/projected/80663d92-2281-4a3d-9232-f1fc19873d88-kube-api-access-cngll\") pod \"collect-profiles-29484540-kbb7d\" (UID: \"80663d92-2281-4a3d-9232-f1fc19873d88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484540-kbb7d" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.220583 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvvdc\" (UniqueName: \"kubernetes.io/projected/c488ec3a-dfdf-46dd-8f3f-1346232394d3-kube-api-access-qvvdc\") pod \"packageserver-d55dfcdfc-lzdbl\" (UID: \"c488ec3a-dfdf-46dd-8f3f-1346232394d3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lzdbl" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.220598 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cb22b2ae-6c13-482b-b827-5200e2be87ca-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-78zjg\" (UID: \"cb22b2ae-6c13-482b-b827-5200e2be87ca\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-78zjg" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.220645 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/73533561-14fb-4481-872e-1b47096f9d30-registry-certificates\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.220700 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c47199c0-d608-4ece-913c-b164a4d16f21-profile-collector-cert\") pod \"catalog-operator-68c6474976-qnxwq\" (UID: \"c47199c0-d608-4ece-913c-b164a4d16f21\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qnxwq" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.220715 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/88b52253-3840-42b0-aa3c-d8708274dcfa-metrics-tls\") pod \"dns-default-v2nnf\" (UID: \"88b52253-3840-42b0-aa3c-d8708274dcfa\") " pod="openshift-dns/dns-default-v2nnf" 
Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.220732 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1de7ea7e-b219-47b4-9ba9-ef3688eda036-registration-dir\") pod \"csi-hostpathplugin-vrdwc\" (UID: \"1de7ea7e-b219-47b4-9ba9-ef3688eda036\") " pod="hostpath-provisioner/csi-hostpathplugin-vrdwc" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.220803 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/387d7c1a-1589-4377-9566-a45d8f498f38-config\") pod \"apiserver-76f77b778f-jjjjp\" (UID: \"387d7c1a-1589-4377-9566-a45d8f498f38\") " pod="openshift-apiserver/apiserver-76f77b778f-jjjjp" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.220818 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/387d7c1a-1589-4377-9566-a45d8f498f38-etcd-serving-ca\") pod \"apiserver-76f77b778f-jjjjp\" (UID: \"387d7c1a-1589-4377-9566-a45d8f498f38\") " pod="openshift-apiserver/apiserver-76f77b778f-jjjjp" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.220855 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-gmwq7\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.220870 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7z6s\" (UniqueName: \"kubernetes.io/projected/708ddef5-479e-44ef-a189-c41123a73bbe-kube-api-access-g7z6s\") pod \"oauth-openshift-558db77b4-gmwq7\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.220887 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c488ec3a-dfdf-46dd-8f3f-1346232394d3-apiservice-cert\") pod \"packageserver-d55dfcdfc-lzdbl\" (UID: \"c488ec3a-dfdf-46dd-8f3f-1346232394d3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lzdbl" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.220906 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c674e914-755f-48a4-97ca-03e1a69a021a-config\") pod \"kube-apiserver-operator-766d6c64bb-j9b4b\" (UID: \"c674e914-755f-48a4-97ca-03e1a69a021a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j9b4b" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.220933 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c674e914-755f-48a4-97ca-03e1a69a021a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-j9b4b\" (UID: \"c674e914-755f-48a4-97ca-03e1a69a021a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j9b4b" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.220959 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/387d7c1a-1589-4377-9566-a45d8f498f38-serving-cert\") pod \"apiserver-76f77b778f-jjjjp\" (UID: \"387d7c1a-1589-4377-9566-a45d8f498f38\") " pod="openshift-apiserver/apiserver-76f77b778f-jjjjp" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.220974 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-gmwq7\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.221258 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9zm7\" (UniqueName: \"kubernetes.io/projected/c1be96c2-9c8f-4c6d-8dff-02e0898a963b-kube-api-access-b9zm7\") pod \"package-server-manager-789f6589d5-5w8tw\" (UID: \"c1be96c2-9c8f-4c6d-8dff-02e0898a963b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5w8tw" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.221275 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8b7094aa-cc4a-49eb-be77-715a4efbc1d0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-9pxnj\" (UID: \"8b7094aa-cc4a-49eb-be77-715a4efbc1d0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9pxnj" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.221320 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/387d7c1a-1589-4377-9566-a45d8f498f38-etcd-client\") pod \"apiserver-76f77b778f-jjjjp\" (UID: \"387d7c1a-1589-4377-9566-a45d8f498f38\") " pod="openshift-apiserver/apiserver-76f77b778f-jjjjp" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.221335 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-gmwq7\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.221354 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lklt\" (UniqueName: \"kubernetes.io/projected/cb22b2ae-6c13-482b-b827-5200e2be87ca-kube-api-access-2lklt\") pod \"multus-admission-controller-857f4d67dd-78zjg\" (UID: \"cb22b2ae-6c13-482b-b827-5200e2be87ca\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-78zjg" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.221387 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-gmwq7\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.221405 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" 
(UniqueName: \"kubernetes.io/secret/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-gmwq7\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.221423 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f194f7f-c770-471d-8bcd-c2fd613e0b46-config\") pod \"kube-controller-manager-operator-78b949d7b-wvhkk\" (UID: \"9f194f7f-c770-471d-8bcd-c2fd613e0b46\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wvhkk" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.221452 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f382688c-fb9f-4169-b4eb-3466e08dbd7c-proxy-tls\") pod \"machine-config-operator-74547568cd-fcb4z\" (UID: \"f382688c-fb9f-4169-b4eb-3466e08dbd7c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fcb4z" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.221491 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/2716a2dc-25e2-4a62-8264-41d299b3cd55-stats-auth\") pod \"router-default-5444994796-l8phl\" (UID: \"2716a2dc-25e2-4a62-8264-41d299b3cd55\") " pod="openshift-ingress/router-default-5444994796-l8phl" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.221508 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l99th\" (UniqueName: \"kubernetes.io/projected/c47199c0-d608-4ece-913c-b164a4d16f21-kube-api-access-l99th\") pod \"catalog-operator-68c6474976-qnxwq\" (UID: \"c47199c0-d608-4ece-913c-b164a4d16f21\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qnxwq" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.221545 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4b5g\" (UniqueName: \"kubernetes.io/projected/42f53a84-4118-4088-9cb5-39a05774839a-kube-api-access-t4b5g\") pod \"migrator-59844c95c7-zwkzn\" (UID: \"42f53a84-4118-4088-9cb5-39a05774839a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zwkzn" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.221562 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkbdj\" (UniqueName: \"kubernetes.io/projected/30038e4b-2056-44a4-b3f0-1ec1433a7b4d-kube-api-access-dkbdj\") pod \"machine-config-server-8fxth\" (UID: \"30038e4b-2056-44a4-b3f0-1ec1433a7b4d\") " pod="openshift-machine-config-operator/machine-config-server-8fxth" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.221577 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/535085bd-3682-4133-a5e6-e5f1149e7d24-srv-cert\") pod \"olm-operator-6b444d44fb-wcwwm\" (UID: \"535085bd-3682-4133-a5e6-e5f1149e7d24\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wcwwm" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.221593 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/708ddef5-479e-44ef-a189-c41123a73bbe-audit-dir\") pod \"oauth-openshift-558db77b4-gmwq7\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.221610 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-gmwq7\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.221638 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqs9m\" (UniqueName: \"kubernetes.io/projected/387d7c1a-1589-4377-9566-a45d8f498f38-kube-api-access-xqs9m\") pod \"apiserver-76f77b778f-jjjjp\" (UID: \"387d7c1a-1589-4377-9566-a45d8f498f38\") " pod="openshift-apiserver/apiserver-76f77b778f-jjjjp" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.221666 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/1de7ea7e-b219-47b4-9ba9-ef3688eda036-csi-data-dir\") pod \"csi-hostpathplugin-vrdwc\" (UID: \"1de7ea7e-b219-47b4-9ba9-ef3688eda036\") " pod="hostpath-provisioner/csi-hostpathplugin-vrdwc" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.221689 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2716a2dc-25e2-4a62-8264-41d299b3cd55-metrics-certs\") pod \"router-default-5444994796-l8phl\" (UID: \"2716a2dc-25e2-4a62-8264-41d299b3cd55\") " pod="openshift-ingress/router-default-5444994796-l8phl" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.221705 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c173016d-7244-468d-a84f-2ef0e2bf0258-config\") pod \"service-ca-operator-777779d784-5w7fh\" (UID: \"c173016d-7244-468d-a84f-2ef0e2bf0258\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5w7fh" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.221735 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5dd00b53-587c-4992-87da-f3b59054b7ff-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-j6dqt\" (UID: \"5dd00b53-587c-4992-87da-f3b59054b7ff\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j6dqt" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.221751 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/708ddef5-479e-44ef-a189-c41123a73bbe-audit-policies\") pod \"oauth-openshift-558db77b4-gmwq7\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.221766 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0d9fa190-3073-4b2b-a348-f9a8e1f994b2-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-2jrk9\" (UID: 
\"0d9fa190-3073-4b2b-a348-f9a8e1f994b2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2jrk9" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.221783 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/73533561-14fb-4481-872e-1b47096f9d30-ca-trust-extracted\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.221797 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9f194f7f-c770-471d-8bcd-c2fd613e0b46-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-wvhkk\" (UID: \"9f194f7f-c770-471d-8bcd-c2fd613e0b46\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wvhkk" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.221829 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/387d7c1a-1589-4377-9566-a45d8f498f38-audit-dir\") pod \"apiserver-76f77b778f-jjjjp\" (UID: \"387d7c1a-1589-4377-9566-a45d8f498f38\") " pod="openshift-apiserver/apiserver-76f77b778f-jjjjp" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.221843 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/80663d92-2281-4a3d-9232-f1fc19873d88-secret-volume\") pod \"collect-profiles-29484540-kbb7d\" (UID: \"80663d92-2281-4a3d-9232-f1fc19873d88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484540-kbb7d" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.221858 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f382688c-fb9f-4169-b4eb-3466e08dbd7c-images\") pod \"machine-config-operator-74547568cd-fcb4z\" (UID: \"f382688c-fb9f-4169-b4eb-3466e08dbd7c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fcb4z" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.221874 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/387d7c1a-1589-4377-9566-a45d8f498f38-image-import-ca\") pod \"apiserver-76f77b778f-jjjjp\" (UID: \"387d7c1a-1589-4377-9566-a45d8f498f38\") " pod="openshift-apiserver/apiserver-76f77b778f-jjjjp" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.221892 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzvlb\" (UniqueName: \"kubernetes.io/projected/0d9fa190-3073-4b2b-a348-f9a8e1f994b2-kube-api-access-rzvlb\") pod \"machine-config-controller-84d6567774-2jrk9\" (UID: \"0d9fa190-3073-4b2b-a348-f9a8e1f994b2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2jrk9" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.221939 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/0803a721-b862-4696-a752-e5af589ced0b-signing-cabundle\") pod \"service-ca-9c57cc56f-6jdmv\" (UID: \"0803a721-b862-4696-a752-e5af589ced0b\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-6jdmv" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.221965 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/30317a78-afdc-4c04-95b6-d2c8fedfb790-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zzx6v\" (UID: \"30317a78-afdc-4c04-95b6-d2c8fedfb790\") " pod="openshift-marketplace/marketplace-operator-79b997595-zzx6v" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.222003 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrtm4\" (UniqueName: \"kubernetes.io/projected/88b52253-3840-42b0-aa3c-d8708274dcfa-kube-api-access-xrtm4\") pod \"dns-default-v2nnf\" (UID: \"88b52253-3840-42b0-aa3c-d8708274dcfa\") " pod="openshift-dns/dns-default-v2nnf" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.222067 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1de7ea7e-b219-47b4-9ba9-ef3688eda036-socket-dir\") pod \"csi-hostpathplugin-vrdwc\" (UID: \"1de7ea7e-b219-47b4-9ba9-ef3688eda036\") " pod="hostpath-provisioner/csi-hostpathplugin-vrdwc" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.222086 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh2t4\" (UniqueName: \"kubernetes.io/projected/c173016d-7244-468d-a84f-2ef0e2bf0258-kube-api-access-wh2t4\") pod \"service-ca-operator-777779d784-5w7fh\" (UID: \"c173016d-7244-468d-a84f-2ef0e2bf0258\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5w7fh" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.222115 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/73533561-14fb-4481-872e-1b47096f9d30-installation-pull-secrets\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.222167 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c1be96c2-9c8f-4c6d-8dff-02e0898a963b-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5w8tw\" (UID: \"c1be96c2-9c8f-4c6d-8dff-02e0898a963b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5w8tw" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.222228 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xk7s\" (UniqueName: \"kubernetes.io/projected/30317a78-afdc-4c04-95b6-d2c8fedfb790-kube-api-access-7xk7s\") pod \"marketplace-operator-79b997595-zzx6v\" (UID: \"30317a78-afdc-4c04-95b6-d2c8fedfb790\") " pod="openshift-marketplace/marketplace-operator-79b997595-zzx6v" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.222246 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/0803a721-b862-4696-a752-e5af589ced0b-signing-key\") pod \"service-ca-9c57cc56f-6jdmv\" (UID: \"0803a721-b862-4696-a752-e5af589ced0b\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-6jdmv" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.222275 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f194f7f-c770-471d-8bcd-c2fd613e0b46-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-wvhkk\" (UID: \"9f194f7f-c770-471d-8bcd-c2fd613e0b46\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wvhkk" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.222294 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/387d7c1a-1589-4377-9566-a45d8f498f38-node-pullsecrets\") pod \"apiserver-76f77b778f-jjjjp\" (UID: \"387d7c1a-1589-4377-9566-a45d8f498f38\") " pod="openshift-apiserver/apiserver-76f77b778f-jjjjp" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.222312 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-gmwq7\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.222330 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9df8d6f7-ac70-440a-94a9-e4ee69c104a2-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kj6fk\" (UID: \"9df8d6f7-ac70-440a-94a9-e4ee69c104a2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kj6fk" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.222345 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9df8d6f7-ac70-440a-94a9-e4ee69c104a2-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kj6fk\" (UID: \"9df8d6f7-ac70-440a-94a9-e4ee69c104a2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kj6fk" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.222360 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvllp\" (UniqueName: \"kubernetes.io/projected/535085bd-3682-4133-a5e6-e5f1149e7d24-kube-api-access-mvllp\") pod \"olm-operator-6b444d44fb-wcwwm\" (UID: \"535085bd-3682-4133-a5e6-e5f1149e7d24\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wcwwm" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.222406 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.222425 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c173016d-7244-468d-a84f-2ef0e2bf0258-serving-cert\") pod \"service-ca-operator-777779d784-5w7fh\" 
(UID: \"c173016d-7244-468d-a84f-2ef0e2bf0258\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5w7fh" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.222453 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-gmwq7\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.222503 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/387d7c1a-1589-4377-9566-a45d8f498f38-audit\") pod \"apiserver-76f77b778f-jjjjp\" (UID: \"387d7c1a-1589-4377-9566-a45d8f498f38\") " pod="openshift-apiserver/apiserver-76f77b778f-jjjjp" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.222518 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c47199c0-d608-4ece-913c-b164a4d16f21-srv-cert\") pod \"catalog-operator-68c6474976-qnxwq\" (UID: \"c47199c0-d608-4ece-913c-b164a4d16f21\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qnxwq" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.222545 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/30317a78-afdc-4c04-95b6-d2c8fedfb790-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zzx6v\" (UID: \"30317a78-afdc-4c04-95b6-d2c8fedfb790\") " pod="openshift-marketplace/marketplace-operator-79b997595-zzx6v" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.222560 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c674e914-755f-48a4-97ca-03e1a69a021a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-j9b4b\" (UID: \"c674e914-755f-48a4-97ca-03e1a69a021a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j9b4b" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.222774 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/73533561-14fb-4481-872e-1b47096f9d30-registry-certificates\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.224433 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/73533561-14fb-4481-872e-1b47096f9d30-registry-tls\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.225250 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78341d4a-0228-4056-ae13-9619ea5c4c35-config\") pod \"machine-approver-56656f9798-trtbx\" (UID: \"78341d4a-0228-4056-ae13-9619ea5c4c35\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-trtbx" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.225274 4811 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/387d7c1a-1589-4377-9566-a45d8f498f38-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jjjjp\" (UID: \"387d7c1a-1589-4377-9566-a45d8f498f38\") " pod="openshift-apiserver/apiserver-76f77b778f-jjjjp" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.225958 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/78341d4a-0228-4056-ae13-9619ea5c4c35-auth-proxy-config\") pod \"machine-approver-56656f9798-trtbx\" (UID: \"78341d4a-0228-4056-ae13-9619ea5c4c35\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-trtbx" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.226578 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/73533561-14fb-4481-872e-1b47096f9d30-ca-trust-extracted\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.226920 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-gmwq7\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.226936 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/387d7c1a-1589-4377-9566-a45d8f498f38-audit-dir\") pod \"apiserver-76f77b778f-jjjjp\" (UID: \"387d7c1a-1589-4377-9566-a45d8f498f38\") " pod="openshift-apiserver/apiserver-76f77b778f-jjjjp" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.228379 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5dd00b53-587c-4992-87da-f3b59054b7ff-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-j6dqt\" (UID: \"5dd00b53-587c-4992-87da-f3b59054b7ff\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j6dqt" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.231218 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-gmwq7\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.231367 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/708ddef5-479e-44ef-a189-c41123a73bbe-audit-dir\") pod \"oauth-openshift-558db77b4-gmwq7\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.234755 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/387d7c1a-1589-4377-9566-a45d8f498f38-node-pullsecrets\") pod \"apiserver-76f77b778f-jjjjp\" (UID: 
\"387d7c1a-1589-4377-9566-a45d8f498f38\") " pod="openshift-apiserver/apiserver-76f77b778f-jjjjp" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.243057 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73533561-14fb-4481-872e-1b47096f9d30-trusted-ca\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.234802 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-gmwq7\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.236582 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/387d7c1a-1589-4377-9566-a45d8f498f38-encryption-config\") pod \"apiserver-76f77b778f-jjjjp\" (UID: \"387d7c1a-1589-4377-9566-a45d8f498f38\") " pod="openshift-apiserver/apiserver-76f77b778f-jjjjp" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.242607 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/73533561-14fb-4481-872e-1b47096f9d30-installation-pull-secrets\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.235468 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5dd00b53-587c-4992-87da-f3b59054b7ff-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-j6dqt\" (UID: \"5dd00b53-587c-4992-87da-f3b59054b7ff\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j6dqt" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.243526 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/708ddef5-479e-44ef-a189-c41123a73bbe-audit-policies\") pod \"oauth-openshift-558db77b4-gmwq7\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" Jan 22 09:08:22 crc kubenswrapper[4811]: E0122 09:08:22.243802 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:08:22.743788462 +0000 UTC m=+147.065975575 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nczwv" (UID: "73533561-14fb-4481-872e-1b47096f9d30") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.244561 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-2mphl" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.245714 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-gmwq7\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.248708 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.248857 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-gmwq7\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.249257 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/387d7c1a-1589-4377-9566-a45d8f498f38-etcd-client\") pod \"apiserver-76f77b778f-jjjjp\" (UID: \"387d7c1a-1589-4377-9566-a45d8f498f38\") " pod="openshift-apiserver/apiserver-76f77b778f-jjjjp" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.250455 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-gmwq7\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.251293 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-gmwq7\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.252671 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/387d7c1a-1589-4377-9566-a45d8f498f38-serving-cert\") pod \"apiserver-76f77b778f-jjjjp\" (UID: \"387d7c1a-1589-4377-9566-a45d8f498f38\") " pod="openshift-apiserver/apiserver-76f77b778f-jjjjp" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.255864 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-gmwq7\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.258116 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.258662 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f58eb1d8-bb02-4af7-857c-138518c5bbf2-oauth-serving-cert\") pod \"console-f9d7485db-jhptg\" (UID: \"f58eb1d8-bb02-4af7-857c-138518c5bbf2\") " pod="openshift-console/console-f9d7485db-jhptg" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.259855 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-gmwq7\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.259941 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/78341d4a-0228-4056-ae13-9619ea5c4c35-machine-approver-tls\") pod \"machine-approver-56656f9798-trtbx\" (UID: \"78341d4a-0228-4056-ae13-9619ea5c4c35\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-trtbx" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.270192 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-gmwq7\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.279412 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.287945 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/387d7c1a-1589-4377-9566-a45d8f498f38-etcd-serving-ca\") pod \"apiserver-76f77b778f-jjjjp\" (UID: \"387d7c1a-1589-4377-9566-a45d8f498f38\") " pod="openshift-apiserver/apiserver-76f77b778f-jjjjp" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.299252 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.305782 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/387d7c1a-1589-4377-9566-a45d8f498f38-audit\") pod \"apiserver-76f77b778f-jjjjp\" (UID: \"387d7c1a-1589-4377-9566-a45d8f498f38\") " pod="openshift-apiserver/apiserver-76f77b778f-jjjjp" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.319171 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.323001 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:08:22 crc kubenswrapper[4811]: E0122 09:08:22.323219 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-22 09:08:22.823193376 +0000 UTC m=+147.145380499 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.323267 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1de7ea7e-b219-47b4-9ba9-ef3688eda036-socket-dir\") pod \"csi-hostpathplugin-vrdwc\" (UID: \"1de7ea7e-b219-47b4-9ba9-ef3688eda036\") " pod="hostpath-provisioner/csi-hostpathplugin-vrdwc" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.323301 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh2t4\" (UniqueName: \"kubernetes.io/projected/c173016d-7244-468d-a84f-2ef0e2bf0258-kube-api-access-wh2t4\") pod \"service-ca-operator-777779d784-5w7fh\" (UID: \"c173016d-7244-468d-a84f-2ef0e2bf0258\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5w7fh" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.323325 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c1be96c2-9c8f-4c6d-8dff-02e0898a963b-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5w8tw\" (UID: \"c1be96c2-9c8f-4c6d-8dff-02e0898a963b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5w8tw" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.323344 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xk7s\" (UniqueName: \"kubernetes.io/projected/30317a78-afdc-4c04-95b6-d2c8fedfb790-kube-api-access-7xk7s\") pod \"marketplace-operator-79b997595-zzx6v\" (UID: \"30317a78-afdc-4c04-95b6-d2c8fedfb790\") " pod="openshift-marketplace/marketplace-operator-79b997595-zzx6v" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.323361 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/0803a721-b862-4696-a752-e5af589ced0b-signing-key\") pod \"service-ca-9c57cc56f-6jdmv\" (UID: \"0803a721-b862-4696-a752-e5af589ced0b\") " pod="openshift-service-ca/service-ca-9c57cc56f-6jdmv" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.323387 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f194f7f-c770-471d-8bcd-c2fd613e0b46-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-wvhkk\" (UID: \"9f194f7f-c770-471d-8bcd-c2fd613e0b46\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wvhkk" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.323418 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9df8d6f7-ac70-440a-94a9-e4ee69c104a2-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kj6fk\" (UID: \"9df8d6f7-ac70-440a-94a9-e4ee69c104a2\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kj6fk" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.323435 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvllp\" (UniqueName: \"kubernetes.io/projected/535085bd-3682-4133-a5e6-e5f1149e7d24-kube-api-access-mvllp\") pod \"olm-operator-6b444d44fb-wcwwm\" (UID: \"535085bd-3682-4133-a5e6-e5f1149e7d24\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wcwwm" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.323454 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9df8d6f7-ac70-440a-94a9-e4ee69c104a2-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kj6fk\" (UID: \"9df8d6f7-ac70-440a-94a9-e4ee69c104a2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kj6fk" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.323478 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.323496 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c173016d-7244-468d-a84f-2ef0e2bf0258-serving-cert\") pod \"service-ca-operator-777779d784-5w7fh\" (UID: \"c173016d-7244-468d-a84f-2ef0e2bf0258\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5w7fh" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.323505 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1de7ea7e-b219-47b4-9ba9-ef3688eda036-socket-dir\") pod \"csi-hostpathplugin-vrdwc\" (UID: \"1de7ea7e-b219-47b4-9ba9-ef3688eda036\") " pod="hostpath-provisioner/csi-hostpathplugin-vrdwc" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.323522 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c47199c0-d608-4ece-913c-b164a4d16f21-srv-cert\") pod \"catalog-operator-68c6474976-qnxwq\" (UID: \"c47199c0-d608-4ece-913c-b164a4d16f21\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qnxwq" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.326869 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/30317a78-afdc-4c04-95b6-d2c8fedfb790-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zzx6v\" (UID: \"30317a78-afdc-4c04-95b6-d2c8fedfb790\") " pod="openshift-marketplace/marketplace-operator-79b997595-zzx6v" Jan 22 09:08:22 crc kubenswrapper[4811]: E0122 09:08:22.326884 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:08:22.82686532 +0000 UTC m=+147.149052444 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nczwv" (UID: "73533561-14fb-4481-872e-1b47096f9d30") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.326928 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c674e914-755f-48a4-97ca-03e1a69a021a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-j9b4b\" (UID: \"c674e914-755f-48a4-97ca-03e1a69a021a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j9b4b" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.326965 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/80663d92-2281-4a3d-9232-f1fc19873d88-config-volume\") pod \"collect-profiles-29484540-kbb7d\" (UID: \"80663d92-2281-4a3d-9232-f1fc19873d88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484540-kbb7d" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.327009 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghb2p\" (UniqueName: \"kubernetes.io/projected/f382688c-fb9f-4169-b4eb-3466e08dbd7c-kube-api-access-ghb2p\") pod \"machine-config-operator-74547568cd-fcb4z\" (UID: \"f382688c-fb9f-4169-b4eb-3466e08dbd7c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fcb4z" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.327032 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccxnd\" (UniqueName: \"kubernetes.io/projected/1de7ea7e-b219-47b4-9ba9-ef3688eda036-kube-api-access-ccxnd\") pod \"csi-hostpathplugin-vrdwc\" (UID: \"1de7ea7e-b219-47b4-9ba9-ef3688eda036\") " pod="hostpath-provisioner/csi-hostpathplugin-vrdwc" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.327051 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/2716a2dc-25e2-4a62-8264-41d299b3cd55-default-certificate\") pod \"router-default-5444994796-l8phl\" (UID: \"2716a2dc-25e2-4a62-8264-41d299b3cd55\") " pod="openshift-ingress/router-default-5444994796-l8phl" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.327068 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/30038e4b-2056-44a4-b3f0-1ec1433a7b4d-certs\") pod \"machine-config-server-8fxth\" (UID: \"30038e4b-2056-44a4-b3f0-1ec1433a7b4d\") " pod="openshift-machine-config-operator/machine-config-server-8fxth" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.327109 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk26t\" (UniqueName: \"kubernetes.io/projected/0803a721-b862-4696-a752-e5af589ced0b-kube-api-access-kk26t\") pod \"service-ca-9c57cc56f-6jdmv\" (UID: \"0803a721-b862-4696-a752-e5af589ced0b\") " pod="openshift-service-ca/service-ca-9c57cc56f-6jdmv" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.327146 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: 
\"kubernetes.io/host-path/1de7ea7e-b219-47b4-9ba9-ef3688eda036-plugins-dir\") pod \"csi-hostpathplugin-vrdwc\" (UID: \"1de7ea7e-b219-47b4-9ba9-ef3688eda036\") " pod="hostpath-provisioner/csi-hostpathplugin-vrdwc" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.327179 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0d9fa190-3073-4b2b-a348-f9a8e1f994b2-proxy-tls\") pod \"machine-config-controller-84d6567774-2jrk9\" (UID: \"0d9fa190-3073-4b2b-a348-f9a8e1f994b2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2jrk9" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.327200 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c488ec3a-dfdf-46dd-8f3f-1346232394d3-tmpfs\") pod \"packageserver-d55dfcdfc-lzdbl\" (UID: \"c488ec3a-dfdf-46dd-8f3f-1346232394d3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lzdbl" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.327220 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldr5v\" (UniqueName: \"kubernetes.io/projected/8b7094aa-cc4a-49eb-be77-715a4efbc1d0-kube-api-access-ldr5v\") pod \"control-plane-machine-set-operator-78cbb6b69f-9pxnj\" (UID: \"8b7094aa-cc4a-49eb-be77-715a4efbc1d0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9pxnj" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.327248 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2716a2dc-25e2-4a62-8264-41d299b3cd55-service-ca-bundle\") pod \"router-default-5444994796-l8phl\" (UID: \"2716a2dc-25e2-4a62-8264-41d299b3cd55\") " pod="openshift-ingress/router-default-5444994796-l8phl" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.327265 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/30038e4b-2056-44a4-b3f0-1ec1433a7b4d-node-bootstrap-token\") pod \"machine-config-server-8fxth\" (UID: \"30038e4b-2056-44a4-b3f0-1ec1433a7b4d\") " pod="openshift-machine-config-operator/machine-config-server-8fxth" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.327287 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/1de7ea7e-b219-47b4-9ba9-ef3688eda036-mountpoint-dir\") pod \"csi-hostpathplugin-vrdwc\" (UID: \"1de7ea7e-b219-47b4-9ba9-ef3688eda036\") " pod="hostpath-provisioner/csi-hostpathplugin-vrdwc" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.327322 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e7343a26-0e21-4d61-ad1c-c3c5479a89e1-cert\") pod \"ingress-canary-9624f\" (UID: \"e7343a26-0e21-4d61-ad1c-c3c5479a89e1\") " pod="openshift-ingress-canary/ingress-canary-9624f" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.327341 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/535085bd-3682-4133-a5e6-e5f1149e7d24-profile-collector-cert\") pod \"olm-operator-6b444d44fb-wcwwm\" (UID: \"535085bd-3682-4133-a5e6-e5f1149e7d24\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wcwwm" Jan 22 09:08:22 crc 
kubenswrapper[4811]: I0122 09:08:22.327363 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f382688c-fb9f-4169-b4eb-3466e08dbd7c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-fcb4z\" (UID: \"f382688c-fb9f-4169-b4eb-3466e08dbd7c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fcb4z" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.327379 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9df8d6f7-ac70-440a-94a9-e4ee69c104a2-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kj6fk\" (UID: \"9df8d6f7-ac70-440a-94a9-e4ee69c104a2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kj6fk" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.327399 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhx9d\" (UniqueName: \"kubernetes.io/projected/e7343a26-0e21-4d61-ad1c-c3c5479a89e1-kube-api-access-nhx9d\") pod \"ingress-canary-9624f\" (UID: \"e7343a26-0e21-4d61-ad1c-c3c5479a89e1\") " pod="openshift-ingress-canary/ingress-canary-9624f" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.327427 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mths7\" (UniqueName: \"kubernetes.io/projected/2716a2dc-25e2-4a62-8264-41d299b3cd55-kube-api-access-mths7\") pod \"router-default-5444994796-l8phl\" (UID: \"2716a2dc-25e2-4a62-8264-41d299b3cd55\") " pod="openshift-ingress/router-default-5444994796-l8phl" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.327445 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c488ec3a-dfdf-46dd-8f3f-1346232394d3-webhook-cert\") pod \"packageserver-d55dfcdfc-lzdbl\" (UID: \"c488ec3a-dfdf-46dd-8f3f-1346232394d3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lzdbl" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.327460 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/88b52253-3840-42b0-aa3c-d8708274dcfa-config-volume\") pod \"dns-default-v2nnf\" (UID: \"88b52253-3840-42b0-aa3c-d8708274dcfa\") " pod="openshift-dns/dns-default-v2nnf" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.327487 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cngll\" (UniqueName: \"kubernetes.io/projected/80663d92-2281-4a3d-9232-f1fc19873d88-kube-api-access-cngll\") pod \"collect-profiles-29484540-kbb7d\" (UID: \"80663d92-2281-4a3d-9232-f1fc19873d88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484540-kbb7d" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.327503 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvvdc\" (UniqueName: \"kubernetes.io/projected/c488ec3a-dfdf-46dd-8f3f-1346232394d3-kube-api-access-qvvdc\") pod \"packageserver-d55dfcdfc-lzdbl\" (UID: \"c488ec3a-dfdf-46dd-8f3f-1346232394d3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lzdbl" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.327520 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/cb22b2ae-6c13-482b-b827-5200e2be87ca-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-78zjg\" (UID: \"cb22b2ae-6c13-482b-b827-5200e2be87ca\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-78zjg" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.327537 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c47199c0-d608-4ece-913c-b164a4d16f21-profile-collector-cert\") pod \"catalog-operator-68c6474976-qnxwq\" (UID: \"c47199c0-d608-4ece-913c-b164a4d16f21\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qnxwq" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.327551 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/88b52253-3840-42b0-aa3c-d8708274dcfa-metrics-tls\") pod \"dns-default-v2nnf\" (UID: \"88b52253-3840-42b0-aa3c-d8708274dcfa\") " pod="openshift-dns/dns-default-v2nnf" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.327567 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1de7ea7e-b219-47b4-9ba9-ef3688eda036-registration-dir\") pod \"csi-hostpathplugin-vrdwc\" (UID: \"1de7ea7e-b219-47b4-9ba9-ef3688eda036\") " pod="hostpath-provisioner/csi-hostpathplugin-vrdwc" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.327618 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c488ec3a-dfdf-46dd-8f3f-1346232394d3-apiservice-cert\") pod \"packageserver-d55dfcdfc-lzdbl\" (UID: \"c488ec3a-dfdf-46dd-8f3f-1346232394d3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lzdbl" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.327657 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c674e914-755f-48a4-97ca-03e1a69a021a-config\") pod \"kube-apiserver-operator-766d6c64bb-j9b4b\" (UID: \"c674e914-755f-48a4-97ca-03e1a69a021a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j9b4b" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.327685 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c674e914-755f-48a4-97ca-03e1a69a021a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-j9b4b\" (UID: \"c674e914-755f-48a4-97ca-03e1a69a021a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j9b4b" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.327701 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9zm7\" (UniqueName: \"kubernetes.io/projected/c1be96c2-9c8f-4c6d-8dff-02e0898a963b-kube-api-access-b9zm7\") pod \"package-server-manager-789f6589d5-5w8tw\" (UID: \"c1be96c2-9c8f-4c6d-8dff-02e0898a963b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5w8tw" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.327719 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8b7094aa-cc4a-49eb-be77-715a4efbc1d0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-9pxnj\" (UID: 
\"8b7094aa-cc4a-49eb-be77-715a4efbc1d0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9pxnj" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.327758 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lklt\" (UniqueName: \"kubernetes.io/projected/cb22b2ae-6c13-482b-b827-5200e2be87ca-kube-api-access-2lklt\") pod \"multus-admission-controller-857f4d67dd-78zjg\" (UID: \"cb22b2ae-6c13-482b-b827-5200e2be87ca\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-78zjg" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.327786 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f194f7f-c770-471d-8bcd-c2fd613e0b46-config\") pod \"kube-controller-manager-operator-78b949d7b-wvhkk\" (UID: \"9f194f7f-c770-471d-8bcd-c2fd613e0b46\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wvhkk" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.327804 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f382688c-fb9f-4169-b4eb-3466e08dbd7c-proxy-tls\") pod \"machine-config-operator-74547568cd-fcb4z\" (UID: \"f382688c-fb9f-4169-b4eb-3466e08dbd7c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fcb4z" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.327826 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/2716a2dc-25e2-4a62-8264-41d299b3cd55-stats-auth\") pod \"router-default-5444994796-l8phl\" (UID: \"2716a2dc-25e2-4a62-8264-41d299b3cd55\") " pod="openshift-ingress/router-default-5444994796-l8phl" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.327844 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l99th\" (UniqueName: \"kubernetes.io/projected/c47199c0-d608-4ece-913c-b164a4d16f21-kube-api-access-l99th\") pod \"catalog-operator-68c6474976-qnxwq\" (UID: \"c47199c0-d608-4ece-913c-b164a4d16f21\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qnxwq" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.327876 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4b5g\" (UniqueName: \"kubernetes.io/projected/42f53a84-4118-4088-9cb5-39a05774839a-kube-api-access-t4b5g\") pod \"migrator-59844c95c7-zwkzn\" (UID: \"42f53a84-4118-4088-9cb5-39a05774839a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zwkzn" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.327902 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkbdj\" (UniqueName: \"kubernetes.io/projected/30038e4b-2056-44a4-b3f0-1ec1433a7b4d-kube-api-access-dkbdj\") pod \"machine-config-server-8fxth\" (UID: \"30038e4b-2056-44a4-b3f0-1ec1433a7b4d\") " pod="openshift-machine-config-operator/machine-config-server-8fxth" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.327918 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/535085bd-3682-4133-a5e6-e5f1149e7d24-srv-cert\") pod \"olm-operator-6b444d44fb-wcwwm\" (UID: \"535085bd-3682-4133-a5e6-e5f1149e7d24\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wcwwm" Jan 22 09:08:22 crc 
kubenswrapper[4811]: I0122 09:08:22.327945 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/1de7ea7e-b219-47b4-9ba9-ef3688eda036-csi-data-dir\") pod \"csi-hostpathplugin-vrdwc\" (UID: \"1de7ea7e-b219-47b4-9ba9-ef3688eda036\") " pod="hostpath-provisioner/csi-hostpathplugin-vrdwc" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.327960 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2716a2dc-25e2-4a62-8264-41d299b3cd55-metrics-certs\") pod \"router-default-5444994796-l8phl\" (UID: \"2716a2dc-25e2-4a62-8264-41d299b3cd55\") " pod="openshift-ingress/router-default-5444994796-l8phl" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.327975 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c173016d-7244-468d-a84f-2ef0e2bf0258-config\") pod \"service-ca-operator-777779d784-5w7fh\" (UID: \"c173016d-7244-468d-a84f-2ef0e2bf0258\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5w7fh" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.328000 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0d9fa190-3073-4b2b-a348-f9a8e1f994b2-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-2jrk9\" (UID: \"0d9fa190-3073-4b2b-a348-f9a8e1f994b2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2jrk9" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.328025 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9f194f7f-c770-471d-8bcd-c2fd613e0b46-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-wvhkk\" (UID: \"9f194f7f-c770-471d-8bcd-c2fd613e0b46\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wvhkk" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.328051 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/80663d92-2281-4a3d-9232-f1fc19873d88-secret-volume\") pod \"collect-profiles-29484540-kbb7d\" (UID: \"80663d92-2281-4a3d-9232-f1fc19873d88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484540-kbb7d" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.328067 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f382688c-fb9f-4169-b4eb-3466e08dbd7c-images\") pod \"machine-config-operator-74547568cd-fcb4z\" (UID: \"f382688c-fb9f-4169-b4eb-3466e08dbd7c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fcb4z" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.328084 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzvlb\" (UniqueName: \"kubernetes.io/projected/0d9fa190-3073-4b2b-a348-f9a8e1f994b2-kube-api-access-rzvlb\") pod \"machine-config-controller-84d6567774-2jrk9\" (UID: \"0d9fa190-3073-4b2b-a348-f9a8e1f994b2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2jrk9" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.328107 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" 
(UniqueName: \"kubernetes.io/configmap/0803a721-b862-4696-a752-e5af589ced0b-signing-cabundle\") pod \"service-ca-9c57cc56f-6jdmv\" (UID: \"0803a721-b862-4696-a752-e5af589ced0b\") " pod="openshift-service-ca/service-ca-9c57cc56f-6jdmv" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.328138 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/30317a78-afdc-4c04-95b6-d2c8fedfb790-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zzx6v\" (UID: \"30317a78-afdc-4c04-95b6-d2c8fedfb790\") " pod="openshift-marketplace/marketplace-operator-79b997595-zzx6v" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.328163 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrtm4\" (UniqueName: \"kubernetes.io/projected/88b52253-3840-42b0-aa3c-d8708274dcfa-kube-api-access-xrtm4\") pod \"dns-default-v2nnf\" (UID: \"88b52253-3840-42b0-aa3c-d8708274dcfa\") " pod="openshift-dns/dns-default-v2nnf" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.328240 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/30317a78-afdc-4c04-95b6-d2c8fedfb790-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zzx6v\" (UID: \"30317a78-afdc-4c04-95b6-d2c8fedfb790\") " pod="openshift-marketplace/marketplace-operator-79b997595-zzx6v" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.328497 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f58eb1d8-bb02-4af7-857c-138518c5bbf2-console-config\") pod \"console-f9d7485db-jhptg\" (UID: \"f58eb1d8-bb02-4af7-857c-138518c5bbf2\") " pod="openshift-console/console-f9d7485db-jhptg" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.330548 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c488ec3a-dfdf-46dd-8f3f-1346232394d3-tmpfs\") pod \"packageserver-d55dfcdfc-lzdbl\" (UID: \"c488ec3a-dfdf-46dd-8f3f-1346232394d3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lzdbl" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.331208 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/80663d92-2281-4a3d-9232-f1fc19873d88-config-volume\") pod \"collect-profiles-29484540-kbb7d\" (UID: \"80663d92-2281-4a3d-9232-f1fc19873d88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484540-kbb7d" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.331637 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2716a2dc-25e2-4a62-8264-41d299b3cd55-service-ca-bundle\") pod \"router-default-5444994796-l8phl\" (UID: \"2716a2dc-25e2-4a62-8264-41d299b3cd55\") " pod="openshift-ingress/router-default-5444994796-l8phl" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.332273 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/1de7ea7e-b219-47b4-9ba9-ef3688eda036-csi-data-dir\") pod \"csi-hostpathplugin-vrdwc\" (UID: \"1de7ea7e-b219-47b4-9ba9-ef3688eda036\") " pod="hostpath-provisioner/csi-hostpathplugin-vrdwc" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.333000 4811 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/1de7ea7e-b219-47b4-9ba9-ef3688eda036-plugins-dir\") pod \"csi-hostpathplugin-vrdwc\" (UID: \"1de7ea7e-b219-47b4-9ba9-ef3688eda036\") " pod="hostpath-provisioner/csi-hostpathplugin-vrdwc" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.333147 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c173016d-7244-468d-a84f-2ef0e2bf0258-config\") pod \"service-ca-operator-777779d784-5w7fh\" (UID: \"c173016d-7244-468d-a84f-2ef0e2bf0258\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5w7fh" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.333491 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c173016d-7244-468d-a84f-2ef0e2bf0258-serving-cert\") pod \"service-ca-operator-777779d784-5w7fh\" (UID: \"c173016d-7244-468d-a84f-2ef0e2bf0258\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5w7fh" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.340466 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/2716a2dc-25e2-4a62-8264-41d299b3cd55-default-certificate\") pod \"router-default-5444994796-l8phl\" (UID: \"2716a2dc-25e2-4a62-8264-41d299b3cd55\") " pod="openshift-ingress/router-default-5444994796-l8phl" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.340832 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0d9fa190-3073-4b2b-a348-f9a8e1f994b2-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-2jrk9\" (UID: \"0d9fa190-3073-4b2b-a348-f9a8e1f994b2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2jrk9" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.341382 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c1be96c2-9c8f-4c6d-8dff-02e0898a963b-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5w8tw\" (UID: \"c1be96c2-9c8f-4c6d-8dff-02e0898a963b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5w8tw" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.341726 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f194f7f-c770-471d-8bcd-c2fd613e0b46-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-wvhkk\" (UID: \"9f194f7f-c770-471d-8bcd-c2fd613e0b46\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wvhkk" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.342094 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2716a2dc-25e2-4a62-8264-41d299b3cd55-metrics-certs\") pod \"router-default-5444994796-l8phl\" (UID: \"2716a2dc-25e2-4a62-8264-41d299b3cd55\") " pod="openshift-ingress/router-default-5444994796-l8phl" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.343385 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9df8d6f7-ac70-440a-94a9-e4ee69c104a2-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kj6fk\" (UID: 
\"9df8d6f7-ac70-440a-94a9-e4ee69c104a2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kj6fk" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.343844 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f382688c-fb9f-4169-b4eb-3466e08dbd7c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-fcb4z\" (UID: \"f382688c-fb9f-4169-b4eb-3466e08dbd7c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fcb4z" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.344799 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/1de7ea7e-b219-47b4-9ba9-ef3688eda036-mountpoint-dir\") pod \"csi-hostpathplugin-vrdwc\" (UID: \"1de7ea7e-b219-47b4-9ba9-ef3688eda036\") " pod="hostpath-provisioner/csi-hostpathplugin-vrdwc" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.345089 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e7343a26-0e21-4d61-ad1c-c3c5479a89e1-cert\") pod \"ingress-canary-9624f\" (UID: \"e7343a26-0e21-4d61-ad1c-c3c5479a89e1\") " pod="openshift-ingress-canary/ingress-canary-9624f" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.345349 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c47199c0-d608-4ece-913c-b164a4d16f21-srv-cert\") pod \"catalog-operator-68c6474976-qnxwq\" (UID: \"c47199c0-d608-4ece-913c-b164a4d16f21\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qnxwq" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.345529 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0d9fa190-3073-4b2b-a348-f9a8e1f994b2-proxy-tls\") pod \"machine-config-controller-84d6567774-2jrk9\" (UID: \"0d9fa190-3073-4b2b-a348-f9a8e1f994b2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2jrk9" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.345955 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cb22b2ae-6c13-482b-b827-5200e2be87ca-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-78zjg\" (UID: \"cb22b2ae-6c13-482b-b827-5200e2be87ca\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-78zjg" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.346019 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/535085bd-3682-4133-a5e6-e5f1149e7d24-srv-cert\") pod \"olm-operator-6b444d44fb-wcwwm\" (UID: \"535085bd-3682-4133-a5e6-e5f1149e7d24\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wcwwm" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.346332 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9df8d6f7-ac70-440a-94a9-e4ee69c104a2-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kj6fk\" (UID: \"9df8d6f7-ac70-440a-94a9-e4ee69c104a2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kj6fk" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.346383 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/0803a721-b862-4696-a752-e5af589ced0b-signing-key\") pod \"service-ca-9c57cc56f-6jdmv\" (UID: \"0803a721-b862-4696-a752-e5af589ced0b\") " pod="openshift-service-ca/service-ca-9c57cc56f-6jdmv" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.346970 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/88b52253-3840-42b0-aa3c-d8708274dcfa-config-volume\") pod \"dns-default-v2nnf\" (UID: \"88b52253-3840-42b0-aa3c-d8708274dcfa\") " pod="openshift-dns/dns-default-v2nnf" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.348275 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c488ec3a-dfdf-46dd-8f3f-1346232394d3-webhook-cert\") pod \"packageserver-d55dfcdfc-lzdbl\" (UID: \"c488ec3a-dfdf-46dd-8f3f-1346232394d3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lzdbl" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.349163 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/535085bd-3682-4133-a5e6-e5f1149e7d24-profile-collector-cert\") pod \"olm-operator-6b444d44fb-wcwwm\" (UID: \"535085bd-3682-4133-a5e6-e5f1149e7d24\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wcwwm" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.349229 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8b7094aa-cc4a-49eb-be77-715a4efbc1d0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-9pxnj\" (UID: \"8b7094aa-cc4a-49eb-be77-715a4efbc1d0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9pxnj" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.349296 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1de7ea7e-b219-47b4-9ba9-ef3688eda036-registration-dir\") pod \"csi-hostpathplugin-vrdwc\" (UID: \"1de7ea7e-b219-47b4-9ba9-ef3688eda036\") " pod="hostpath-provisioner/csi-hostpathplugin-vrdwc" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.349682 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/30038e4b-2056-44a4-b3f0-1ec1433a7b4d-certs\") pod \"machine-config-server-8fxth\" (UID: \"30038e4b-2056-44a4-b3f0-1ec1433a7b4d\") " pod="openshift-machine-config-operator/machine-config-server-8fxth" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.350023 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/30038e4b-2056-44a4-b3f0-1ec1433a7b4d-node-bootstrap-token\") pod \"machine-config-server-8fxth\" (UID: \"30038e4b-2056-44a4-b3f0-1ec1433a7b4d\") " pod="openshift-machine-config-operator/machine-config-server-8fxth" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.350640 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f194f7f-c770-471d-8bcd-c2fd613e0b46-config\") pod \"kube-controller-manager-operator-78b949d7b-wvhkk\" (UID: \"9f194f7f-c770-471d-8bcd-c2fd613e0b46\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wvhkk" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 
09:08:22.351069 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f382688c-fb9f-4169-b4eb-3466e08dbd7c-images\") pod \"machine-config-operator-74547568cd-fcb4z\" (UID: \"f382688c-fb9f-4169-b4eb-3466e08dbd7c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fcb4z" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.351571 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/0803a721-b862-4696-a752-e5af589ced0b-signing-cabundle\") pod \"service-ca-9c57cc56f-6jdmv\" (UID: \"0803a721-b862-4696-a752-e5af589ced0b\") " pod="openshift-service-ca/service-ca-9c57cc56f-6jdmv" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.352782 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/80663d92-2281-4a3d-9232-f1fc19873d88-secret-volume\") pod \"collect-profiles-29484540-kbb7d\" (UID: \"80663d92-2281-4a3d-9232-f1fc19873d88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484540-kbb7d" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.352888 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c674e914-755f-48a4-97ca-03e1a69a021a-config\") pod \"kube-apiserver-operator-766d6c64bb-j9b4b\" (UID: \"c674e914-755f-48a4-97ca-03e1a69a021a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j9b4b" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.353184 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/88b52253-3840-42b0-aa3c-d8708274dcfa-metrics-tls\") pod \"dns-default-v2nnf\" (UID: \"88b52253-3840-42b0-aa3c-d8708274dcfa\") " pod="openshift-dns/dns-default-v2nnf" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.353769 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f382688c-fb9f-4169-b4eb-3466e08dbd7c-proxy-tls\") pod \"machine-config-operator-74547568cd-fcb4z\" (UID: \"f382688c-fb9f-4169-b4eb-3466e08dbd7c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fcb4z" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.354786 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/2716a2dc-25e2-4a62-8264-41d299b3cd55-stats-auth\") pod \"router-default-5444994796-l8phl\" (UID: \"2716a2dc-25e2-4a62-8264-41d299b3cd55\") " pod="openshift-ingress/router-default-5444994796-l8phl" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.355382 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.356843 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c47199c0-d608-4ece-913c-b164a4d16f21-profile-collector-cert\") pod \"catalog-operator-68c6474976-qnxwq\" (UID: \"c47199c0-d608-4ece-913c-b164a4d16f21\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qnxwq" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.361461 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/c488ec3a-dfdf-46dd-8f3f-1346232394d3-apiservice-cert\") pod \"packageserver-d55dfcdfc-lzdbl\" (UID: \"c488ec3a-dfdf-46dd-8f3f-1346232394d3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lzdbl" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.363029 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c674e914-755f-48a4-97ca-03e1a69a021a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-j9b4b\" (UID: \"c674e914-755f-48a4-97ca-03e1a69a021a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j9b4b" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.363891 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/30317a78-afdc-4c04-95b6-d2c8fedfb790-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zzx6v\" (UID: \"30317a78-afdc-4c04-95b6-d2c8fedfb790\") " pod="openshift-marketplace/marketplace-operator-79b997595-zzx6v" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.373041 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.384584 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f58eb1d8-bb02-4af7-857c-138518c5bbf2-trusted-ca-bundle\") pod \"console-f9d7485db-jhptg\" (UID: \"f58eb1d8-bb02-4af7-857c-138518c5bbf2\") " pod="openshift-console/console-f9d7485db-jhptg" Jan 22 09:08:22 crc kubenswrapper[4811]: E0122 09:08:22.384669 4811 secret.go:188] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: failed to sync secret cache: timed out waiting for the condition Jan 22 09:08:22 crc kubenswrapper[4811]: E0122 09:08:22.384719 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d71c966-5dcc-4f11-b21e-8c60ba5b7b57-samples-operator-tls podName:0d71c966-5dcc-4f11-b21e-8c60ba5b7b57 nodeName:}" failed. No retries permitted until 2026-01-22 09:08:23.384704447 +0000 UTC m=+147.706891569 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/0d71c966-5dcc-4f11-b21e-8c60ba5b7b57-samples-operator-tls") pod "cluster-samples-operator-665b6dd947-6l9qr" (UID: "0d71c966-5dcc-4f11-b21e-8c60ba5b7b57") : failed to sync secret cache: timed out waiting for the condition Jan 22 09:08:22 crc kubenswrapper[4811]: E0122 09:08:22.386784 4811 configmap.go:193] Couldn't get configMap openshift-console/service-ca: failed to sync configmap cache: timed out waiting for the condition Jan 22 09:08:22 crc kubenswrapper[4811]: E0122 09:08:22.386851 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f58eb1d8-bb02-4af7-857c-138518c5bbf2-service-ca podName:f58eb1d8-bb02-4af7-857c-138518c5bbf2 nodeName:}" failed. No retries permitted until 2026-01-22 09:08:23.386837579 +0000 UTC m=+147.709024702 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/f58eb1d8-bb02-4af7-857c-138518c5bbf2-service-ca") pod "console-f9d7485db-jhptg" (UID: "f58eb1d8-bb02-4af7-857c-138518c5bbf2") : failed to sync configmap cache: timed out waiting for the condition Jan 22 09:08:22 crc kubenswrapper[4811]: E0122 09:08:22.387003 4811 secret.go:188] Couldn't get secret openshift-console/console-oauth-config: failed to sync secret cache: timed out waiting for the condition Jan 22 09:08:22 crc kubenswrapper[4811]: E0122 09:08:22.387133 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f58eb1d8-bb02-4af7-857c-138518c5bbf2-console-oauth-config podName:f58eb1d8-bb02-4af7-857c-138518c5bbf2 nodeName:}" failed. No retries permitted until 2026-01-22 09:08:23.387120903 +0000 UTC m=+147.709308027 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/f58eb1d8-bb02-4af7-857c-138518c5bbf2-console-oauth-config") pod "console-f9d7485db-jhptg" (UID: "f58eb1d8-bb02-4af7-857c-138518c5bbf2") : failed to sync secret cache: timed out waiting for the condition Jan 22 09:08:22 crc kubenswrapper[4811]: E0122 09:08:22.387166 4811 secret.go:188] Couldn't get secret openshift-machine-api/machine-api-operator-tls: failed to sync secret cache: timed out waiting for the condition Jan 22 09:08:22 crc kubenswrapper[4811]: E0122 09:08:22.390386 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a9d91fa-d887-4128-af43-cfe3cad79784-machine-api-operator-tls podName:8a9d91fa-d887-4128-af43-cfe3cad79784 nodeName:}" failed. No retries permitted until 2026-01-22 09:08:23.390180342 +0000 UTC m=+147.712367466 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/8a9d91fa-d887-4128-af43-cfe3cad79784-machine-api-operator-tls") pod "machine-api-operator-5694c8668f-rx42r" (UID: "8a9d91fa-d887-4128-af43-cfe3cad79784") : failed to sync secret cache: timed out waiting for the condition Jan 22 09:08:22 crc kubenswrapper[4811]: E0122 09:08:22.390575 4811 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition Jan 22 09:08:22 crc kubenswrapper[4811]: E0122 09:08:22.390732 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8a9d91fa-d887-4128-af43-cfe3cad79784-images podName:8a9d91fa-d887-4128-af43-cfe3cad79784 nodeName:}" failed. No retries permitted until 2026-01-22 09:08:23.390604994 +0000 UTC m=+147.712792117 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/8a9d91fa-d887-4128-af43-cfe3cad79784-images") pod "machine-api-operator-5694c8668f-rx42r" (UID: "8a9d91fa-d887-4128-af43-cfe3cad79784") : failed to sync configmap cache: timed out waiting for the condition Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.397100 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.400054 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.409869 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/387d7c1a-1589-4377-9566-a45d8f498f38-config\") pod \"apiserver-76f77b778f-jjjjp\" (UID: \"387d7c1a-1589-4377-9566-a45d8f498f38\") " pod="openshift-apiserver/apiserver-76f77b778f-jjjjp" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.423022 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.435804 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:08:22 crc kubenswrapper[4811]: E0122 09:08:22.436731 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:08:22.93668902 +0000 UTC m=+147.258876143 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.438298 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.453050 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn9mc\" (UniqueName: \"kubernetes.io/projected/117d7039-2cd9-4ee9-9272-923cd05c3565-kube-api-access-bn9mc\") pod \"downloads-7954f5f757-hs4pm\" (UID: \"117d7039-2cd9-4ee9-9272-923cd05c3565\") " pod="openshift-console/downloads-7954f5f757-hs4pm" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.457097 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p795n\" (UniqueName: \"kubernetes.io/projected/f58eb1d8-bb02-4af7-857c-138518c5bbf2-kube-api-access-p795n\") pod \"console-f9d7485db-jhptg\" (UID: \"f58eb1d8-bb02-4af7-857c-138518c5bbf2\") " pod="openshift-console/console-f9d7485db-jhptg" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.460791 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.466001 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzrjc\" (UniqueName: \"kubernetes.io/projected/8a9d91fa-d887-4128-af43-cfe3cad79784-kube-api-access-rzrjc\") pod \"machine-api-operator-5694c8668f-rx42r\" (UID: \"8a9d91fa-d887-4128-af43-cfe3cad79784\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rx42r" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.474318 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-md7dt"] Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.477616 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.497514 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.516941 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-2mphl"] Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.520518 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.531369 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2hm84" event={"ID":"c9b6feb2-2c7a-41ea-b160-f461f05057d4","Type":"ContainerStarted","Data":"70d7103446228c94edfa69506d26b1737ed3e0b3f1aa88e2f6dd1b203d0b6424"} Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.531407 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2hm84" 
event={"ID":"c9b6feb2-2c7a-41ea-b160-f461f05057d4","Type":"ContainerStarted","Data":"393e1667c82d1cbc8850b76cfaa05f88be4605107ae74dacfd1132c90c9c46ce"} Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.532745 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-sxpvs" event={"ID":"189dea5f-ae62-4140-bd03-14a548c51684","Type":"ContainerStarted","Data":"ca23c6fbf3e90c01d3a68ae47bdb5751a475ae14bffe6e4012a92455a7fc886b"} Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.532773 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-sxpvs" event={"ID":"189dea5f-ae62-4140-bd03-14a548c51684","Type":"ContainerStarted","Data":"97261d3cdee859ada194871f7f8ca829b4643b800f36c0b5df54383f69bcd4a3"} Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.538766 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.542137 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:22 crc kubenswrapper[4811]: E0122 09:08:22.542926 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:08:23.042911949 +0000 UTC m=+147.365099072 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nczwv" (UID: "73533561-14fb-4481-872e-1b47096f9d30") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.547111 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nljqw"] Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.549234 4811 generic.go:334] "Generic (PLEG): container finished" podID="13db87c4-7297-4265-879d-07ad09539aba" containerID="020147a121027e3de9e814ba9a2d598a842e38c9878a1e95f4800121df98bc4a" exitCode=0 Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.549318 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bssrl" event={"ID":"13db87c4-7297-4265-879d-07ad09539aba","Type":"ContainerDied","Data":"020147a121027e3de9e814ba9a2d598a842e38c9878a1e95f4800121df98bc4a"} Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.549355 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bssrl" event={"ID":"13db87c4-7297-4265-879d-07ad09539aba","Type":"ContainerStarted","Data":"6d7eade41fe3f3b1fc66d2c51be14e29a0cdae63b05bd3852287c31c3cb95662"} Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.554808 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-776jv"] Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.557685 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vx8k5" event={"ID":"799c1556-92e6-43b8-a620-c7211a2ce813","Type":"ContainerStarted","Data":"4947fc0e89be1195278de1d0c7f0b09d917ac00a8bab7a0e811d5ac66f0e683f"} Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.557715 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vx8k5" event={"ID":"799c1556-92e6-43b8-a620-c7211a2ce813","Type":"ContainerStarted","Data":"56e1eda8491da29653d78995ad484dbd526369bccf6f6d2c7947e9726b928d2b"} Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.558328 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.558336 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-vx8k5" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.561675 4811 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-vx8k5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.561773 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-vx8k5" podUID="799c1556-92e6-43b8-a620-c7211a2ce813" containerName="controller-manager" probeResult="failure" output="Get 
\"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.563801 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-knkwh"] Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.565612 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"f904f2c4766f3a1a13274219fb917a91b2c1948469953f75e3e1df2ca8bb9c3a"} Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.568878 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8fflv" event={"ID":"7285d1e4-79ce-4ada-b15f-b1df68271703","Type":"ContainerStarted","Data":"fca687aca6a02eaa20a516a56fbddd1fb23bfc706e76773d5f14be3896f458dc"} Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.568924 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8fflv" event={"ID":"7285d1e4-79ce-4ada-b15f-b1df68271703","Type":"ContainerStarted","Data":"b327810d260818d30ec0ee97db39adfb3a5dc570837740aa00dc841fb3d6ab58"} Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.569044 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8fflv" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.570287 4811 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-8fflv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.570331 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8fflv" podUID="7285d1e4-79ce-4ada-b15f-b1df68271703" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.577055 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4ftvq" event={"ID":"ed39dbae-8bd1-43b9-aec3-4fb84807d65d","Type":"ContainerStarted","Data":"9c5d9ac263a5608fcc748f65e3f92581627ad5f9e60f684f9a06be289f8ee9a2"} Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.577078 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4ftvq" event={"ID":"ed39dbae-8bd1-43b9-aec3-4fb84807d65d","Type":"ContainerStarted","Data":"998d61a00509c48b2e74c24cd20d94fbb3f3376d69d3c23e132400e8383c0374"} Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.577094 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4ftvq" event={"ID":"ed39dbae-8bd1-43b9-aec3-4fb84807d65d","Type":"ContainerStarted","Data":"3e9945d6f45c6ff17d6ea238c6adb32ac3e096f9615de03c140e6bfa68c56e97"} Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.577275 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 
22 09:08:22 crc kubenswrapper[4811]: W0122 09:08:22.581781 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-f704868bfb8c632df9e7a7c5d76d7184b978211c90ce1d1d519e77e473830bab WatchSource:0}: Error finding container f704868bfb8c632df9e7a7c5d76d7184b978211c90ce1d1d519e77e473830bab: Status 404 returned error can't find the container with id f704868bfb8c632df9e7a7c5d76d7184b978211c90ce1d1d519e77e473830bab Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.583085 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4wjd4" event={"ID":"c0eff054-47bc-4d81-acb9-06ef98b170fa","Type":"ContainerStarted","Data":"8fe2542632d96beb56e81ed4c39e7920e3bb9ba507cd355e8d19387364f739ec"} Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.583117 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4wjd4" event={"ID":"c0eff054-47bc-4d81-acb9-06ef98b170fa","Type":"ContainerStarted","Data":"f7f798f6b717b5964fa0214bc18eb95edc8b10995db1f86ab2f2806e664f9234"} Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.584492 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-md7dt" event={"ID":"ab671388-f736-4a76-a421-ba3413830807","Type":"ContainerStarted","Data":"30c017ed67de6c2805e788e45e5b3bee73021c47574c1eff2028d29e2ba824a7"} Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.598146 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.617614 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.643116 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:08:22 crc kubenswrapper[4811]: E0122 09:08:22.644406 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:08:23.144381885 +0000 UTC m=+147.466569008 (durationBeforeRetry 500ms). 
Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.648076 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.660469 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.665878 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hlwc\" (UniqueName: \"kubernetes.io/projected/0d71c966-5dcc-4f11-b21e-8c60ba5b7b57-kube-api-access-4hlwc\") pod \"cluster-samples-operator-665b6dd947-6l9qr\" (UID: \"0d71c966-5dcc-4f11-b21e-8c60ba5b7b57\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6l9qr"
Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.666128 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-gmwq7\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7"
Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.678841 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.687191 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/387d7c1a-1589-4377-9566-a45d8f498f38-image-import-ca\") pod \"apiserver-76f77b778f-jjjjp\" (UID: \"387d7c1a-1589-4377-9566-a45d8f498f38\") " pod="openshift-apiserver/apiserver-76f77b778f-jjjjp"
Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.697574 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.708353 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-hs4pm"
Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.744711 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv"
Jan 22 09:08:22 crc kubenswrapper[4811]: E0122 09:08:22.745106 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:08:23.245089814 +0000 UTC m=+147.567276937 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nczwv" (UID: "73533561-14fb-4481-872e-1b47096f9d30") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.756401 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chlnh\" (UniqueName: \"kubernetes.io/projected/78341d4a-0228-4056-ae13-9619ea5c4c35-kube-api-access-chlnh\") pod \"machine-approver-56656f9798-trtbx\" (UID: \"78341d4a-0228-4056-ae13-9619ea5c4c35\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-trtbx"
Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.770719 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlj6w\" (UniqueName: \"kubernetes.io/projected/73533561-14fb-4481-872e-1b47096f9d30-kube-api-access-vlj6w\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv"
Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.803142 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wz9vd\" (UniqueName: \"kubernetes.io/projected/5dd00b53-587c-4992-87da-f3b59054b7ff-kube-api-access-wz9vd\") pod \"openshift-controller-manager-operator-756b6f6bc6-j6dqt\" (UID: \"5dd00b53-587c-4992-87da-f3b59054b7ff\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j6dqt"
Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.819066 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqs9m\" (UniqueName: \"kubernetes.io/projected/387d7c1a-1589-4377-9566-a45d8f498f38-kube-api-access-xqs9m\") pod \"apiserver-76f77b778f-jjjjp\" (UID: \"387d7c1a-1589-4377-9566-a45d8f498f38\") " pod="openshift-apiserver/apiserver-76f77b778f-jjjjp"
Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.831721 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/73533561-14fb-4481-872e-1b47096f9d30-bound-sa-token\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv"
Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.847531 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 09:08:22 crc kubenswrapper[4811]: E0122 09:08:22.847941 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:08:23.347921538 +0000 UTC m=+147.670108661 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.848041 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv"
Jan 22 09:08:22 crc kubenswrapper[4811]: E0122 09:08:22.848443 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:08:23.348435969 +0000 UTC m=+147.670623091 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nczwv" (UID: "73533561-14fb-4481-872e-1b47096f9d30") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.853994 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7z6s\" (UniqueName: \"kubernetes.io/projected/708ddef5-479e-44ef-a189-c41123a73bbe-kube-api-access-g7z6s\") pod \"oauth-openshift-558db77b4-gmwq7\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7"
Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.862760 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-trtbx"
Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.876083 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9df8d6f7-ac70-440a-94a9-e4ee69c104a2-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kj6fk\" (UID: \"9df8d6f7-ac70-440a-94a9-e4ee69c104a2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kj6fk"
Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.897024 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xk7s\" (UniqueName: \"kubernetes.io/projected/30317a78-afdc-4c04-95b6-d2c8fedfb790-kube-api-access-7xk7s\") pod \"marketplace-operator-79b997595-zzx6v\" (UID: \"30317a78-afdc-4c04-95b6-d2c8fedfb790\") " pod="openshift-marketplace/marketplace-operator-79b997595-zzx6v"
Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.917322 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvllp\" (UniqueName: \"kubernetes.io/projected/535085bd-3682-4133-a5e6-e5f1149e7d24-kube-api-access-mvllp\") pod \"olm-operator-6b444d44fb-wcwwm\" (UID: \"535085bd-3682-4133-a5e6-e5f1149e7d24\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wcwwm"
Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.926368 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j6dqt"
Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.960336 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccxnd\" (UniqueName: \"kubernetes.io/projected/1de7ea7e-b219-47b4-9ba9-ef3688eda036-kube-api-access-ccxnd\") pod \"csi-hostpathplugin-vrdwc\" (UID: \"1de7ea7e-b219-47b4-9ba9-ef3688eda036\") " pod="hostpath-provisioner/csi-hostpathplugin-vrdwc"
Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.961783 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kj6fk"
Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.962468 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 09:08:22 crc kubenswrapper[4811]: E0122 09:08:22.962960 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:08:23.462944853 +0000 UTC m=+147.785131976 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.973959 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh2t4\" (UniqueName: \"kubernetes.io/projected/c173016d-7244-468d-a84f-2ef0e2bf0258-kube-api-access-wh2t4\") pod \"service-ca-operator-777779d784-5w7fh\" (UID: \"c173016d-7244-468d-a84f-2ef0e2bf0258\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5w7fh"
Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.990029 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zzx6v"
Jan 22 09:08:22 crc kubenswrapper[4811]: I0122 09:08:22.996887 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wcwwm"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.005241 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c674e914-755f-48a4-97ca-03e1a69a021a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-j9b4b\" (UID: \"c674e914-755f-48a4-97ca-03e1a69a021a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j9b4b"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.018110 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghb2p\" (UniqueName: \"kubernetes.io/projected/f382688c-fb9f-4169-b4eb-3466e08dbd7c-kube-api-access-ghb2p\") pod \"machine-config-operator-74547568cd-fcb4z\" (UID: \"f382688c-fb9f-4169-b4eb-3466e08dbd7c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fcb4z"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.022508 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5w7fh"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.026648 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrtm4\" (UniqueName: \"kubernetes.io/projected/88b52253-3840-42b0-aa3c-d8708274dcfa-kube-api-access-xrtm4\") pod \"dns-default-v2nnf\" (UID: \"88b52253-3840-42b0-aa3c-d8708274dcfa\") " pod="openshift-dns/dns-default-v2nnf"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.042853 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-v2nnf"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.052920 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4b5g\" (UniqueName: \"kubernetes.io/projected/42f53a84-4118-4088-9cb5-39a05774839a-kube-api-access-t4b5g\") pod \"migrator-59844c95c7-zwkzn\" (UID: \"42f53a84-4118-4088-9cb5-39a05774839a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zwkzn"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.061169 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-vrdwc"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.061826 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldr5v\" (UniqueName: \"kubernetes.io/projected/8b7094aa-cc4a-49eb-be77-715a4efbc1d0-kube-api-access-ldr5v\") pod \"control-plane-machine-set-operator-78cbb6b69f-9pxnj\" (UID: \"8b7094aa-cc4a-49eb-be77-715a4efbc1d0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9pxnj"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.063827 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv"
Jan 22 09:08:23 crc kubenswrapper[4811]: E0122 09:08:23.064105 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:08:23.564092962 +0000 UTC m=+147.886280085 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nczwv" (UID: "73533561-14fb-4481-872e-1b47096f9d30") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.075053 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-jjjjp"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.087851 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.098762 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk26t\" (UniqueName: \"kubernetes.io/projected/0803a721-b862-4696-a752-e5af589ced0b-kube-api-access-kk26t\") pod \"service-ca-9c57cc56f-6jdmv\" (UID: \"0803a721-b862-4696-a752-e5af589ced0b\") " pod="openshift-service-ca/service-ca-9c57cc56f-6jdmv"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.098829 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkbdj\" (UniqueName: \"kubernetes.io/projected/30038e4b-2056-44a4-b3f0-1ec1433a7b4d-kube-api-access-dkbdj\") pod \"machine-config-server-8fxth\" (UID: \"30038e4b-2056-44a4-b3f0-1ec1433a7b4d\") " pod="openshift-machine-config-operator/machine-config-server-8fxth"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.124689 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9f194f7f-c770-471d-8bcd-c2fd613e0b46-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-wvhkk\" (UID: \"9f194f7f-c770-471d-8bcd-c2fd613e0b46\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wvhkk"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.129859 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-hs4pm"]
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.166559 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mths7\" (UniqueName: \"kubernetes.io/projected/2716a2dc-25e2-4a62-8264-41d299b3cd55-kube-api-access-mths7\") pod \"router-default-5444994796-l8phl\" (UID: \"2716a2dc-25e2-4a62-8264-41d299b3cd55\") " pod="openshift-ingress/router-default-5444994796-l8phl"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.168980 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 09:08:23 crc kubenswrapper[4811]: E0122 09:08:23.169365 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:08:23.669347205 +0000 UTC m=+147.991534328 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.170184 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv"
Jan 22 09:08:23 crc kubenswrapper[4811]: E0122 09:08:23.170674 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:08:23.670660852 +0000 UTC m=+147.992847966 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nczwv" (UID: "73533561-14fb-4481-872e-1b47096f9d30") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.190375 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cngll\" (UniqueName: \"kubernetes.io/projected/80663d92-2281-4a3d-9232-f1fc19873d88-kube-api-access-cngll\") pod \"collect-profiles-29484540-kbb7d\" (UID: \"80663d92-2281-4a3d-9232-f1fc19873d88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484540-kbb7d"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.209736 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l99th\" (UniqueName: \"kubernetes.io/projected/c47199c0-d608-4ece-913c-b164a4d16f21-kube-api-access-l99th\") pod \"catalog-operator-68c6474976-qnxwq\" (UID: \"c47199c0-d608-4ece-913c-b164a4d16f21\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qnxwq"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.227105 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvvdc\" (UniqueName: \"kubernetes.io/projected/c488ec3a-dfdf-46dd-8f3f-1346232394d3-kube-api-access-qvvdc\") pod \"packageserver-d55dfcdfc-lzdbl\" (UID: \"c488ec3a-dfdf-46dd-8f3f-1346232394d3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lzdbl"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.240385 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wvhkk"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.248668 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j9b4b"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.252994 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-l8phl"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.257018 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lklt\" (UniqueName: \"kubernetes.io/projected/cb22b2ae-6c13-482b-b827-5200e2be87ca-kube-api-access-2lklt\") pod \"multus-admission-controller-857f4d67dd-78zjg\" (UID: \"cb22b2ae-6c13-482b-b827-5200e2be87ca\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-78zjg"
Jan 22 09:08:23 crc kubenswrapper[4811]: W0122 09:08:23.261373 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod117d7039_2cd9_4ee9_9272_923cd05c3565.slice/crio-06641b09d2e12b604c558771069f964b38b827d3f8879605546a56e651c24e88 WatchSource:0}: Error finding container 06641b09d2e12b604c558771069f964b38b827d3f8879605546a56e651c24e88: Status 404 returned error can't find the container with id 06641b09d2e12b604c558771069f964b38b827d3f8879605546a56e651c24e88
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.262099 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-78zjg"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.267568 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qnxwq"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.271174 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 09:08:23 crc kubenswrapper[4811]: E0122 09:08:23.271565 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:08:23.771552807 +0000 UTC m=+148.093739930 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.272788 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9pxnj"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.279898 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zwkzn"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.284103 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fcb4z"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.285916 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9zm7\" (UniqueName: \"kubernetes.io/projected/c1be96c2-9c8f-4c6d-8dff-02e0898a963b-kube-api-access-b9zm7\") pod \"package-server-manager-789f6589d5-5w8tw\" (UID: \"c1be96c2-9c8f-4c6d-8dff-02e0898a963b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5w8tw"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.304030 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lzdbl"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.306279 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhx9d\" (UniqueName: \"kubernetes.io/projected/e7343a26-0e21-4d61-ad1c-c3c5479a89e1-kube-api-access-nhx9d\") pod \"ingress-canary-9624f\" (UID: \"e7343a26-0e21-4d61-ad1c-c3c5479a89e1\") " pod="openshift-ingress-canary/ingress-canary-9624f"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.306902 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzvlb\" (UniqueName: \"kubernetes.io/projected/0d9fa190-3073-4b2b-a348-f9a8e1f994b2-kube-api-access-rzvlb\") pod \"machine-config-controller-84d6567774-2jrk9\" (UID: \"0d9fa190-3073-4b2b-a348-f9a8e1f994b2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2jrk9"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.307038 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-6jdmv"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.312704 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5w8tw"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.325769 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484540-kbb7d"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.334869 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-9624f"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.338674 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-8fxth"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.372396 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv"
Jan 22 09:08:23 crc kubenswrapper[4811]: E0122 09:08:23.372699 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:08:23.872688613 +0000 UTC m=+148.194875736 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nczwv" (UID: "73533561-14fb-4481-872e-1b47096f9d30") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.477497 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.477691 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0d71c966-5dcc-4f11-b21e-8c60ba5b7b57-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6l9qr\" (UID: \"0d71c966-5dcc-4f11-b21e-8c60ba5b7b57\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6l9qr"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.477734 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f58eb1d8-bb02-4af7-857c-138518c5bbf2-service-ca\") pod \"console-f9d7485db-jhptg\" (UID: \"f58eb1d8-bb02-4af7-857c-138518c5bbf2\") " pod="openshift-console/console-f9d7485db-jhptg"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.477766 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8a9d91fa-d887-4128-af43-cfe3cad79784-images\") pod \"machine-api-operator-5694c8668f-rx42r\" (UID: \"8a9d91fa-d887-4128-af43-cfe3cad79784\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rx42r"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.477828 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f58eb1d8-bb02-4af7-857c-138518c5bbf2-console-oauth-config\") pod \"console-f9d7485db-jhptg\" (UID: \"f58eb1d8-bb02-4af7-857c-138518c5bbf2\") " pod="openshift-console/console-f9d7485db-jhptg"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.477849 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8a9d91fa-d887-4128-af43-cfe3cad79784-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-rx42r\" (UID: \"8a9d91fa-d887-4128-af43-cfe3cad79784\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rx42r"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.480589 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8a9d91fa-d887-4128-af43-cfe3cad79784-images\") pod \"machine-api-operator-5694c8668f-rx42r\" (UID: \"8a9d91fa-d887-4128-af43-cfe3cad79784\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rx42r"
Jan 22 09:08:23 crc kubenswrapper[4811]: E0122 09:08:23.480995 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:08:23.980976877 +0000 UTC m=+148.303164000 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.482113 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f58eb1d8-bb02-4af7-857c-138518c5bbf2-service-ca\") pod \"console-f9d7485db-jhptg\" (UID: \"f58eb1d8-bb02-4af7-857c-138518c5bbf2\") " pod="openshift-console/console-f9d7485db-jhptg"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.487978 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f58eb1d8-bb02-4af7-857c-138518c5bbf2-console-oauth-config\") pod \"console-f9d7485db-jhptg\" (UID: \"f58eb1d8-bb02-4af7-857c-138518c5bbf2\") " pod="openshift-console/console-f9d7485db-jhptg"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.488477 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8a9d91fa-d887-4128-af43-cfe3cad79784-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-rx42r\" (UID: \"8a9d91fa-d887-4128-af43-cfe3cad79784\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rx42r"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.490426 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0d71c966-5dcc-4f11-b21e-8c60ba5b7b57-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6l9qr\" (UID: \"0d71c966-5dcc-4f11-b21e-8c60ba5b7b57\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6l9qr"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.543838 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2jrk9"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.578657 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv"
Jan 22 09:08:23 crc kubenswrapper[4811]: E0122 09:08:23.578923 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:08:24.078910766 +0000 UTC m=+148.401097888 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nczwv" (UID: "73533561-14fb-4481-872e-1b47096f9d30") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.623449 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-rx42r"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.626279 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-2mphl" event={"ID":"db487354-8574-45b6-b639-4d4afc8a7698","Type":"ContainerStarted","Data":"d84a912e023671a68851f01aebc4b58adaefa0e10c7dd705aaad57f9b02a97a7"}
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.626312 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-2mphl" event={"ID":"db487354-8574-45b6-b639-4d4afc8a7698","Type":"ContainerStarted","Data":"ce215ec8a4aacc97e05043ca5ced1b6cad78ec50824c72db7a4502d90c966323"}
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.626652 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-2mphl"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.631500 4811 patch_prober.go:28] interesting pod/console-operator-58897d9998-2mphl container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body=
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.631535 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-2mphl" podUID="db487354-8574-45b6-b639-4d4afc8a7698" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.636366 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6l9qr"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.641718 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-jhptg"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.643034 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-trtbx" event={"ID":"78341d4a-0228-4056-ae13-9619ea5c4c35","Type":"ContainerStarted","Data":"3f759700c8061dccac7458d02ed74607b3b11e6eb582ceac0ce3929680ea40ba"}
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.643092 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-trtbx" event={"ID":"78341d4a-0228-4056-ae13-9619ea5c4c35","Type":"ContainerStarted","Data":"e034cac5aae6e741d96151172211f896495875feda61acc429229456d7bc758a"}
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.658744 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hs4pm" event={"ID":"117d7039-2cd9-4ee9-9272-923cd05c3565","Type":"ContainerStarted","Data":"06641b09d2e12b604c558771069f964b38b827d3f8879605546a56e651c24e88"}
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.665317 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"e322c0ae84816d7393bab1632762ce1009d089ac525808f8d4197f8508c26581"}
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.665351 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"f704868bfb8c632df9e7a7c5d76d7184b978211c90ce1d1d519e77e473830bab"}
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.665771 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.670887 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-md7dt" event={"ID":"ab671388-f736-4a76-a421-ba3413830807","Type":"ContainerStarted","Data":"fcc82aa3a50badf9da7dd469f04eaddd12ea2f03be352d857b66d529300794c5"}
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.679787 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-knkwh" event={"ID":"80529f80-c8f8-4bc5-83a6-eb19a23401f0","Type":"ContainerStarted","Data":"a1357ebc92571ad491e706138aad95702eb11520d2ba0c5f1fd8563f7a71015b"}
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.679830 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-knkwh" event={"ID":"80529f80-c8f8-4bc5-83a6-eb19a23401f0","Type":"ContainerStarted","Data":"3a36b31a066973d1f920eebafc567837fe7511979dfc76859c96787bc257dbda"}
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.681181 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.681338 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"946ee49cdcbcff6404454123fb98e3bbd2e67406487b925ba2afa1368cbbefed"}
Jan 22 09:08:23 crc kubenswrapper[4811]: E0122 09:08:23.681497 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:08:24.181477008 +0000 UTC m=+148.503664131 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.681801 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv"
Jan 22 09:08:23 crc kubenswrapper[4811]: E0122 09:08:23.683229 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:08:24.183217761 +0000 UTC m=+148.505404884 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nczwv" (UID: "73533561-14fb-4481-872e-1b47096f9d30") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.694455 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2hm84" event={"ID":"c9b6feb2-2c7a-41ea-b160-f461f05057d4","Type":"ContainerStarted","Data":"9e5865c2a10e4e05df6428ee9b6e2ea654b0aed1332269a07f7efa0b42f763da"}
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.702308 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nljqw" event={"ID":"05a61535-90ca-4127-991a-0b3f1c110f5a","Type":"ContainerStarted","Data":"a8b385a14b86074fc0b1e145883ea9b30749b83d76ab73113e274a26cab87ebe"}
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.702342 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nljqw" event={"ID":"05a61535-90ca-4127-991a-0b3f1c110f5a","Type":"ContainerStarted","Data":"847ea9da50d62e708a8939306dec72e8873fef83d92d5e96c041ecf8d511835f"}
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.704510 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"8eac4046be705ea5bd8d7bc6f0412fec1c5334f93812a0de6f022019e1612019"}
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.704535 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"e80c65f09ca8a17f76ead4d2ffa67e9449e14e5c53c5546f2aff9b717a3538dc"}
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.721401 4811 generic.go:334] "Generic (PLEG): container finished" podID="c910b62e-8a7e-4a3d-b8b0-f90384ec999f" containerID="d2954ad57bf63e0218b450855db72b04031a7a0191db2d2a2c860e21b67dabac" exitCode=0
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.721451 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-776jv" event={"ID":"c910b62e-8a7e-4a3d-b8b0-f90384ec999f","Type":"ContainerDied","Data":"d2954ad57bf63e0218b450855db72b04031a7a0191db2d2a2c860e21b67dabac"}
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.721469 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-776jv" event={"ID":"c910b62e-8a7e-4a3d-b8b0-f90384ec999f","Type":"ContainerStarted","Data":"12567804f015e05d46a197d3b29baa4f573be1629540b307147fef89b845825b"}
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.752564 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bssrl" event={"ID":"13db87c4-7297-4265-879d-07ad09539aba","Type":"ContainerStarted","Data":"86a4eff40b82577900ae7291d7c3f63f373ff43e3de78443afe25280cf55be30"}
Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.773051
4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-vx8k5" Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.786927 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:08:23 crc kubenswrapper[4811]: E0122 09:08:23.787606 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:08:24.287593487 +0000 UTC m=+148.609780610 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.797331 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8fflv" Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.889059 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:23 crc kubenswrapper[4811]: E0122 09:08:23.896307 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:08:24.396239535 +0000 UTC m=+148.718426659 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nczwv" (UID: "73533561-14fb-4481-872e-1b47096f9d30") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.930882 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j6dqt"] Jan 22 09:08:23 crc kubenswrapper[4811]: W0122 09:08:23.949009 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30038e4b_2056_44a4_b3f0_1ec1433a7b4d.slice/crio-b649ae0e07d8d6944c9471665117ad67024f8536c85be8919a45dd4b8b058a50 WatchSource:0}: Error finding container b649ae0e07d8d6944c9471665117ad67024f8536c85be8919a45dd4b8b058a50: Status 404 returned error can't find the container with id b649ae0e07d8d6944c9471665117ad67024f8536c85be8919a45dd4b8b058a50 Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.994252 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:08:23 crc kubenswrapper[4811]: E0122 09:08:23.994489 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:08:24.494456266 +0000 UTC m=+148.816643389 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:23 crc kubenswrapper[4811]: I0122 09:08:23.994675 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:23 crc kubenswrapper[4811]: E0122 09:08:23.995006 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:08:24.494995554 +0000 UTC m=+148.817182676 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nczwv" (UID: "73533561-14fb-4481-872e-1b47096f9d30") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:24 crc kubenswrapper[4811]: I0122 09:08:24.097893 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:08:24 crc kubenswrapper[4811]: E0122 09:08:24.098646 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:08:24.598617948 +0000 UTC m=+148.920805071 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:24 crc kubenswrapper[4811]: I0122 09:08:24.203674 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:24 crc kubenswrapper[4811]: E0122 09:08:24.203993 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:08:24.703979112 +0000 UTC m=+149.026166235 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nczwv" (UID: "73533561-14fb-4481-872e-1b47096f9d30") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:24 crc kubenswrapper[4811]: I0122 09:08:24.263326 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-md7dt" podStartSLOduration=127.263308618 podStartE2EDuration="2m7.263308618s" podCreationTimestamp="2026-01-22 09:06:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:08:24.24598601 +0000 UTC m=+148.568173133" watchObservedRunningTime="2026-01-22 09:08:24.263308618 +0000 UTC m=+148.585495741" Jan 22 09:08:24 crc kubenswrapper[4811]: I0122 09:08:24.307256 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:08:24 crc kubenswrapper[4811]: E0122 09:08:24.307606 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:08:24.807590997 +0000 UTC m=+149.129778120 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:24 crc kubenswrapper[4811]: I0122 09:08:24.409221 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:24 crc kubenswrapper[4811]: E0122 09:08:24.410718 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:08:24.910618901 +0000 UTC m=+149.232806024 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nczwv" (UID: "73533561-14fb-4481-872e-1b47096f9d30") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:24 crc kubenswrapper[4811]: I0122 09:08:24.519128 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:08:24 crc kubenswrapper[4811]: E0122 09:08:24.519453 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:08:25.019433969 +0000 UTC m=+149.341621092 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:24 crc kubenswrapper[4811]: I0122 09:08:24.566969 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-sxpvs" podStartSLOduration=126.566949445 podStartE2EDuration="2m6.566949445s" podCreationTimestamp="2026-01-22 09:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:08:24.561283179 +0000 UTC m=+148.883470293" watchObservedRunningTime="2026-01-22 09:08:24.566949445 +0000 UTC m=+148.889136567" Jan 22 09:08:24 crc kubenswrapper[4811]: I0122 09:08:24.567317 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wcwwm"] Jan 22 09:08:24 crc kubenswrapper[4811]: I0122 09:08:24.573994 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kj6fk"] Jan 22 09:08:24 crc kubenswrapper[4811]: I0122 09:08:24.620549 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:24 crc kubenswrapper[4811]: E0122 09:08:24.620881 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:08:25.120868627 +0000 UTC m=+149.443055751 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nczwv" (UID: "73533561-14fb-4481-872e-1b47096f9d30") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:24 crc kubenswrapper[4811]: I0122 09:08:24.637184 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bssrl" Jan 22 09:08:24 crc kubenswrapper[4811]: I0122 09:08:24.722548 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:08:24 crc kubenswrapper[4811]: E0122 09:08:24.723276 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:08:25.223257828 +0000 UTC m=+149.545444951 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:24 crc kubenswrapper[4811]: I0122 09:08:24.770084 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-8fxth" event={"ID":"30038e4b-2056-44a4-b3f0-1ec1433a7b4d","Type":"ContainerStarted","Data":"b649ae0e07d8d6944c9471665117ad67024f8536c85be8919a45dd4b8b058a50"} Jan 22 09:08:24 crc kubenswrapper[4811]: I0122 09:08:24.810922 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j6dqt" event={"ID":"5dd00b53-587c-4992-87da-f3b59054b7ff","Type":"ContainerStarted","Data":"db13d824333f5a1b96550383c24d38be189021acdea01d7fe566bfe97bd7f0d8"} Jan 22 09:08:24 crc kubenswrapper[4811]: I0122 09:08:24.819654 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wcwwm" event={"ID":"535085bd-3682-4133-a5e6-e5f1149e7d24","Type":"ContainerStarted","Data":"a5b963a2bf34279ac419d92c0ed7dd2cad8a0ea727632976e242fd7a0e12e86f"} Jan 22 09:08:24 crc kubenswrapper[4811]: I0122 09:08:24.826281 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:24 crc kubenswrapper[4811]: E0122 09:08:24.826651 4811 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:08:25.326638886 +0000 UTC m=+149.648826000 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nczwv" (UID: "73533561-14fb-4481-872e-1b47096f9d30") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:24 crc kubenswrapper[4811]: I0122 09:08:24.837921 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-l8phl" event={"ID":"2716a2dc-25e2-4a62-8264-41d299b3cd55","Type":"ContainerStarted","Data":"09d0e2c7c13139edda29b5381eef982f2b51fdfc719b2019a183ad1fd4c413bb"} Jan 22 09:08:24 crc kubenswrapper[4811]: I0122 09:08:24.837960 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-l8phl" event={"ID":"2716a2dc-25e2-4a62-8264-41d299b3cd55","Type":"ContainerStarted","Data":"6a2759e4a693f6bef7ad569280da021a395f665e523cd9fd62da18da21fb4c2d"} Jan 22 09:08:24 crc kubenswrapper[4811]: I0122 09:08:24.846244 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nljqw" podStartSLOduration=126.846230404 podStartE2EDuration="2m6.846230404s" podCreationTimestamp="2026-01-22 09:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:08:24.844793904 +0000 UTC m=+149.166981028" watchObservedRunningTime="2026-01-22 09:08:24.846230404 +0000 UTC m=+149.168417516" Jan 22 09:08:24 crc kubenswrapper[4811]: I0122 09:08:24.853763 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kj6fk" event={"ID":"9df8d6f7-ac70-440a-94a9-e4ee69c104a2","Type":"ContainerStarted","Data":"a0e790a721f5f1ab5a460caf4e93a8a6186a92ede555b18e08d27c7aec235661"} Jan 22 09:08:24 crc kubenswrapper[4811]: I0122 09:08:24.871282 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hs4pm" event={"ID":"117d7039-2cd9-4ee9-9272-923cd05c3565","Type":"ContainerStarted","Data":"0b465e37f00dd296eea7effa90ef9bb9d4de9e2547a84b3b43e6530cdcf90f5e"} Jan 22 09:08:24 crc kubenswrapper[4811]: I0122 09:08:24.917488 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-5w7fh"] Jan 22 09:08:24 crc kubenswrapper[4811]: I0122 09:08:24.928913 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:08:24 crc kubenswrapper[4811]: E0122 09:08:24.930384 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-22 09:08:25.430367892 +0000 UTC m=+149.752555016 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:25 crc kubenswrapper[4811]: I0122 09:08:25.035717 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:25 crc kubenswrapper[4811]: E0122 09:08:25.036313 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:08:25.536298701 +0000 UTC m=+149.858485824 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nczwv" (UID: "73533561-14fb-4481-872e-1b47096f9d30") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:25 crc kubenswrapper[4811]: I0122 09:08:25.123208 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-vrdwc"] Jan 22 09:08:25 crc kubenswrapper[4811]: I0122 09:08:25.139642 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:08:25 crc kubenswrapper[4811]: E0122 09:08:25.140192 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:08:25.640173873 +0000 UTC m=+149.962360996 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:25 crc kubenswrapper[4811]: I0122 09:08:25.165529 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-2mphl" Jan 22 09:08:25 crc kubenswrapper[4811]: I0122 09:08:25.168938 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zzx6v"] Jan 22 09:08:25 crc kubenswrapper[4811]: I0122 09:08:25.174799 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wvhkk"] Jan 22 09:08:25 crc kubenswrapper[4811]: I0122 09:08:25.199562 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5w8tw"] Jan 22 09:08:25 crc kubenswrapper[4811]: I0122 09:08:25.245773 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:25 crc kubenswrapper[4811]: E0122 09:08:25.246690 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:08:25.746670508 +0000 UTC m=+150.068857631 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nczwv" (UID: "73533561-14fb-4481-872e-1b47096f9d30") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:25 crc kubenswrapper[4811]: I0122 09:08:25.251445 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-2mphl" podStartSLOduration=128.251427038 podStartE2EDuration="2m8.251427038s" podCreationTimestamp="2026-01-22 09:06:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:08:25.247059401 +0000 UTC m=+149.569246525" watchObservedRunningTime="2026-01-22 09:08:25.251427038 +0000 UTC m=+149.573614161" Jan 22 09:08:25 crc kubenswrapper[4811]: I0122 09:08:25.252222 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8fflv" podStartSLOduration=127.252213691 podStartE2EDuration="2m7.252213691s" podCreationTimestamp="2026-01-22 09:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:08:25.223125247 +0000 UTC m=+149.545312369" watchObservedRunningTime="2026-01-22 09:08:25.252213691 +0000 UTC m=+149.574400814" Jan 22 09:08:25 crc kubenswrapper[4811]: I0122 09:08:25.258710 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-l8phl" Jan 22 09:08:25 crc kubenswrapper[4811]: I0122 09:08:25.307432 4811 patch_prober.go:28] interesting pod/router-default-5444994796-l8phl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 09:08:25 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Jan 22 09:08:25 crc kubenswrapper[4811]: [+]process-running ok Jan 22 09:08:25 crc kubenswrapper[4811]: healthz check failed Jan 22 09:08:25 crc kubenswrapper[4811]: I0122 09:08:25.307482 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l8phl" podUID="2716a2dc-25e2-4a62-8264-41d299b3cd55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 09:08:25 crc kubenswrapper[4811]: W0122 09:08:25.325822 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30317a78_afdc_4c04_95b6_d2c8fedfb790.slice/crio-6d2f32e2ee5ab2e046fbb742d164b70fcaca0fb853f7f3f4ad8c780a145a242f WatchSource:0}: Error finding container 6d2f32e2ee5ab2e046fbb742d164b70fcaca0fb853f7f3f4ad8c780a145a242f: Status 404 returned error can't find the container with id 6d2f32e2ee5ab2e046fbb742d164b70fcaca0fb853f7f3f4ad8c780a145a242f Jan 22 09:08:25 crc kubenswrapper[4811]: I0122 09:08:25.341294 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gmwq7"] Jan 22 09:08:25 crc kubenswrapper[4811]: I0122 09:08:25.350204 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:08:25 crc kubenswrapper[4811]: E0122 09:08:25.350595 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:08:25.850581407 +0000 UTC m=+150.172768530 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:25 crc kubenswrapper[4811]: I0122 09:08:25.380696 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j9b4b"] Jan 22 09:08:25 crc kubenswrapper[4811]: I0122 09:08:25.423551 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-2hm84" podStartSLOduration=127.423533533 podStartE2EDuration="2m7.423533533s" podCreationTimestamp="2026-01-22 09:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:08:25.41179696 +0000 UTC m=+149.733984083" watchObservedRunningTime="2026-01-22 09:08:25.423533533 +0000 UTC m=+149.745720656" Jan 22 09:08:25 crc kubenswrapper[4811]: I0122 09:08:25.423876 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-jhptg"] Jan 22 09:08:25 crc kubenswrapper[4811]: I0122 09:08:25.428115 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-78zjg"] Jan 22 09:08:25 crc kubenswrapper[4811]: I0122 09:08:25.455375 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:25 crc kubenswrapper[4811]: E0122 09:08:25.455698 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:08:25.955686158 +0000 UTC m=+150.277873281 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nczwv" (UID: "73533561-14fb-4481-872e-1b47096f9d30") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:25 crc kubenswrapper[4811]: I0122 09:08:25.513397 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-knkwh" podStartSLOduration=128.513376442 podStartE2EDuration="2m8.513376442s" podCreationTimestamp="2026-01-22 09:06:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:08:25.503833917 +0000 UTC m=+149.826021040" watchObservedRunningTime="2026-01-22 09:08:25.513376442 +0000 UTC m=+149.835563566" Jan 22 09:08:25 crc kubenswrapper[4811]: I0122 09:08:25.562043 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:08:25 crc kubenswrapper[4811]: E0122 09:08:25.562402 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:08:26.062387219 +0000 UTC m=+150.384574342 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:25 crc kubenswrapper[4811]: I0122 09:08:25.572976 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-vx8k5" podStartSLOduration=127.572957132 podStartE2EDuration="2m7.572957132s" podCreationTimestamp="2026-01-22 09:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:08:25.572927516 +0000 UTC m=+149.895114639" watchObservedRunningTime="2026-01-22 09:08:25.572957132 +0000 UTC m=+149.895144254" Jan 22 09:08:25 crc kubenswrapper[4811]: W0122 09:08:25.601657 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf58eb1d8_bb02_4af7_857c_138518c5bbf2.slice/crio-684026d62f766b4dd501868118b9ea743242efd4c2e4420dfddd6a9fcc3c0959 WatchSource:0}: Error finding container 684026d62f766b4dd501868118b9ea743242efd4c2e4420dfddd6a9fcc3c0959: Status 404 returned error can't find the container with id 684026d62f766b4dd501868118b9ea743242efd4c2e4420dfddd6a9fcc3c0959 Jan 22 09:08:25 crc kubenswrapper[4811]: I0122 09:08:25.622732 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4ftvq" podStartSLOduration=127.622713835 podStartE2EDuration="2m7.622713835s" podCreationTimestamp="2026-01-22 09:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:08:25.620484802 +0000 UTC m=+149.942671924" watchObservedRunningTime="2026-01-22 09:08:25.622713835 +0000 UTC m=+149.944900948" Jan 22 09:08:25 crc kubenswrapper[4811]: I0122 09:08:25.624128 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-9624f"] Jan 22 09:08:25 crc kubenswrapper[4811]: W0122 09:08:25.632642 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb22b2ae_6c13_482b_b827_5200e2be87ca.slice/crio-67cc5480a398ba328c5d6fb4e4a1a15ec7dab68b99cb2dac32fc89ebf775fefc WatchSource:0}: Error finding container 67cc5480a398ba328c5d6fb4e4a1a15ec7dab68b99cb2dac32fc89ebf775fefc: Status 404 returned error can't find the container with id 67cc5480a398ba328c5d6fb4e4a1a15ec7dab68b99cb2dac32fc89ebf775fefc Jan 22 09:08:25 crc kubenswrapper[4811]: I0122 09:08:25.664177 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:25 crc kubenswrapper[4811]: E0122 09:08:25.664461 4811 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:08:26.16444886 +0000 UTC m=+150.486635984 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nczwv" (UID: "73533561-14fb-4481-872e-1b47096f9d30") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:25 crc kubenswrapper[4811]: I0122 09:08:25.766567 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:08:25 crc kubenswrapper[4811]: E0122 09:08:25.768125 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:08:26.26810553 +0000 UTC m=+150.590292653 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:25 crc kubenswrapper[4811]: I0122 09:08:25.768619 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:25 crc kubenswrapper[4811]: E0122 09:08:25.769318 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:08:26.269300254 +0000 UTC m=+150.591487366 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nczwv" (UID: "73533561-14fb-4481-872e-1b47096f9d30") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:25 crc kubenswrapper[4811]: I0122 09:08:25.832083 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lzdbl"] Jan 22 09:08:25 crc kubenswrapper[4811]: I0122 09:08:25.870996 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:08:25 crc kubenswrapper[4811]: E0122 09:08:25.871349 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:08:26.371330445 +0000 UTC m=+150.693517568 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:25 crc kubenswrapper[4811]: I0122 09:08:25.902536 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-9624f" event={"ID":"e7343a26-0e21-4d61-ad1c-c3c5479a89e1","Type":"ContainerStarted","Data":"b440e65eb08a72aaf20c488d249132c6ecec935922a293039ca131df09fbfa70"} Jan 22 09:08:25 crc kubenswrapper[4811]: I0122 09:08:25.932556 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-776jv" event={"ID":"c910b62e-8a7e-4a3d-b8b0-f90384ec999f","Type":"ContainerStarted","Data":"a6204ae7a91ce7cacaec45c30690857805ce553ba0a65819fa9002323fa8350a"} Jan 22 09:08:25 crc kubenswrapper[4811]: I0122 09:08:25.950712 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4wjd4" podStartSLOduration=127.950698079 podStartE2EDuration="2m7.950698079s" podCreationTimestamp="2026-01-22 09:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:08:25.870305563 +0000 UTC m=+150.192492686" watchObservedRunningTime="2026-01-22 09:08:25.950698079 +0000 UTC m=+150.272885202" Jan 22 09:08:25 crc kubenswrapper[4811]: I0122 09:08:25.952706 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-fcb4z"] Jan 22 09:08:25 crc kubenswrapper[4811]: I0122 09:08:25.961716 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-bssrl" podStartSLOduration=128.961707982 podStartE2EDuration="2m8.961707982s" podCreationTimestamp="2026-01-22 09:06:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:08:25.952209009 +0000 UTC m=+150.274396132" watchObservedRunningTime="2026-01-22 09:08:25.961707982 +0000 UTC m=+150.283895106" Jan 22 09:08:25 crc kubenswrapper[4811]: I0122 09:08:25.964217 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-v2nnf"] Jan 22 09:08:25 crc kubenswrapper[4811]: I0122 09:08:25.971772 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5w8tw" event={"ID":"c1be96c2-9c8f-4c6d-8dff-02e0898a963b","Type":"ContainerStarted","Data":"aa2b1ce39d578eb2fd6399675afe2a0a6495eddc5bf30943fa8a3e64a8b6c646"} Jan 22 09:08:25 crc kubenswrapper[4811]: I0122 09:08:25.971804 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5w8tw" event={"ID":"c1be96c2-9c8f-4c6d-8dff-02e0898a963b","Type":"ContainerStarted","Data":"bee17ba969b10a856cbb43c88447d6fbec650a5c47632b7860e11771248b9274"} Jan 22 09:08:25 crc kubenswrapper[4811]: I0122 09:08:25.979238 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:25 crc kubenswrapper[4811]: E0122 09:08:25.979647 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:08:26.479612949 +0000 UTC m=+150.801800072 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nczwv" (UID: "73533561-14fb-4481-872e-1b47096f9d30") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:25 crc kubenswrapper[4811]: I0122 09:08:25.985318 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kj6fk" event={"ID":"9df8d6f7-ac70-440a-94a9-e4ee69c104a2","Type":"ContainerStarted","Data":"4d7b30f9f6721254ac1c1ff6a86815c26418d8130dd90ae2ccf7724b566bb897"} Jan 22 09:08:25 crc kubenswrapper[4811]: I0122 09:08:25.990771 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wcwwm" event={"ID":"535085bd-3682-4133-a5e6-e5f1149e7d24","Type":"ContainerStarted","Data":"995cc4a4aec448424a1031ea60ebc72ab1f61a6f1b1a6cf24961ffb209f44e82"} Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.000562 4811 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-wcwwm container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.000674 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wcwwm" podUID="535085bd-3682-4133-a5e6-e5f1149e7d24" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Jan 22 09:08:26 crc kubenswrapper[4811]: W0122 09:08:26.049638 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88b52253_3840_42b0_aa3c_d8708274dcfa.slice/crio-cab508c5530383323e695698a48e94a44cef8ed86758873a2a62363c20367f7d WatchSource:0}: Error finding container cab508c5530383323e695698a48e94a44cef8ed86758873a2a62363c20367f7d: Status 404 returned error can't find the container with id cab508c5530383323e695698a48e94a44cef8ed86758873a2a62363c20367f7d Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.080469 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:08:26 crc kubenswrapper[4811]: E0122 09:08:26.081422 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:08:26.58140437 +0000 UTC m=+150.903591494 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.085048 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" event={"ID":"708ddef5-479e-44ef-a189-c41123a73bbe","Type":"ContainerStarted","Data":"91d56d670615843faf27681599a20642ab001d934408a138f460299b37dfa017"}
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.085093 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wcwwm"
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.102067 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-8fxth" event={"ID":"30038e4b-2056-44a4-b3f0-1ec1433a7b4d","Type":"ContainerStarted","Data":"06b588c1b7654efd1030b2a543466b4b1189448684204cb818adc296e0a3b27f"}
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.115209 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-vrdwc" event={"ID":"1de7ea7e-b219-47b4-9ba9-ef3688eda036","Type":"ContainerStarted","Data":"504ffc93c28db10345a56d7e7e12cb860f639349c69bc3b2da8fc456a1bf215e"}
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.149816 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lzdbl" event={"ID":"c488ec3a-dfdf-46dd-8f3f-1346232394d3","Type":"ContainerStarted","Data":"ab42db488d35e23c36ba8623f97eca27b2ad5b07f3381741701d2fc535f13e24"}
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.182972 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv"
Jan 22 09:08:26 crc kubenswrapper[4811]: E0122 09:08:26.183982 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:08:26.683969151 +0000 UTC m=+151.006156274 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nczwv" (UID: "73533561-14fb-4481-872e-1b47096f9d30") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.222961 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-776jv" podStartSLOduration=128.222940234 podStartE2EDuration="2m8.222940234s" podCreationTimestamp="2026-01-22 09:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:08:26.196001293 +0000 UTC m=+150.518188415" watchObservedRunningTime="2026-01-22 09:08:26.222940234 +0000 UTC m=+150.545127347"
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.224023 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9pxnj"]
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.269431 4811 patch_prober.go:28] interesting pod/router-default-5444994796-l8phl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 22 09:08:26 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld
Jan 22 09:08:26 crc kubenswrapper[4811]: [+]process-running ok
Jan 22 09:08:26 crc kubenswrapper[4811]: healthz check failed
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.269488 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l8phl" podUID="2716a2dc-25e2-4a62-8264-41d299b3cd55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.279336 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-trtbx" event={"ID":"78341d4a-0228-4056-ae13-9619ea5c4c35","Type":"ContainerStarted","Data":"860907f65f5768a8c4f4d7572c8c554ab0401efac198ca137edb93b865a4b044"}
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.281695 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-2jrk9"]
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.283009 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qnxwq"]
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.285110 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 09:08:26 crc kubenswrapper[4811]: E0122 09:08:26.291398 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:08:26.791377766 +0000 UTC m=+151.113564889 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.299193 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zzx6v" event={"ID":"30317a78-afdc-4c04-95b6-d2c8fedfb790","Type":"ContainerStarted","Data":"437f918afa8141131585b9e72b8d2b113639d64be20e4eda6db965d16f343f38"}
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.299247 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zzx6v" event={"ID":"30317a78-afdc-4c04-95b6-d2c8fedfb790","Type":"ContainerStarted","Data":"6d2f32e2ee5ab2e046fbb742d164b70fcaca0fb853f7f3f4ad8c780a145a242f"}
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.300199 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-zzx6v"
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.309038 4811 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zzx6v container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body=
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.309094 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zzx6v" podUID="30317a78-afdc-4c04-95b6-d2c8fedfb790" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused"
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.325945 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jjjjp"]
Jan 22 09:08:26 crc kubenswrapper[4811]: W0122 09:08:26.326312 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b7094aa_cc4a_49eb_be77_715a4efbc1d0.slice/crio-4643b7d4056db142d90a31fcd1fba184c525ce90e0636ad233612532cda2d0e4 WatchSource:0}: Error finding container 4643b7d4056db142d90a31fcd1fba184c525ce90e0636ad233612532cda2d0e4: Status 404 returned error can't find the container with id 4643b7d4056db142d90a31fcd1fba184c525ce90e0636ad233612532cda2d0e4
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.335606 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-l8phl" podStartSLOduration=128.335593851 podStartE2EDuration="2m8.335593851s" podCreationTimestamp="2026-01-22 09:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:08:26.33502607 +0000 UTC m=+150.657213183" watchObservedRunningTime="2026-01-22 09:08:26.335593851 +0000 UTC m=+150.657780974"
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.349411 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6l9qr"]
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.356306 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-78zjg" event={"ID":"cb22b2ae-6c13-482b-b827-5200e2be87ca","Type":"ContainerStarted","Data":"67cc5480a398ba328c5d6fb4e4a1a15ec7dab68b99cb2dac32fc89ebf775fefc"}
Jan 22 09:08:26 crc kubenswrapper[4811]: W0122 09:08:26.369714 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d9fa190_3073_4b2b_a348_f9a8e1f994b2.slice/crio-7da5d6f1f861f9c782bedf6281b856dc78a8867280f1d206b65dc3db460a5df3 WatchSource:0}: Error finding container 7da5d6f1f861f9c782bedf6281b856dc78a8867280f1d206b65dc3db460a5df3: Status 404 returned error can't find the container with id 7da5d6f1f861f9c782bedf6281b856dc78a8867280f1d206b65dc3db460a5df3
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.371504 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j9b4b" event={"ID":"c674e914-755f-48a4-97ca-03e1a69a021a","Type":"ContainerStarted","Data":"394396cfaf914e8dd1ed9f36748cf706feced8bc6444e38c59969f76045151fa"}
Jan 22 09:08:26 crc kubenswrapper[4811]: W0122 09:08:26.372591 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod387d7c1a_1589_4377_9566_a45d8f498f38.slice/crio-b8bb8164f09549a5336c09008515b1dfa07304be27ff3c82c9bf567a0b0e8bef WatchSource:0}: Error finding container b8bb8164f09549a5336c09008515b1dfa07304be27ff3c82c9bf567a0b0e8bef: Status 404 returned error can't find the container with id b8bb8164f09549a5336c09008515b1dfa07304be27ff3c82c9bf567a0b0e8bef
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.374891 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-6jdmv"]
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.387768 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv"
Jan 22 09:08:26 crc kubenswrapper[4811]: E0122 09:08:26.389116 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:08:26.889099874 +0000 UTC m=+151.211286997 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nczwv" (UID: "73533561-14fb-4481-872e-1b47096f9d30") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.435216 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-jhptg" event={"ID":"f58eb1d8-bb02-4af7-857c-138518c5bbf2","Type":"ContainerStarted","Data":"684026d62f766b4dd501868118b9ea743242efd4c2e4420dfddd6a9fcc3c0959"}
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.472638 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-hs4pm" podStartSLOduration=128.472596405 podStartE2EDuration="2m8.472596405s" podCreationTimestamp="2026-01-22 09:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:08:26.410809765 +0000 UTC m=+150.732996889" watchObservedRunningTime="2026-01-22 09:08:26.472596405 +0000 UTC m=+150.794783528"
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.489504 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 09:08:26 crc kubenswrapper[4811]: E0122 09:08:26.517844 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:08:27.0178152 +0000 UTC m=+151.340002323 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.518250 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j6dqt" event={"ID":"5dd00b53-587c-4992-87da-f3b59054b7ff","Type":"ContainerStarted","Data":"f63c79cb9f0dc689e2a89443759369ecf8b141ab20b11265df7bcdc02e39f53b"}
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.538203 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kj6fk" podStartSLOduration=128.53818802 podStartE2EDuration="2m8.53818802s" podCreationTimestamp="2026-01-22 09:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:08:26.520871043 +0000 UTC m=+150.843058167" watchObservedRunningTime="2026-01-22 09:08:26.53818802 +0000 UTC m=+150.860375143"
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.571910 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wvhkk" event={"ID":"9f194f7f-c770-471d-8bcd-c2fd613e0b46","Type":"ContainerStarted","Data":"d3a973bfa4ce6a49533267d09c855b5855becff5830bc9ae21014643b900c5fa"}
Jan 22 09:08:26 crc kubenswrapper[4811]: W0122 09:08:26.594994 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0803a721_b862_4696_a752_e5af589ced0b.slice/crio-d5cf68099c11d32ce944c5120b4de22e305a0413b58a8d9583e28c691369916b WatchSource:0}: Error finding container d5cf68099c11d32ce944c5120b4de22e305a0413b58a8d9583e28c691369916b: Status 404 returned error can't find the container with id d5cf68099c11d32ce944c5120b4de22e305a0413b58a8d9583e28c691369916b
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.599153 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484540-kbb7d"]
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.612790 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5w7fh" event={"ID":"c173016d-7244-468d-a84f-2ef0e2bf0258","Type":"ContainerStarted","Data":"cd70e7fe10539e45ba7f07af3ef870269f768af5ad8e4147f7743610b1c4f2d7"}
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.612832 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5w7fh" event={"ID":"c173016d-7244-468d-a84f-2ef0e2bf0258","Type":"ContainerStarted","Data":"5d5ba9a779eb9cadf839bda2403f197d7b6481bfd0949bd91b658ed5632a49c4"}
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.618451 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-hs4pm"
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.623500 4811 patch_prober.go:28] interesting pod/downloads-7954f5f757-hs4pm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body=
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.623553 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hs4pm" podUID="117d7039-2cd9-4ee9-9272-923cd05c3565" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused"
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.625786 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv"
Jan 22 09:08:26 crc kubenswrapper[4811]: E0122 09:08:26.626259 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:08:27.126241435 +0000 UTC m=+151.448428558 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nczwv" (UID: "73533561-14fb-4481-872e-1b47096f9d30") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.628935 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bssrl"
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.646369 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wcwwm" podStartSLOduration=128.646355707 podStartE2EDuration="2m8.646355707s" podCreationTimestamp="2026-01-22 09:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:08:26.645003006 +0000 UTC m=+150.967190129" watchObservedRunningTime="2026-01-22 09:08:26.646355707 +0000 UTC m=+150.968542830"
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.727248 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 09:08:26 crc kubenswrapper[4811]: E0122 09:08:26.727908 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:08:27.227886752 +0000 UTC m=+151.550073875 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.732266 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5w7fh" podStartSLOduration=128.73225031 podStartE2EDuration="2m8.73225031s" podCreationTimestamp="2026-01-22 09:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:08:26.728925901 +0000 UTC m=+151.051113023" watchObservedRunningTime="2026-01-22 09:08:26.73225031 +0000 UTC m=+151.054437433"
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.759020 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-rx42r"]
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.796322 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j6dqt" podStartSLOduration=128.796307101 podStartE2EDuration="2m8.796307101s" podCreationTimestamp="2026-01-22 09:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:08:26.795656484 +0000 UTC m=+151.117843607" watchObservedRunningTime="2026-01-22 09:08:26.796307101 +0000 UTC m=+151.118494225"
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.816690 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-zwkzn"]
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.820376 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-trtbx" podStartSLOduration=129.820361705 podStartE2EDuration="2m9.820361705s" podCreationTimestamp="2026-01-22 09:06:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:08:26.812858164 +0000 UTC m=+151.135045287" watchObservedRunningTime="2026-01-22 09:08:26.820361705 +0000 UTC m=+151.142548817"
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.829544 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv"
Jan 22 09:08:26 crc kubenswrapper[4811]: E0122 09:08:26.833226 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:08:27.333207449 +0000 UTC m=+151.655394572 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nczwv" (UID: "73533561-14fb-4481-872e-1b47096f9d30") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.877984 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wvhkk" podStartSLOduration=128.877968131 podStartE2EDuration="2m8.877968131s" podCreationTimestamp="2026-01-22 09:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:08:26.848084795 +0000 UTC m=+151.170271908" watchObservedRunningTime="2026-01-22 09:08:26.877968131 +0000 UTC m=+151.200155253"
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.924126 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-776jv"
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.924447 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-776jv"
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.924609 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-8fxth" podStartSLOduration=7.924590423 podStartE2EDuration="7.924590423s" podCreationTimestamp="2026-01-22 09:08:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:08:26.879610858 +0000 UTC m=+151.201797982" watchObservedRunningTime="2026-01-22 09:08:26.924590423 +0000 UTC m=+151.246777546"
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.925750 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-jhptg" podStartSLOduration=128.925739169 podStartE2EDuration="2m8.925739169s" podCreationTimestamp="2026-01-22 09:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:08:26.924733111 +0000 UTC m=+151.246920234" watchObservedRunningTime="2026-01-22 09:08:26.925739169 +0000 UTC m=+151.247926292"
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.937984 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 09:08:26 crc kubenswrapper[4811]: E0122 09:08:26.938669 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:08:27.438654945 +0000 UTC m=+151.760842058 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:08:26 crc kubenswrapper[4811]: I0122 09:08:26.962845 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-776jv"
Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.037728 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-zzx6v" podStartSLOduration=129.037703405 podStartE2EDuration="2m9.037703405s" podCreationTimestamp="2026-01-22 09:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:08:26.978297295 +0000 UTC m=+151.300484418" watchObservedRunningTime="2026-01-22 09:08:27.037703405 +0000 UTC m=+151.359890528"
Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.042509 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv"
Jan 22 09:08:27 crc kubenswrapper[4811]: E0122 09:08:27.042943 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:08:27.542929841 +0000 UTC m=+151.865116964 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nczwv" (UID: "73533561-14fb-4481-872e-1b47096f9d30") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.096784 4811 csr.go:261] certificate signing request csr-56pkb is approved, waiting to be issued
Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.129011 4811 csr.go:257] certificate signing request csr-56pkb is issued
Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.143216 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 09:08:27 crc kubenswrapper[4811]: E0122 09:08:27.144012 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:08:27.643990956 +0000 UTC m=+151.966178079 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.245400 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv"
Jan 22 09:08:27 crc kubenswrapper[4811]: E0122 09:08:27.245862 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:08:27.745851299 +0000 UTC m=+152.068038421 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nczwv" (UID: "73533561-14fb-4481-872e-1b47096f9d30") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.270889 4811 patch_prober.go:28] interesting pod/router-default-5444994796-l8phl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 22 09:08:27 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld
Jan 22 09:08:27 crc kubenswrapper[4811]: [+]process-running ok
Jan 22 09:08:27 crc kubenswrapper[4811]: healthz check failed
Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.270924 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l8phl" podUID="2716a2dc-25e2-4a62-8264-41d299b3cd55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.346499 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 09:08:27 crc kubenswrapper[4811]: E0122 09:08:27.346856 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:08:27.846844195 +0000 UTC m=+152.169031317 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.448803 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv"
Jan 22 09:08:27 crc kubenswrapper[4811]: E0122 09:08:27.449169 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:08:27.949150717 +0000 UTC m=+152.271337840 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nczwv" (UID: "73533561-14fb-4481-872e-1b47096f9d30") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.461357 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-274vf"
Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.551159 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 09:08:27 crc kubenswrapper[4811]: E0122 09:08:27.552112 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:08:28.052095065 +0000 UTC m=+152.374282187 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.654490 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv"
Jan 22 09:08:27 crc kubenswrapper[4811]: E0122 09:08:27.654987 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:08:28.154974267 +0000 UTC m=+152.477161391 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nczwv" (UID: "73533561-14fb-4481-872e-1b47096f9d30") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.679148 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5w8tw" event={"ID":"c1be96c2-9c8f-4c6d-8dff-02e0898a963b","Type":"ContainerStarted","Data":"372ef7210e0cafce8fd88d26140ddbe9ca987e094842d31496f54845b38badd0"}
Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.679713 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5w8tw"
Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.700908 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5w8tw" podStartSLOduration=129.700898253 podStartE2EDuration="2m9.700898253s" podCreationTimestamp="2026-01-22 09:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:08:27.69878667 +0000 UTC m=+152.020973794" watchObservedRunningTime="2026-01-22 09:08:27.700898253 +0000 UTC m=+152.023085376"
Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.711697 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-rx42r" event={"ID":"8a9d91fa-d887-4128-af43-cfe3cad79784","Type":"ContainerStarted","Data":"d648319ca56fa01db405273ab6ce163c881df7ca97d0f47a38db6c24bf6c4b13"}
Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.734211 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-v2nnf" event={"ID":"88b52253-3840-42b0-aa3c-d8708274dcfa","Type":"ContainerStarted","Data":"28e14d99bde67891bee3af9246b27a391dbdd2aaebec4040c233052956085e26"}
Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.734250 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-v2nnf" event={"ID":"88b52253-3840-42b0-aa3c-d8708274dcfa","Type":"ContainerStarted","Data":"cab508c5530383323e695698a48e94a44cef8ed86758873a2a62363c20367f7d"}
Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.735461 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" event={"ID":"708ddef5-479e-44ef-a189-c41123a73bbe","Type":"ContainerStarted","Data":"aff357e7d0b9e0d1714f204b08bc9fc9f9f7f007005874fbf29fcf7244e246b1"}
Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.736197 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7"
Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.740158 4811 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-gmwq7 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" start-of-body=
Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.740208 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" podUID="708ddef5-479e-44ef-a189-c41123a73bbe" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused"
Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.756160 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 09:08:27 crc kubenswrapper[4811]: E0122 09:08:27.756550 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:08:28.256536597 +0000 UTC m=+152.578723721 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.759266 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6l9qr" event={"ID":"0d71c966-5dcc-4f11-b21e-8c60ba5b7b57","Type":"ContainerStarted","Data":"071a75a177520957f49721dded8cb4bb52988c5c74a9f914c9c57c456a9cd5c2"}
Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.759310 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6l9qr" event={"ID":"0d71c966-5dcc-4f11-b21e-8c60ba5b7b57","Type":"ContainerStarted","Data":"dc8764379d26a5783fdcb4c5497164981eaa7ce68958cd75e25e2e72ab02a2a7"}
Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.768883 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" podStartSLOduration=130.768873002 podStartE2EDuration="2m10.768873002s" podCreationTimestamp="2026-01-22 09:06:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:08:27.767794347 +0000 UTC m=+152.089981461" watchObservedRunningTime="2026-01-22 09:08:27.768873002 +0000 UTC m=+152.091060125"
Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.770459 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qnxwq" event={"ID":"c47199c0-d608-4ece-913c-b164a4d16f21","Type":"ContainerStarted","Data":"4d1af2190c7832f585e86e0ec7e286ebf4be932ce5efb7b44dac3a77d2a4b797"}
Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.770492 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qnxwq" event={"ID":"c47199c0-d608-4ece-913c-b164a4d16f21","Type":"ContainerStarted","Data":"4803c73f9440291e97651bf9d64ae910f8e5bfe60c9e43c1ab53f4b55ba0da17"}
Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.770873 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qnxwq"
Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.772262 4811 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-qnxwq container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body=
Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.772290 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qnxwq" podUID="c47199c0-d608-4ece-913c-b164a4d16f21" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused"
Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.793818 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9pxnj" event={"ID":"8b7094aa-cc4a-49eb-be77-715a4efbc1d0","Type":"ContainerStarted","Data":"3678aebd4862ee094527e6352986ed25002cda159c83885f4737e0c880198796"}
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9pxnj" event={"ID":"8b7094aa-cc4a-49eb-be77-715a4efbc1d0","Type":"ContainerStarted","Data":"3678aebd4862ee094527e6352986ed25002cda159c83885f4737e0c880198796"} Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.793853 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9pxnj" event={"ID":"8b7094aa-cc4a-49eb-be77-715a4efbc1d0","Type":"ContainerStarted","Data":"4643b7d4056db142d90a31fcd1fba184c525ce90e0636ad233612532cda2d0e4"} Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.803590 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-vrdwc" event={"ID":"1de7ea7e-b219-47b4-9ba9-ef3688eda036","Type":"ContainerStarted","Data":"324f7634c6c98801a87632fa8dc33526702b35e05213826cabf680c74147f39a"} Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.810004 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2jrk9" event={"ID":"0d9fa190-3073-4b2b-a348-f9a8e1f994b2","Type":"ContainerStarted","Data":"06f656af93a84666284b8d33e11018534df603fc4efbba44aacd06dd5a052869"} Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.810036 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2jrk9" event={"ID":"0d9fa190-3073-4b2b-a348-f9a8e1f994b2","Type":"ContainerStarted","Data":"7da5d6f1f861f9c782bedf6281b856dc78a8867280f1d206b65dc3db460a5df3"} Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.822107 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fcb4z" event={"ID":"f382688c-fb9f-4169-b4eb-3466e08dbd7c","Type":"ContainerStarted","Data":"0a10496cc120eb6f036bf0d44b3de2f2b1694030b567b81da747f78bf218a16c"} Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.822174 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fcb4z" event={"ID":"f382688c-fb9f-4169-b4eb-3466e08dbd7c","Type":"ContainerStarted","Data":"e051072487bd6ee0cfbcacbb676d086391a784993c4df7b03531f88f7ddf910e"} Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.822188 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fcb4z" event={"ID":"f382688c-fb9f-4169-b4eb-3466e08dbd7c","Type":"ContainerStarted","Data":"94428c3e2f03d4a7a634dd3f43fad23129b3af6c7ad8e4c7fe79b47201edb244"} Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.828517 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wvhkk" event={"ID":"9f194f7f-c770-471d-8bcd-c2fd613e0b46","Type":"ContainerStarted","Data":"ded45d338fc641c662c8594c23bdcfe661db7482bb6c1d3ee95eb0d35a1b0265"} Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.832207 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-9624f" event={"ID":"e7343a26-0e21-4d61-ad1c-c3c5479a89e1","Type":"ContainerStarted","Data":"31777b97860057a785c36eef8b208cbee11dafd4a27ade1c7741deb5ea5b8a46"} Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.835487 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qnxwq" podStartSLOduration=129.835466846 podStartE2EDuration="2m9.835466846s" podCreationTimestamp="2026-01-22 09:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:08:27.834491939 +0000 UTC m=+152.156679061" watchObservedRunningTime="2026-01-22 09:08:27.835466846 +0000 UTC m=+152.157653960" Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.838160 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-jhptg" event={"ID":"f58eb1d8-bb02-4af7-857c-138518c5bbf2","Type":"ContainerStarted","Data":"5da706283e4a03d22b1e471a06cecfcd8037ec41b3baba8bfd796cc892c8fbdf"} Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.847824 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zwkzn" event={"ID":"42f53a84-4118-4088-9cb5-39a05774839a","Type":"ContainerStarted","Data":"308d03ade7f51a77fa5161086c51273b404f2fa0b4c4e890012d257715527507"} Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.849982 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484540-kbb7d" event={"ID":"80663d92-2281-4a3d-9232-f1fc19873d88","Type":"ContainerStarted","Data":"eb5e4023a072cbe0f3e1a802e0497cf3c121c9291366cc2088cb1628499372b4"} Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.853553 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lzdbl" event={"ID":"c488ec3a-dfdf-46dd-8f3f-1346232394d3","Type":"ContainerStarted","Data":"71ad6aab40de74b0e3b25c719245e87dfc6c863afc1737c94346133c42db014a"} Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.853997 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lzdbl" Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.860256 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.861667 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-78zjg" event={"ID":"cb22b2ae-6c13-482b-b827-5200e2be87ca","Type":"ContainerStarted","Data":"b1db100137ba67773fc580ef1f2d86602c1457a91bf68554c170d6df49d6bdf3"} Jan 22 09:08:27 crc kubenswrapper[4811]: E0122 09:08:27.861906 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:08:28.361893704 +0000 UTC m=+152.684080827 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nczwv" (UID: "73533561-14fb-4481-872e-1b47096f9d30") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.873576 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-9624f" podStartSLOduration=8.873565345 podStartE2EDuration="8.873565345s" podCreationTimestamp="2026-01-22 09:08:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:08:27.873222709 +0000 UTC m=+152.195409832" watchObservedRunningTime="2026-01-22 09:08:27.873565345 +0000 UTC m=+152.195752468" Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.880689 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j9b4b" event={"ID":"c674e914-755f-48a4-97ca-03e1a69a021a","Type":"ContainerStarted","Data":"36939a56a872065a523e0fae064f05e76e0857d78a692291f485a9abafe474ca"} Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.901779 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-6jdmv" event={"ID":"0803a721-b862-4696-a752-e5af589ced0b","Type":"ContainerStarted","Data":"d5cf68099c11d32ce944c5120b4de22e305a0413b58a8d9583e28c691369916b"} Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.906449 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9pxnj" podStartSLOduration=129.906430914 podStartE2EDuration="2m9.906430914s" podCreationTimestamp="2026-01-22 09:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:08:27.903110772 +0000 UTC m=+152.225297895" watchObservedRunningTime="2026-01-22 09:08:27.906430914 +0000 UTC m=+152.228618037" Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.912435 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jjjjp" event={"ID":"387d7c1a-1589-4377-9566-a45d8f498f38","Type":"ContainerStarted","Data":"b8bb8164f09549a5336c09008515b1dfa07304be27ff3c82c9bf567a0b0e8bef"} Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.912787 4811 patch_prober.go:28] interesting pod/downloads-7954f5f757-hs4pm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.912821 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hs4pm" podUID="117d7039-2cd9-4ee9-9272-923cd05c3565" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.914089 4811 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zzx6v container/marketplace-operator namespace/openshift-marketplace: Readiness 
probe status=failure output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.914116 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zzx6v" podUID="30317a78-afdc-4c04-95b6-d2c8fedfb790" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.930092 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-776jv" Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.933440 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wcwwm" Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.961937 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:08:27 crc kubenswrapper[4811]: E0122 09:08:27.963084 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:08:28.463067211 +0000 UTC m=+152.785254334 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:27 crc kubenswrapper[4811]: I0122 09:08:27.980284 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fcb4z" podStartSLOduration=129.980267338 podStartE2EDuration="2m9.980267338s" podCreationTimestamp="2026-01-22 09:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:08:27.946793781 +0000 UTC m=+152.268980905" watchObservedRunningTime="2026-01-22 09:08:27.980267338 +0000 UTC m=+152.302454461" Jan 22 09:08:28 crc kubenswrapper[4811]: I0122 09:08:28.036355 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-6jdmv" podStartSLOduration=130.036332407 podStartE2EDuration="2m10.036332407s" podCreationTimestamp="2026-01-22 09:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:08:28.035201094 +0000 UTC m=+152.357388217" watchObservedRunningTime="2026-01-22 09:08:28.036332407 +0000 UTC m=+152.358519530" Jan 22 09:08:28 crc kubenswrapper[4811]: I0122 09:08:28.067678 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:28 crc kubenswrapper[4811]: E0122 09:08:28.069674 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:08:28.56966096 +0000 UTC m=+152.891848083 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nczwv" (UID: "73533561-14fb-4481-872e-1b47096f9d30") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:28 crc kubenswrapper[4811]: I0122 09:08:28.094987 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-j9b4b" podStartSLOduration=130.094974175 podStartE2EDuration="2m10.094974175s" podCreationTimestamp="2026-01-22 09:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:08:28.063320873 +0000 UTC m=+152.385507996" watchObservedRunningTime="2026-01-22 09:08:28.094974175 +0000 UTC m=+152.417161299" Jan 22 09:08:28 crc kubenswrapper[4811]: I0122 09:08:28.130830 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-22 09:03:27 +0000 UTC, rotation deadline is 2026-12-15 03:45:55.907225236 +0000 UTC Jan 22 09:08:28 crc kubenswrapper[4811]: I0122 09:08:28.130893 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7842h37m27.776334628s for next certificate rotation Jan 22 09:08:28 crc kubenswrapper[4811]: I0122 09:08:28.171278 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:08:28 crc kubenswrapper[4811]: E0122 09:08:28.171700 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:08:28.671684589 +0000 UTC m=+152.993871712 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:28 crc kubenswrapper[4811]: I0122 09:08:28.179085 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lzdbl" podStartSLOduration=130.179074125 podStartE2EDuration="2m10.179074125s" podCreationTimestamp="2026-01-22 09:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:08:28.177945376 +0000 UTC m=+152.500132519" watchObservedRunningTime="2026-01-22 09:08:28.179074125 +0000 UTC m=+152.501261248" Jan 22 09:08:28 crc kubenswrapper[4811]: I0122 09:08:28.266000 4811 patch_prober.go:28] interesting pod/router-default-5444994796-l8phl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 09:08:28 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Jan 22 09:08:28 crc kubenswrapper[4811]: [+]process-running ok Jan 22 09:08:28 crc kubenswrapper[4811]: healthz check failed Jan 22 09:08:28 crc kubenswrapper[4811]: I0122 09:08:28.266048 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l8phl" podUID="2716a2dc-25e2-4a62-8264-41d299b3cd55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 09:08:28 crc kubenswrapper[4811]: I0122 09:08:28.273185 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:28 crc kubenswrapper[4811]: E0122 09:08:28.273529 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:08:28.773515516 +0000 UTC m=+153.095702639 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nczwv" (UID: "73533561-14fb-4481-872e-1b47096f9d30") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:28 crc kubenswrapper[4811]: I0122 09:08:28.374188 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:08:28 crc kubenswrapper[4811]: E0122 09:08:28.374650 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:08:28.874612378 +0000 UTC m=+153.196799501 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:28 crc kubenswrapper[4811]: I0122 09:08:28.476277 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:28 crc kubenswrapper[4811]: E0122 09:08:28.476608 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:08:28.976595001 +0000 UTC m=+153.298782154 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nczwv" (UID: "73533561-14fb-4481-872e-1b47096f9d30") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:28 crc kubenswrapper[4811]: I0122 09:08:28.577235 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:08:28 crc kubenswrapper[4811]: E0122 09:08:28.577662 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:08:29.077643523 +0000 UTC m=+153.399830645 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:28 crc kubenswrapper[4811]: I0122 09:08:28.678560 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:28 crc kubenswrapper[4811]: E0122 09:08:28.678990 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:08:29.178975498 +0000 UTC m=+153.501162620 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nczwv" (UID: "73533561-14fb-4481-872e-1b47096f9d30") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:28 crc kubenswrapper[4811]: I0122 09:08:28.779800 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:08:28 crc kubenswrapper[4811]: E0122 09:08:28.780253 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:08:29.28022607 +0000 UTC m=+153.602413183 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:28 crc kubenswrapper[4811]: I0122 09:08:28.854275 4811 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-lzdbl container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 09:08:28 crc kubenswrapper[4811]: I0122 09:08:28.854347 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lzdbl" podUID="c488ec3a-dfdf-46dd-8f3f-1346232394d3" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 22 09:08:28 crc kubenswrapper[4811]: I0122 09:08:28.881358 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:28 crc kubenswrapper[4811]: E0122 09:08:28.881788 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:08:29.381770515 +0000 UTC m=+153.703957639 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nczwv" (UID: "73533561-14fb-4481-872e-1b47096f9d30") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:28 crc kubenswrapper[4811]: I0122 09:08:28.926129 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zwkzn" event={"ID":"42f53a84-4118-4088-9cb5-39a05774839a","Type":"ContainerStarted","Data":"78f8a201943d827b80a5b523ff29d28036c91aca01d903d44f693c7dffb8978b"} Jan 22 09:08:28 crc kubenswrapper[4811]: I0122 09:08:28.926189 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zwkzn" event={"ID":"42f53a84-4118-4088-9cb5-39a05774839a","Type":"ContainerStarted","Data":"89f136146ba6412c0dafa74d5d7a9fb80f67846ab182da9137eca6c0d4981be6"} Jan 22 09:08:28 crc kubenswrapper[4811]: I0122 09:08:28.929157 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2jrk9" event={"ID":"0d9fa190-3073-4b2b-a348-f9a8e1f994b2","Type":"ContainerStarted","Data":"2853732c58c35db0809580881d3e786644fe1558a7679dbfde322d620dc715f6"} Jan 22 09:08:28 crc kubenswrapper[4811]: I0122 09:08:28.931275 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6l9qr" event={"ID":"0d71c966-5dcc-4f11-b21e-8c60ba5b7b57","Type":"ContainerStarted","Data":"4598b995315d0a88aba888639e4c94f2229f1237be3868474d7eaab0522c87ea"} Jan 22 09:08:28 crc kubenswrapper[4811]: I0122 09:08:28.933163 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-rx42r" event={"ID":"8a9d91fa-d887-4128-af43-cfe3cad79784","Type":"ContainerStarted","Data":"84f30dfe21a54f4b8c2a20ce00b3fcc6c218beedc3bc8ccb44f7050b370de65c"} Jan 22 09:08:28 crc kubenswrapper[4811]: I0122 09:08:28.933200 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-rx42r" event={"ID":"8a9d91fa-d887-4128-af43-cfe3cad79784","Type":"ContainerStarted","Data":"f82cb19901712946ef8e8a7f2cb3027e760cf225ac710ca8a9371b396982f8a8"} Jan 22 09:08:28 crc kubenswrapper[4811]: I0122 09:08:28.935886 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-v2nnf" event={"ID":"88b52253-3840-42b0-aa3c-d8708274dcfa","Type":"ContainerStarted","Data":"f06590fa66cdd1883b6eb37fe85eaf4c26b28433b6bfe0f110113f674a2d0d20"} Jan 22 09:08:28 crc kubenswrapper[4811]: I0122 09:08:28.936255 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-v2nnf" Jan 22 09:08:28 crc kubenswrapper[4811]: I0122 09:08:28.937998 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-vrdwc" event={"ID":"1de7ea7e-b219-47b4-9ba9-ef3688eda036","Type":"ContainerStarted","Data":"fbae6a7434683278509171775dcc2f149e3940389d38aa9f24d368120993b8aa"} Jan 22 09:08:28 crc kubenswrapper[4811]: I0122 09:08:28.939559 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-6jdmv" 
event={"ID":"0803a721-b862-4696-a752-e5af589ced0b","Type":"ContainerStarted","Data":"5f53f952a01efb8788062b01a0895936603a61e51789e6d9f279f99c8e759378"} Jan 22 09:08:28 crc kubenswrapper[4811]: I0122 09:08:28.941583 4811 generic.go:334] "Generic (PLEG): container finished" podID="387d7c1a-1589-4377-9566-a45d8f498f38" containerID="03b68bfcc5c9c54f0911f458c381e9ea444119b5aae4eee929c662547716a2eb" exitCode=0 Jan 22 09:08:28 crc kubenswrapper[4811]: I0122 09:08:28.941655 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jjjjp" event={"ID":"387d7c1a-1589-4377-9566-a45d8f498f38","Type":"ContainerDied","Data":"03b68bfcc5c9c54f0911f458c381e9ea444119b5aae4eee929c662547716a2eb"} Jan 22 09:08:28 crc kubenswrapper[4811]: I0122 09:08:28.941675 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jjjjp" event={"ID":"387d7c1a-1589-4377-9566-a45d8f498f38","Type":"ContainerStarted","Data":"c85324ee0d5494f7e48a52a9519dbc74660f657d099ba19a1b533087db9faff7"} Jan 22 09:08:28 crc kubenswrapper[4811]: I0122 09:08:28.941684 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jjjjp" event={"ID":"387d7c1a-1589-4377-9566-a45d8f498f38","Type":"ContainerStarted","Data":"720833ea5f3e6033288ac3b6443e4cb4f1bfb67c97bd59c0e110350e2c7fecd9"} Jan 22 09:08:28 crc kubenswrapper[4811]: I0122 09:08:28.943333 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484540-kbb7d" event={"ID":"80663d92-2281-4a3d-9232-f1fc19873d88","Type":"ContainerStarted","Data":"98fd9e69518f482da1a8e15c9a11f5948330e7204ac69c0cafecb09c3a98aa73"} Jan 22 09:08:28 crc kubenswrapper[4811]: I0122 09:08:28.945757 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-78zjg" event={"ID":"cb22b2ae-6c13-482b-b827-5200e2be87ca","Type":"ContainerStarted","Data":"735d3936eb4a587723211a03a2a628c2a1c37ab4809d882b7df56d4b43a69f48"} Jan 22 09:08:28 crc kubenswrapper[4811]: I0122 09:08:28.954711 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-zzx6v" Jan 22 09:08:28 crc kubenswrapper[4811]: I0122 09:08:28.960065 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" Jan 22 09:08:28 crc kubenswrapper[4811]: I0122 09:08:28.962953 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qnxwq" Jan 22 09:08:28 crc kubenswrapper[4811]: I0122 09:08:28.979382 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zwkzn" podStartSLOduration=130.979362428 podStartE2EDuration="2m10.979362428s" podCreationTimestamp="2026-01-22 09:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:08:28.977713258 +0000 UTC m=+153.299900381" watchObservedRunningTime="2026-01-22 09:08:28.979362428 +0000 UTC m=+153.301549552" Jan 22 09:08:28 crc kubenswrapper[4811]: I0122 09:08:28.979799 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29484540-kbb7d" podStartSLOduration=130.97979322 podStartE2EDuration="2m10.97979322s" 
podCreationTimestamp="2026-01-22 09:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:08:28.216103005 +0000 UTC m=+152.538290129" watchObservedRunningTime="2026-01-22 09:08:28.97979322 +0000 UTC m=+153.301980343" Jan 22 09:08:28 crc kubenswrapper[4811]: I0122 09:08:28.993224 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:08:28 crc kubenswrapper[4811]: E0122 09:08:28.993339 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:08:29.493319589 +0000 UTC m=+153.815506712 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:28 crc kubenswrapper[4811]: I0122 09:08:28.994049 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:28 crc kubenswrapper[4811]: E0122 09:08:28.997531 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:08:29.497517556 +0000 UTC m=+153.819704679 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nczwv" (UID: "73533561-14fb-4481-872e-1b47096f9d30") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:29 crc kubenswrapper[4811]: I0122 09:08:29.086370 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-rx42r" podStartSLOduration=131.086350831 podStartE2EDuration="2m11.086350831s" podCreationTimestamp="2026-01-22 09:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:08:29.063429575 +0000 UTC m=+153.385616699" watchObservedRunningTime="2026-01-22 09:08:29.086350831 +0000 UTC m=+153.408537954" Jan 22 09:08:29 crc kubenswrapper[4811]: I0122 09:08:29.095831 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:08:29 crc kubenswrapper[4811]: E0122 09:08:29.096329 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:08:29.596306856 +0000 UTC m=+153.918493980 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:29 crc kubenswrapper[4811]: I0122 09:08:29.096614 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:29 crc kubenswrapper[4811]: E0122 09:08:29.101873 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:08:29.601858215 +0000 UTC m=+153.924045338 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nczwv" (UID: "73533561-14fb-4481-872e-1b47096f9d30") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:29 crc kubenswrapper[4811]: I0122 09:08:29.134518 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-v2nnf" podStartSLOduration=10.134495414 podStartE2EDuration="10.134495414s" podCreationTimestamp="2026-01-22 09:08:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:08:29.133430005 +0000 UTC m=+153.455617127" watchObservedRunningTime="2026-01-22 09:08:29.134495414 +0000 UTC m=+153.456682527" Jan 22 09:08:29 crc kubenswrapper[4811]: I0122 09:08:29.189227 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2jrk9" podStartSLOduration=131.189200559 podStartE2EDuration="2m11.189200559s" podCreationTimestamp="2026-01-22 09:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:08:29.188099393 +0000 UTC m=+153.510286516" watchObservedRunningTime="2026-01-22 09:08:29.189200559 +0000 UTC m=+153.511387682" Jan 22 09:08:29 crc kubenswrapper[4811]: I0122 09:08:29.197560 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:08:29 crc kubenswrapper[4811]: E0122 09:08:29.197734 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:08:29.697712882 +0000 UTC m=+154.019900005 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:29 crc kubenswrapper[4811]: I0122 09:08:29.197955 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:29 crc kubenswrapper[4811]: E0122 09:08:29.198282 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:08:29.698274451 +0000 UTC m=+154.020461574 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nczwv" (UID: "73533561-14fb-4481-872e-1b47096f9d30") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:29 crc kubenswrapper[4811]: I0122 09:08:29.236298 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-jjjjp" podStartSLOduration=132.236277438 podStartE2EDuration="2m12.236277438s" podCreationTimestamp="2026-01-22 09:06:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:08:29.236222063 +0000 UTC m=+153.558409187" watchObservedRunningTime="2026-01-22 09:08:29.236277438 +0000 UTC m=+153.558464561" Jan 22 09:08:29 crc kubenswrapper[4811]: I0122 09:08:29.266051 4811 patch_prober.go:28] interesting pod/router-default-5444994796-l8phl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 09:08:29 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Jan 22 09:08:29 crc kubenswrapper[4811]: [+]process-running ok Jan 22 09:08:29 crc kubenswrapper[4811]: healthz check failed Jan 22 09:08:29 crc kubenswrapper[4811]: I0122 09:08:29.266126 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l8phl" podUID="2716a2dc-25e2-4a62-8264-41d299b3cd55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 09:08:29 crc kubenswrapper[4811]: I0122 09:08:29.299351 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:08:29 crc 
kubenswrapper[4811]: E0122 09:08:29.299733 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:08:29.799716434 +0000 UTC m=+154.121903556 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:29 crc kubenswrapper[4811]: I0122 09:08:29.335359 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-78zjg" podStartSLOduration=131.335340084 podStartE2EDuration="2m11.335340084s" podCreationTimestamp="2026-01-22 09:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:08:29.313898178 +0000 UTC m=+153.636085301" watchObservedRunningTime="2026-01-22 09:08:29.335340084 +0000 UTC m=+153.657527207" Jan 22 09:08:29 crc kubenswrapper[4811]: I0122 09:08:29.336073 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6l9qr" podStartSLOduration=132.336068778 podStartE2EDuration="2m12.336068778s" podCreationTimestamp="2026-01-22 09:06:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:08:29.333566119 +0000 UTC m=+153.655753242" watchObservedRunningTime="2026-01-22 09:08:29.336068778 +0000 UTC m=+153.658255892" Jan 22 09:08:29 crc kubenswrapper[4811]: I0122 09:08:29.400777 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:29 crc kubenswrapper[4811]: E0122 09:08:29.401080 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:08:29.901068297 +0000 UTC m=+154.223255420 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nczwv" (UID: "73533561-14fb-4481-872e-1b47096f9d30") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:29 crc kubenswrapper[4811]: I0122 09:08:29.501492 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:08:29 crc kubenswrapper[4811]: E0122 09:08:29.501913 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:08:30.001899097 +0000 UTC m=+154.324086220 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:29 crc kubenswrapper[4811]: I0122 09:08:29.528348 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lzdbl" Jan 22 09:08:29 crc kubenswrapper[4811]: I0122 09:08:29.598461 4811 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 22 09:08:29 crc kubenswrapper[4811]: I0122 09:08:29.603003 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:29 crc kubenswrapper[4811]: E0122 09:08:29.603345 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:08:30.10333031 +0000 UTC m=+154.425517433 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nczwv" (UID: "73533561-14fb-4481-872e-1b47096f9d30") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:29 crc kubenswrapper[4811]: I0122 09:08:29.704595 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:08:29 crc kubenswrapper[4811]: E0122 09:08:29.705063 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:08:30.205049135 +0000 UTC m=+154.527236258 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:29 crc kubenswrapper[4811]: I0122 09:08:29.806103 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:29 crc kubenswrapper[4811]: E0122 09:08:29.806454 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:08:30.306444219 +0000 UTC m=+154.628631343 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nczwv" (UID: "73533561-14fb-4481-872e-1b47096f9d30") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:29 crc kubenswrapper[4811]: I0122 09:08:29.906976 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:08:29 crc kubenswrapper[4811]: E0122 09:08:29.907128 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:08:30.407108346 +0000 UTC m=+154.729295469 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:29 crc kubenswrapper[4811]: I0122 09:08:29.907283 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:29 crc kubenswrapper[4811]: E0122 09:08:29.907576 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:08:30.407568454 +0000 UTC m=+154.729755577 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nczwv" (UID: "73533561-14fb-4481-872e-1b47096f9d30") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:29 crc kubenswrapper[4811]: I0122 09:08:29.954178 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-vrdwc" event={"ID":"1de7ea7e-b219-47b4-9ba9-ef3688eda036","Type":"ContainerStarted","Data":"fb3a2132449414b499c5f48853f8d8d09def8fe3a8dec96dae789c0e41ca4e92"} Jan 22 09:08:29 crc kubenswrapper[4811]: I0122 09:08:29.954214 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-vrdwc" event={"ID":"1de7ea7e-b219-47b4-9ba9-ef3688eda036","Type":"ContainerStarted","Data":"7c086d854b31fdf54a5d628437356a78012cfff3d3558e19b349da3713f50471"} Jan 22 09:08:29 crc kubenswrapper[4811]: I0122 09:08:29.973694 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-vrdwc" podStartSLOduration=10.973683526 podStartE2EDuration="10.973683526s" podCreationTimestamp="2026-01-22 09:08:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:08:29.971440456 +0000 UTC m=+154.293627580" watchObservedRunningTime="2026-01-22 09:08:29.973683526 +0000 UTC m=+154.295870649" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.007915 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:08:30 crc kubenswrapper[4811]: E0122 09:08:30.008224 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 09:08:30.508211831 +0000 UTC m=+154.830398954 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.052841 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-l7qd2"] Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.053677 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l7qd2" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.055407 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.064728 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l7qd2"] Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.109703 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:30 crc kubenswrapper[4811]: E0122 09:08:30.112492 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 09:08:30.612469114 +0000 UTC m=+154.934656236 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nczwv" (UID: "73533561-14fb-4481-872e-1b47096f9d30") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.146141 4811 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-22T09:08:29.59872239Z","Handler":null,"Name":""} Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.180494 4811 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.180553 4811 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.210615 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.210888 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89c58cc2-8741-4c9f-95fa-c73db10026d3-utilities\") pod \"community-operators-l7qd2\" (UID: \"89c58cc2-8741-4c9f-95fa-c73db10026d3\") " pod="openshift-marketplace/community-operators-l7qd2" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.210932 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/89c58cc2-8741-4c9f-95fa-c73db10026d3-catalog-content\") pod \"community-operators-l7qd2\" (UID: \"89c58cc2-8741-4c9f-95fa-c73db10026d3\") " pod="openshift-marketplace/community-operators-l7qd2" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.210970 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lczdl\" (UniqueName: \"kubernetes.io/projected/89c58cc2-8741-4c9f-95fa-c73db10026d3-kube-api-access-lczdl\") pod \"community-operators-l7qd2\" (UID: \"89c58cc2-8741-4c9f-95fa-c73db10026d3\") " pod="openshift-marketplace/community-operators-l7qd2" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.238735 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.256939 4811 patch_prober.go:28] interesting pod/router-default-5444994796-l8phl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 09:08:30 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Jan 22 09:08:30 crc kubenswrapper[4811]: [+]process-running ok Jan 22 09:08:30 crc kubenswrapper[4811]: healthz check failed Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.256990 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l8phl" podUID="2716a2dc-25e2-4a62-8264-41d299b3cd55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.272857 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-68p7c"] Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.273851 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-68p7c" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.278457 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.312687 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89c58cc2-8741-4c9f-95fa-c73db10026d3-utilities\") pod \"community-operators-l7qd2\" (UID: \"89c58cc2-8741-4c9f-95fa-c73db10026d3\") " pod="openshift-marketplace/community-operators-l7qd2" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.312734 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.312756 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89c58cc2-8741-4c9f-95fa-c73db10026d3-catalog-content\") pod \"community-operators-l7qd2\" (UID: \"89c58cc2-8741-4c9f-95fa-c73db10026d3\") " pod="openshift-marketplace/community-operators-l7qd2" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.312789 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lczdl\" (UniqueName: \"kubernetes.io/projected/89c58cc2-8741-4c9f-95fa-c73db10026d3-kube-api-access-lczdl\") pod \"community-operators-l7qd2\" (UID: \"89c58cc2-8741-4c9f-95fa-c73db10026d3\") " pod="openshift-marketplace/community-operators-l7qd2" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.313339 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89c58cc2-8741-4c9f-95fa-c73db10026d3-catalog-content\") pod \"community-operators-l7qd2\" (UID: \"89c58cc2-8741-4c9f-95fa-c73db10026d3\") " pod="openshift-marketplace/community-operators-l7qd2" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.313562 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89c58cc2-8741-4c9f-95fa-c73db10026d3-utilities\") pod \"community-operators-l7qd2\" (UID: \"89c58cc2-8741-4c9f-95fa-c73db10026d3\") " pod="openshift-marketplace/community-operators-l7qd2" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.316835 4811 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
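[Editor's note — annotation, not journal output] The burst of MountVolume.MountDevice and UnmountVolume.TearDown failures above is a startup ordering race, not a persistent fault: the image-registry PVC (pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8) references the kubevirt.io.hostpath-provisioner CSI driver, but that driver's node plugin (hostpath-provisioner/csi-hostpathplugin-vrdwc) only starts its containers at 09:08:28–09:08:29. Until the plugin watcher picks up the registration socket (09:08:29.598) and the kubelet validates and registers the driver at /var/lib/kubelet/plugins/csi-hostpath/csi.sock (09:08:30.180), every volume operation fails with "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers", and nestedpendingoperations re-queues it on a fixed 500ms backoff ("durationBeforeRetry 500ms"). The sketch below illustrates that retry-until-registered pattern under stated assumptions: the registry type, its methods, and the simulated timings are invented for this note and are not the kubelet's actual implementation.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// csiDriverRegistry stands in for the kubelet's set of registered CSI
// plugins. The type and method names are hypothetical, for illustration.
type csiDriverRegistry struct {
	mu      sync.RWMutex
	drivers map[string]string // driver name -> endpoint socket path
}

func (r *csiDriverRegistry) register(name, endpoint string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.drivers[name] = endpoint
}

func (r *csiDriverRegistry) lookup(name string) (string, error) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	ep, ok := r.drivers[name]
	if !ok {
		// Same failure the journal shows on every attempt before registration.
		return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
	}
	return ep, nil
}

func main() {
	reg := &csiDriverRegistry{drivers: map[string]string{}}

	// Simulate the plugin watcher registering the driver a couple of seconds
	// in, as the journal shows between 09:08:29.598 and 09:08:30.180.
	go func() {
		time.Sleep(2200 * time.Millisecond)
		reg.register("kubevirt.io.hostpath-provisioner",
			"/var/lib/kubelet/plugins/csi-hostpath/csi.sock")
	}()

	const backoff = 500 * time.Millisecond // matches "durationBeforeRetry 500ms"
	for attempt := 1; ; attempt++ {
		ep, err := reg.lookup("kubevirt.io.hostpath-provisioner")
		if err != nil {
			fmt.Printf("attempt %d: MountDevice failed: %v; retrying in %v\n", attempt, err, backoff)
			time.Sleep(backoff)
			continue
		}
		fmt.Printf("attempt %d: MountDevice succeeded via %s\n", attempt, ep)
		break
	}
}
```

Once registration lands, the journal resolves in order: the pending TearDown for pod 8f668bae-612b-4b75-9490-919e737c6a3b succeeds just above at 09:08:30.238, and because the driver does not advertise the STAGE_UNSTAGE_VOLUME capability the attacher skips the staging step, so MountDevice for the image-registry pod succeeds in the very next record.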
Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.316861 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.349578 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lczdl\" (UniqueName: \"kubernetes.io/projected/89c58cc2-8741-4c9f-95fa-c73db10026d3-kube-api-access-lczdl\") pod \"community-operators-l7qd2\" (UID: \"89c58cc2-8741-4c9f-95fa-c73db10026d3\") " pod="openshift-marketplace/community-operators-l7qd2" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.365338 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l7qd2" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.414783 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d-utilities\") pod \"certified-operators-68p7c\" (UID: \"7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d\") " pod="openshift-marketplace/certified-operators-68p7c" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.415018 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v78kd\" (UniqueName: \"kubernetes.io/projected/7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d-kube-api-access-v78kd\") pod \"certified-operators-68p7c\" (UID: \"7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d\") " pod="openshift-marketplace/certified-operators-68p7c" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.415150 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d-catalog-content\") pod \"certified-operators-68p7c\" (UID: \"7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d\") " pod="openshift-marketplace/certified-operators-68p7c" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.435294 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-68p7c"] Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.473542 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-b2fpc"] Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.474698 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b2fpc" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.494722 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b2fpc"] Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.516942 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v78kd\" (UniqueName: \"kubernetes.io/projected/7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d-kube-api-access-v78kd\") pod \"certified-operators-68p7c\" (UID: \"7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d\") " pod="openshift-marketplace/certified-operators-68p7c" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.517024 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d-catalog-content\") pod \"certified-operators-68p7c\" (UID: \"7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d\") " pod="openshift-marketplace/certified-operators-68p7c" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.517076 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d-utilities\") pod \"certified-operators-68p7c\" (UID: \"7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d\") " pod="openshift-marketplace/certified-operators-68p7c" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.517455 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d-utilities\") pod \"certified-operators-68p7c\" (UID: \"7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d\") " pod="openshift-marketplace/certified-operators-68p7c" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.517860 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d-catalog-content\") pod \"certified-operators-68p7c\" (UID: \"7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d\") " pod="openshift-marketplace/certified-operators-68p7c" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.544805 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v78kd\" (UniqueName: \"kubernetes.io/projected/7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d-kube-api-access-v78kd\") pod \"certified-operators-68p7c\" (UID: \"7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d\") " pod="openshift-marketplace/certified-operators-68p7c" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.570606 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nczwv\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.584238 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-68p7c" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.642993 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83b7219e-ce69-408d-92ad-7b58cc6d0b7a-catalog-content\") pod \"community-operators-b2fpc\" (UID: \"83b7219e-ce69-408d-92ad-7b58cc6d0b7a\") " pod="openshift-marketplace/community-operators-b2fpc" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.643071 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83b7219e-ce69-408d-92ad-7b58cc6d0b7a-utilities\") pod \"community-operators-b2fpc\" (UID: \"83b7219e-ce69-408d-92ad-7b58cc6d0b7a\") " pod="openshift-marketplace/community-operators-b2fpc" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.643102 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbdwk\" (UniqueName: \"kubernetes.io/projected/83b7219e-ce69-408d-92ad-7b58cc6d0b7a-kube-api-access-jbdwk\") pod \"community-operators-b2fpc\" (UID: \"83b7219e-ce69-408d-92ad-7b58cc6d0b7a\") " pod="openshift-marketplace/community-operators-b2fpc" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.651348 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-m45zk"] Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.652323 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m45zk" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.679470 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m45zk"] Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.717200 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.744293 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83b7219e-ce69-408d-92ad-7b58cc6d0b7a-utilities\") pod \"community-operators-b2fpc\" (UID: \"83b7219e-ce69-408d-92ad-7b58cc6d0b7a\") " pod="openshift-marketplace/community-operators-b2fpc" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.744347 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad-catalog-content\") pod \"certified-operators-m45zk\" (UID: \"bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad\") " pod="openshift-marketplace/certified-operators-m45zk" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.744370 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbdwk\" (UniqueName: \"kubernetes.io/projected/83b7219e-ce69-408d-92ad-7b58cc6d0b7a-kube-api-access-jbdwk\") pod \"community-operators-b2fpc\" (UID: \"83b7219e-ce69-408d-92ad-7b58cc6d0b7a\") " pod="openshift-marketplace/community-operators-b2fpc" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.744413 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5q72\" (UniqueName: \"kubernetes.io/projected/bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad-kube-api-access-l5q72\") pod \"certified-operators-m45zk\" (UID: \"bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad\") " pod="openshift-marketplace/certified-operators-m45zk" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.744438 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83b7219e-ce69-408d-92ad-7b58cc6d0b7a-catalog-content\") pod \"community-operators-b2fpc\" (UID: \"83b7219e-ce69-408d-92ad-7b58cc6d0b7a\") " pod="openshift-marketplace/community-operators-b2fpc" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.744465 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad-utilities\") pod \"certified-operators-m45zk\" (UID: \"bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad\") " pod="openshift-marketplace/certified-operators-m45zk" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.744898 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83b7219e-ce69-408d-92ad-7b58cc6d0b7a-utilities\") pod \"community-operators-b2fpc\" (UID: \"83b7219e-ce69-408d-92ad-7b58cc6d0b7a\") " pod="openshift-marketplace/community-operators-b2fpc" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.745366 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83b7219e-ce69-408d-92ad-7b58cc6d0b7a-catalog-content\") pod \"community-operators-b2fpc\" (UID: \"83b7219e-ce69-408d-92ad-7b58cc6d0b7a\") " pod="openshift-marketplace/community-operators-b2fpc" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.775613 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbdwk\" (UniqueName: 
\"kubernetes.io/projected/83b7219e-ce69-408d-92ad-7b58cc6d0b7a-kube-api-access-jbdwk\") pod \"community-operators-b2fpc\" (UID: \"83b7219e-ce69-408d-92ad-7b58cc6d0b7a\") " pod="openshift-marketplace/community-operators-b2fpc" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.785737 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b2fpc" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.846214 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad-catalog-content\") pod \"certified-operators-m45zk\" (UID: \"bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad\") " pod="openshift-marketplace/certified-operators-m45zk" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.846285 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5q72\" (UniqueName: \"kubernetes.io/projected/bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad-kube-api-access-l5q72\") pod \"certified-operators-m45zk\" (UID: \"bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad\") " pod="openshift-marketplace/certified-operators-m45zk" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.846324 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad-utilities\") pod \"certified-operators-m45zk\" (UID: \"bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad\") " pod="openshift-marketplace/certified-operators-m45zk" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.846708 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad-utilities\") pod \"certified-operators-m45zk\" (UID: \"bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad\") " pod="openshift-marketplace/certified-operators-m45zk" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.846920 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad-catalog-content\") pod \"certified-operators-m45zk\" (UID: \"bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad\") " pod="openshift-marketplace/certified-operators-m45zk" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.874092 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5q72\" (UniqueName: \"kubernetes.io/projected/bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad-kube-api-access-l5q72\") pod \"certified-operators-m45zk\" (UID: \"bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad\") " pod="openshift-marketplace/certified-operators-m45zk" Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.888728 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l7qd2"] Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.977113 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l7qd2" event={"ID":"89c58cc2-8741-4c9f-95fa-c73db10026d3","Type":"ContainerStarted","Data":"7e6f6e29896ec06ef96ad4813f48636184a16baff07a7883108877da644cc914"} Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.988420 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-68p7c"] Jan 22 09:08:30 crc kubenswrapper[4811]: I0122 09:08:30.996083 4811 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/certified-operators-m45zk" Jan 22 09:08:31 crc kubenswrapper[4811]: I0122 09:08:31.103492 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nczwv"] Jan 22 09:08:31 crc kubenswrapper[4811]: I0122 09:08:31.237858 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b2fpc"] Jan 22 09:08:31 crc kubenswrapper[4811]: I0122 09:08:31.266699 4811 patch_prober.go:28] interesting pod/router-default-5444994796-l8phl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 09:08:31 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Jan 22 09:08:31 crc kubenswrapper[4811]: [+]process-running ok Jan 22 09:08:31 crc kubenswrapper[4811]: healthz check failed Jan 22 09:08:31 crc kubenswrapper[4811]: I0122 09:08:31.266756 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l8phl" podUID="2716a2dc-25e2-4a62-8264-41d299b3cd55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 09:08:31 crc kubenswrapper[4811]: I0122 09:08:31.294439 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m45zk"] Jan 22 09:08:31 crc kubenswrapper[4811]: W0122 09:08:31.303490 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf6a8a20_43c6_48f6_ba8f_585a3e4dd7ad.slice/crio-3cddf5fa3f09e085521df9275c3de6cbc84ae9f7194d69e1c95a70a23c4a9eb1 WatchSource:0}: Error finding container 3cddf5fa3f09e085521df9275c3de6cbc84ae9f7194d69e1c95a70a23c4a9eb1: Status 404 returned error can't find the container with id 3cddf5fa3f09e085521df9275c3de6cbc84ae9f7194d69e1c95a70a23c4a9eb1 Jan 22 09:08:31 crc kubenswrapper[4811]: I0122 09:08:31.985539 4811 generic.go:334] "Generic (PLEG): container finished" podID="89c58cc2-8741-4c9f-95fa-c73db10026d3" containerID="a2454da840dd1fef32f320db4ccf5bfb75eab9c691cafa0ebace0e014218b97a" exitCode=0 Jan 22 09:08:31 crc kubenswrapper[4811]: I0122 09:08:31.985599 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l7qd2" event={"ID":"89c58cc2-8741-4c9f-95fa-c73db10026d3","Type":"ContainerDied","Data":"a2454da840dd1fef32f320db4ccf5bfb75eab9c691cafa0ebace0e014218b97a"} Jan 22 09:08:31 crc kubenswrapper[4811]: I0122 09:08:31.988457 4811 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 09:08:31 crc kubenswrapper[4811]: I0122 09:08:31.988498 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m45zk" event={"ID":"bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad","Type":"ContainerDied","Data":"d34f4ed0fdc8ae3cde89db0500a74822c5397895f1168dc62a61dd24d34b196e"} Jan 22 09:08:31 crc kubenswrapper[4811]: I0122 09:08:31.988453 4811 generic.go:334] "Generic (PLEG): container finished" podID="bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad" containerID="d34f4ed0fdc8ae3cde89db0500a74822c5397895f1168dc62a61dd24d34b196e" exitCode=0 Jan 22 09:08:31 crc kubenswrapper[4811]: I0122 09:08:31.988667 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m45zk" 
event={"ID":"bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad","Type":"ContainerStarted","Data":"3cddf5fa3f09e085521df9275c3de6cbc84ae9f7194d69e1c95a70a23c4a9eb1"} Jan 22 09:08:31 crc kubenswrapper[4811]: I0122 09:08:31.991452 4811 generic.go:334] "Generic (PLEG): container finished" podID="80663d92-2281-4a3d-9232-f1fc19873d88" containerID="98fd9e69518f482da1a8e15c9a11f5948330e7204ac69c0cafecb09c3a98aa73" exitCode=0 Jan 22 09:08:31 crc kubenswrapper[4811]: I0122 09:08:31.995085 4811 generic.go:334] "Generic (PLEG): container finished" podID="7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d" containerID="984de5c265d50e22d4510c5c7070710a10a3f0811cf88961e837b67744d08630" exitCode=0 Jan 22 09:08:31 crc kubenswrapper[4811]: I0122 09:08:31.997878 4811 generic.go:334] "Generic (PLEG): container finished" podID="83b7219e-ce69-408d-92ad-7b58cc6d0b7a" containerID="8908c5e30b5991f658952c022b1b47ce84f924f8c76d071c5b3e52d46c51fdfc" exitCode=0 Jan 22 09:08:32 crc kubenswrapper[4811]: I0122 09:08:32.000726 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 22 09:08:32 crc kubenswrapper[4811]: I0122 09:08:32.001444 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484540-kbb7d" event={"ID":"80663d92-2281-4a3d-9232-f1fc19873d88","Type":"ContainerDied","Data":"98fd9e69518f482da1a8e15c9a11f5948330e7204ac69c0cafecb09c3a98aa73"} Jan 22 09:08:32 crc kubenswrapper[4811]: I0122 09:08:32.001483 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-68p7c" event={"ID":"7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d","Type":"ContainerDied","Data":"984de5c265d50e22d4510c5c7070710a10a3f0811cf88961e837b67744d08630"} Jan 22 09:08:32 crc kubenswrapper[4811]: I0122 09:08:32.001501 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-68p7c" event={"ID":"7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d","Type":"ContainerStarted","Data":"1fc500ba620a0793258596b0f4f5310f9cc7f3a12a1de7093ef009fd1d1441c1"} Jan 22 09:08:32 crc kubenswrapper[4811]: I0122 09:08:32.001518 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:32 crc kubenswrapper[4811]: I0122 09:08:32.001534 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2fpc" event={"ID":"83b7219e-ce69-408d-92ad-7b58cc6d0b7a","Type":"ContainerDied","Data":"8908c5e30b5991f658952c022b1b47ce84f924f8c76d071c5b3e52d46c51fdfc"} Jan 22 09:08:32 crc kubenswrapper[4811]: I0122 09:08:32.001550 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2fpc" event={"ID":"83b7219e-ce69-408d-92ad-7b58cc6d0b7a","Type":"ContainerStarted","Data":"36bf1c8002b997bf04229ee30293db5167d9a96c4e11059124d6827482afc5b1"} Jan 22 09:08:32 crc kubenswrapper[4811]: I0122 09:08:32.001559 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" event={"ID":"73533561-14fb-4481-872e-1b47096f9d30","Type":"ContainerStarted","Data":"1ddd3eb570b4a5eb2e92fcdddf21609cdab4e03742dcecfaef66d5702ce2f3db"} Jan 22 09:08:32 crc kubenswrapper[4811]: I0122 09:08:32.001571 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" 
event={"ID":"73533561-14fb-4481-872e-1b47096f9d30","Type":"ContainerStarted","Data":"a74b21ff1721c8e4857c6574b62956e4c1b0fcd0dc4d85bfeb59292760b815f4"} Jan 22 09:08:32 crc kubenswrapper[4811]: I0122 09:08:32.097287 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" podStartSLOduration=134.097268892 podStartE2EDuration="2m14.097268892s" podCreationTimestamp="2026-01-22 09:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:08:32.096354778 +0000 UTC m=+156.418541902" watchObservedRunningTime="2026-01-22 09:08:32.097268892 +0000 UTC m=+156.419456005" Jan 22 09:08:32 crc kubenswrapper[4811]: I0122 09:08:32.257877 4811 patch_prober.go:28] interesting pod/router-default-5444994796-l8phl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 09:08:32 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Jan 22 09:08:32 crc kubenswrapper[4811]: [+]process-running ok Jan 22 09:08:32 crc kubenswrapper[4811]: healthz check failed Jan 22 09:08:32 crc kubenswrapper[4811]: I0122 09:08:32.257990 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l8phl" podUID="2716a2dc-25e2-4a62-8264-41d299b3cd55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 09:08:32 crc kubenswrapper[4811]: I0122 09:08:32.444227 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vjcnk"] Jan 22 09:08:32 crc kubenswrapper[4811]: I0122 09:08:32.445872 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vjcnk" Jan 22 09:08:32 crc kubenswrapper[4811]: I0122 09:08:32.448105 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 22 09:08:32 crc kubenswrapper[4811]: I0122 09:08:32.458642 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vjcnk"] Jan 22 09:08:32 crc kubenswrapper[4811]: I0122 09:08:32.485257 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g4pc\" (UniqueName: \"kubernetes.io/projected/6dcdfa22-db17-42c3-a366-f96b6dd7b27d-kube-api-access-9g4pc\") pod \"redhat-marketplace-vjcnk\" (UID: \"6dcdfa22-db17-42c3-a366-f96b6dd7b27d\") " pod="openshift-marketplace/redhat-marketplace-vjcnk" Jan 22 09:08:32 crc kubenswrapper[4811]: I0122 09:08:32.485339 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dcdfa22-db17-42c3-a366-f96b6dd7b27d-utilities\") pod \"redhat-marketplace-vjcnk\" (UID: \"6dcdfa22-db17-42c3-a366-f96b6dd7b27d\") " pod="openshift-marketplace/redhat-marketplace-vjcnk" Jan 22 09:08:32 crc kubenswrapper[4811]: I0122 09:08:32.485507 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dcdfa22-db17-42c3-a366-f96b6dd7b27d-catalog-content\") pod \"redhat-marketplace-vjcnk\" (UID: \"6dcdfa22-db17-42c3-a366-f96b6dd7b27d\") " pod="openshift-marketplace/redhat-marketplace-vjcnk" Jan 22 09:08:32 crc kubenswrapper[4811]: I0122 09:08:32.586806 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dcdfa22-db17-42c3-a366-f96b6dd7b27d-utilities\") pod \"redhat-marketplace-vjcnk\" (UID: \"6dcdfa22-db17-42c3-a366-f96b6dd7b27d\") " pod="openshift-marketplace/redhat-marketplace-vjcnk" Jan 22 09:08:32 crc kubenswrapper[4811]: I0122 09:08:32.586902 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dcdfa22-db17-42c3-a366-f96b6dd7b27d-catalog-content\") pod \"redhat-marketplace-vjcnk\" (UID: \"6dcdfa22-db17-42c3-a366-f96b6dd7b27d\") " pod="openshift-marketplace/redhat-marketplace-vjcnk" Jan 22 09:08:32 crc kubenswrapper[4811]: I0122 09:08:32.586994 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9g4pc\" (UniqueName: \"kubernetes.io/projected/6dcdfa22-db17-42c3-a366-f96b6dd7b27d-kube-api-access-9g4pc\") pod \"redhat-marketplace-vjcnk\" (UID: \"6dcdfa22-db17-42c3-a366-f96b6dd7b27d\") " pod="openshift-marketplace/redhat-marketplace-vjcnk" Jan 22 09:08:32 crc kubenswrapper[4811]: I0122 09:08:32.587424 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dcdfa22-db17-42c3-a366-f96b6dd7b27d-utilities\") pod \"redhat-marketplace-vjcnk\" (UID: \"6dcdfa22-db17-42c3-a366-f96b6dd7b27d\") " pod="openshift-marketplace/redhat-marketplace-vjcnk" Jan 22 09:08:32 crc kubenswrapper[4811]: I0122 09:08:32.587546 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dcdfa22-db17-42c3-a366-f96b6dd7b27d-catalog-content\") pod \"redhat-marketplace-vjcnk\" (UID: 
\"6dcdfa22-db17-42c3-a366-f96b6dd7b27d\") " pod="openshift-marketplace/redhat-marketplace-vjcnk" Jan 22 09:08:32 crc kubenswrapper[4811]: I0122 09:08:32.606313 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g4pc\" (UniqueName: \"kubernetes.io/projected/6dcdfa22-db17-42c3-a366-f96b6dd7b27d-kube-api-access-9g4pc\") pod \"redhat-marketplace-vjcnk\" (UID: \"6dcdfa22-db17-42c3-a366-f96b6dd7b27d\") " pod="openshift-marketplace/redhat-marketplace-vjcnk" Jan 22 09:08:32 crc kubenswrapper[4811]: I0122 09:08:32.709538 4811 patch_prober.go:28] interesting pod/downloads-7954f5f757-hs4pm container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 22 09:08:32 crc kubenswrapper[4811]: I0122 09:08:32.709602 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-hs4pm" podUID="117d7039-2cd9-4ee9-9272-923cd05c3565" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 22 09:08:32 crc kubenswrapper[4811]: I0122 09:08:32.709616 4811 patch_prober.go:28] interesting pod/downloads-7954f5f757-hs4pm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 22 09:08:32 crc kubenswrapper[4811]: I0122 09:08:32.709712 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hs4pm" podUID="117d7039-2cd9-4ee9-9272-923cd05c3565" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 22 09:08:32 crc kubenswrapper[4811]: I0122 09:08:32.763448 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vjcnk" Jan 22 09:08:32 crc kubenswrapper[4811]: I0122 09:08:32.843870 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8bdsg"] Jan 22 09:08:32 crc kubenswrapper[4811]: I0122 09:08:32.845272 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8bdsg" Jan 22 09:08:32 crc kubenswrapper[4811]: I0122 09:08:32.855097 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8bdsg"] Jan 22 09:08:32 crc kubenswrapper[4811]: I0122 09:08:32.891027 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwcrp\" (UniqueName: \"kubernetes.io/projected/e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5-kube-api-access-jwcrp\") pod \"redhat-marketplace-8bdsg\" (UID: \"e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5\") " pod="openshift-marketplace/redhat-marketplace-8bdsg" Jan 22 09:08:32 crc kubenswrapper[4811]: I0122 09:08:32.891076 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5-utilities\") pod \"redhat-marketplace-8bdsg\" (UID: \"e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5\") " pod="openshift-marketplace/redhat-marketplace-8bdsg" Jan 22 09:08:32 crc kubenswrapper[4811]: I0122 09:08:32.891150 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5-catalog-content\") pod \"redhat-marketplace-8bdsg\" (UID: \"e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5\") " pod="openshift-marketplace/redhat-marketplace-8bdsg" Jan 22 09:08:32 crc kubenswrapper[4811]: I0122 09:08:32.991949 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vjcnk"] Jan 22 09:08:32 crc kubenswrapper[4811]: I0122 09:08:32.996849 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5-catalog-content\") pod \"redhat-marketplace-8bdsg\" (UID: \"e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5\") " pod="openshift-marketplace/redhat-marketplace-8bdsg" Jan 22 09:08:32 crc kubenswrapper[4811]: I0122 09:08:32.996988 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwcrp\" (UniqueName: \"kubernetes.io/projected/e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5-kube-api-access-jwcrp\") pod \"redhat-marketplace-8bdsg\" (UID: \"e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5\") " pod="openshift-marketplace/redhat-marketplace-8bdsg" Jan 22 09:08:32 crc kubenswrapper[4811]: I0122 09:08:32.997127 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5-utilities\") pod \"redhat-marketplace-8bdsg\" (UID: \"e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5\") " pod="openshift-marketplace/redhat-marketplace-8bdsg" Jan 22 09:08:32 crc kubenswrapper[4811]: I0122 09:08:32.997256 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5-catalog-content\") pod \"redhat-marketplace-8bdsg\" (UID: \"e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5\") " pod="openshift-marketplace/redhat-marketplace-8bdsg" Jan 22 09:08:32 crc kubenswrapper[4811]: I0122 09:08:32.999678 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5-utilities\") pod \"redhat-marketplace-8bdsg\" (UID: 
\"e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5\") " pod="openshift-marketplace/redhat-marketplace-8bdsg" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.001485 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.011313 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.014658 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.014953 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.026745 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwcrp\" (UniqueName: \"kubernetes.io/projected/e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5-kube-api-access-jwcrp\") pod \"redhat-marketplace-8bdsg\" (UID: \"e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5\") " pod="openshift-marketplace/redhat-marketplace-8bdsg" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.030973 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.076781 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-jjjjp" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.076817 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-jjjjp" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.082521 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-jjjjp" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.103169 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c7552110-67b1-4150-8f6a-dfe6fb065d8f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c7552110-67b1-4150-8f6a-dfe6fb065d8f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.109897 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c7552110-67b1-4150-8f6a-dfe6fb065d8f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c7552110-67b1-4150-8f6a-dfe6fb065d8f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.189536 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8bdsg" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.211181 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c7552110-67b1-4150-8f6a-dfe6fb065d8f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c7552110-67b1-4150-8f6a-dfe6fb065d8f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.211614 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c7552110-67b1-4150-8f6a-dfe6fb065d8f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c7552110-67b1-4150-8f6a-dfe6fb065d8f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.211789 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c7552110-67b1-4150-8f6a-dfe6fb065d8f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c7552110-67b1-4150-8f6a-dfe6fb065d8f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.224792 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c7552110-67b1-4150-8f6a-dfe6fb065d8f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c7552110-67b1-4150-8f6a-dfe6fb065d8f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.240770 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mv8mw"] Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.242161 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mv8mw" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.244315 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.252804 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mv8mw"] Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.260159 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-l8phl" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.263963 4811 patch_prober.go:28] interesting pod/router-default-5444994796-l8phl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 09:08:33 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Jan 22 09:08:33 crc kubenswrapper[4811]: [+]process-running ok Jan 22 09:08:33 crc kubenswrapper[4811]: healthz check failed Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.263995 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l8phl" podUID="2716a2dc-25e2-4a62-8264-41d299b3cd55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.312657 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/516e9ebd-6782-459d-99df-6902a5098c4e-utilities\") pod \"redhat-operators-mv8mw\" (UID: \"516e9ebd-6782-459d-99df-6902a5098c4e\") " pod="openshift-marketplace/redhat-operators-mv8mw" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.312696 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cznrp\" (UniqueName: \"kubernetes.io/projected/516e9ebd-6782-459d-99df-6902a5098c4e-kube-api-access-cznrp\") pod \"redhat-operators-mv8mw\" (UID: \"516e9ebd-6782-459d-99df-6902a5098c4e\") " pod="openshift-marketplace/redhat-operators-mv8mw" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.312873 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/516e9ebd-6782-459d-99df-6902a5098c4e-catalog-content\") pod \"redhat-operators-mv8mw\" (UID: \"516e9ebd-6782-459d-99df-6902a5098c4e\") " pod="openshift-marketplace/redhat-operators-mv8mw" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.358058 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.413881 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/516e9ebd-6782-459d-99df-6902a5098c4e-utilities\") pod \"redhat-operators-mv8mw\" (UID: \"516e9ebd-6782-459d-99df-6902a5098c4e\") " pod="openshift-marketplace/redhat-operators-mv8mw" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.413928 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cznrp\" (UniqueName: \"kubernetes.io/projected/516e9ebd-6782-459d-99df-6902a5098c4e-kube-api-access-cznrp\") pod \"redhat-operators-mv8mw\" (UID: \"516e9ebd-6782-459d-99df-6902a5098c4e\") " pod="openshift-marketplace/redhat-operators-mv8mw" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.413960 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/516e9ebd-6782-459d-99df-6902a5098c4e-catalog-content\") pod \"redhat-operators-mv8mw\" (UID: \"516e9ebd-6782-459d-99df-6902a5098c4e\") " pod="openshift-marketplace/redhat-operators-mv8mw" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.414540 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/516e9ebd-6782-459d-99df-6902a5098c4e-utilities\") pod \"redhat-operators-mv8mw\" (UID: \"516e9ebd-6782-459d-99df-6902a5098c4e\") " pod="openshift-marketplace/redhat-operators-mv8mw" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.414671 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/516e9ebd-6782-459d-99df-6902a5098c4e-catalog-content\") pod \"redhat-operators-mv8mw\" (UID: \"516e9ebd-6782-459d-99df-6902a5098c4e\") " pod="openshift-marketplace/redhat-operators-mv8mw" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.438330 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cznrp\" (UniqueName: \"kubernetes.io/projected/516e9ebd-6782-459d-99df-6902a5098c4e-kube-api-access-cznrp\") pod \"redhat-operators-mv8mw\" (UID: \"516e9ebd-6782-459d-99df-6902a5098c4e\") " pod="openshift-marketplace/redhat-operators-mv8mw" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.456734 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4c4pw"] Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.458130 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4c4pw" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.476752 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4c4pw"] Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.484749 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484540-kbb7d" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.516055 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/80663d92-2281-4a3d-9232-f1fc19873d88-config-volume\") pod \"80663d92-2281-4a3d-9232-f1fc19873d88\" (UID: \"80663d92-2281-4a3d-9232-f1fc19873d88\") " Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.516225 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/80663d92-2281-4a3d-9232-f1fc19873d88-secret-volume\") pod \"80663d92-2281-4a3d-9232-f1fc19873d88\" (UID: \"80663d92-2281-4a3d-9232-f1fc19873d88\") " Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.516326 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cngll\" (UniqueName: \"kubernetes.io/projected/80663d92-2281-4a3d-9232-f1fc19873d88-kube-api-access-cngll\") pod \"80663d92-2281-4a3d-9232-f1fc19873d88\" (UID: \"80663d92-2281-4a3d-9232-f1fc19873d88\") " Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.516977 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770-utilities\") pod \"redhat-operators-4c4pw\" (UID: \"4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770\") " pod="openshift-marketplace/redhat-operators-4c4pw" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.517546 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770-catalog-content\") pod \"redhat-operators-4c4pw\" (UID: \"4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770\") " pod="openshift-marketplace/redhat-operators-4c4pw" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.517816 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss7cx\" (UniqueName: \"kubernetes.io/projected/4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770-kube-api-access-ss7cx\") pod \"redhat-operators-4c4pw\" (UID: \"4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770\") " pod="openshift-marketplace/redhat-operators-4c4pw" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.517386 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80663d92-2281-4a3d-9232-f1fc19873d88-config-volume" (OuterVolumeSpecName: "config-volume") pod "80663d92-2281-4a3d-9232-f1fc19873d88" (UID: "80663d92-2281-4a3d-9232-f1fc19873d88"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.527065 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80663d92-2281-4a3d-9232-f1fc19873d88-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "80663d92-2281-4a3d-9232-f1fc19873d88" (UID: "80663d92-2281-4a3d-9232-f1fc19873d88"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.533751 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80663d92-2281-4a3d-9232-f1fc19873d88-kube-api-access-cngll" (OuterVolumeSpecName: "kube-api-access-cngll") pod "80663d92-2281-4a3d-9232-f1fc19873d88" (UID: "80663d92-2281-4a3d-9232-f1fc19873d88"). InnerVolumeSpecName "kube-api-access-cngll". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.588447 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mv8mw" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.621681 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770-utilities\") pod \"redhat-operators-4c4pw\" (UID: \"4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770\") " pod="openshift-marketplace/redhat-operators-4c4pw" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.622287 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770-catalog-content\") pod \"redhat-operators-4c4pw\" (UID: \"4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770\") " pod="openshift-marketplace/redhat-operators-4c4pw" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.622375 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss7cx\" (UniqueName: \"kubernetes.io/projected/4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770-kube-api-access-ss7cx\") pod \"redhat-operators-4c4pw\" (UID: \"4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770\") " pod="openshift-marketplace/redhat-operators-4c4pw" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.622558 4811 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/80663d92-2281-4a3d-9232-f1fc19873d88-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.622615 4811 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/80663d92-2281-4a3d-9232-f1fc19873d88-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.622700 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cngll\" (UniqueName: \"kubernetes.io/projected/80663d92-2281-4a3d-9232-f1fc19873d88-kube-api-access-cngll\") on node \"crc\" DevicePath \"\"" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.623692 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770-utilities\") pod \"redhat-operators-4c4pw\" (UID: \"4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770\") " pod="openshift-marketplace/redhat-operators-4c4pw" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.624040 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770-catalog-content\") pod \"redhat-operators-4c4pw\" (UID: \"4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770\") " pod="openshift-marketplace/redhat-operators-4c4pw" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.650730 4811 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-jhptg" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.651910 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-jhptg" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.667983 4811 patch_prober.go:28] interesting pod/console-f9d7485db-jhptg container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.19:8443/health\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.669835 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-jhptg" podUID="f58eb1d8-bb02-4af7-857c-138518c5bbf2" containerName="console" probeResult="failure" output="Get \"https://10.217.0.19:8443/health\": dial tcp 10.217.0.19:8443: connect: connection refused" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.670527 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss7cx\" (UniqueName: \"kubernetes.io/projected/4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770-kube-api-access-ss7cx\") pod \"redhat-operators-4c4pw\" (UID: \"4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770\") " pod="openshift-marketplace/redhat-operators-4c4pw" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.799245 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4c4pw" Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.833800 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 22 09:08:33 crc kubenswrapper[4811]: W0122 09:08:33.871529 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podc7552110_67b1_4150_8f6a_dfe6fb065d8f.slice/crio-7c25052aa49998a19a542e8622ca613819f1489682c6f28cb2271a1d28f00b5f WatchSource:0}: Error finding container 7c25052aa49998a19a542e8622ca613819f1489682c6f28cb2271a1d28f00b5f: Status 404 returned error can't find the container with id 7c25052aa49998a19a542e8622ca613819f1489682c6f28cb2271a1d28f00b5f Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.932406 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mv8mw"] Jan 22 09:08:33 crc kubenswrapper[4811]: I0122 09:08:33.945566 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8bdsg"] Jan 22 09:08:33 crc kubenswrapper[4811]: W0122 09:08:33.954321 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod516e9ebd_6782_459d_99df_6902a5098c4e.slice/crio-0d9888b6b66485dc6c149295b75ae071d76f70d06f81f6884ac16dd967ed4df7 WatchSource:0}: Error finding container 0d9888b6b66485dc6c149295b75ae071d76f70d06f81f6884ac16dd967ed4df7: Status 404 returned error can't find the container with id 0d9888b6b66485dc6c149295b75ae071d76f70d06f81f6884ac16dd967ed4df7 Jan 22 09:08:34 crc kubenswrapper[4811]: I0122 09:08:34.120432 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4c4pw"] Jan 22 09:08:34 crc kubenswrapper[4811]: I0122 09:08:34.130171 4811 generic.go:334] "Generic (PLEG): container finished" podID="6dcdfa22-db17-42c3-a366-f96b6dd7b27d" containerID="cbbff8bf691befc1df4db594182f6bc094b32ee5a6245914086552e773a3c89b" exitCode=0 Jan 22 09:08:34 crc 
kubenswrapper[4811]: I0122 09:08:34.130298 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vjcnk" event={"ID":"6dcdfa22-db17-42c3-a366-f96b6dd7b27d","Type":"ContainerDied","Data":"cbbff8bf691befc1df4db594182f6bc094b32ee5a6245914086552e773a3c89b"} Jan 22 09:08:34 crc kubenswrapper[4811]: I0122 09:08:34.130329 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vjcnk" event={"ID":"6dcdfa22-db17-42c3-a366-f96b6dd7b27d","Type":"ContainerStarted","Data":"86d46a6016d455fcc92e2286c93bec599c8d86693b389b92d80381dd69ee9c9d"} Jan 22 09:08:34 crc kubenswrapper[4811]: I0122 09:08:34.147994 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8bdsg" event={"ID":"e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5","Type":"ContainerStarted","Data":"a031f05894b10d98bc1b50cc06f66e1b6d6e4f5db0f90643bcb1e91ffadd97a6"} Jan 22 09:08:34 crc kubenswrapper[4811]: I0122 09:08:34.185539 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484540-kbb7d" event={"ID":"80663d92-2281-4a3d-9232-f1fc19873d88","Type":"ContainerDied","Data":"eb5e4023a072cbe0f3e1a802e0497cf3c121c9291366cc2088cb1628499372b4"} Jan 22 09:08:34 crc kubenswrapper[4811]: I0122 09:08:34.185580 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb5e4023a072cbe0f3e1a802e0497cf3c121c9291366cc2088cb1628499372b4" Jan 22 09:08:34 crc kubenswrapper[4811]: I0122 09:08:34.185672 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484540-kbb7d" Jan 22 09:08:34 crc kubenswrapper[4811]: W0122 09:08:34.191886 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d9bfd89_ad04_4234_aaa3_8cbaa1f0c770.slice/crio-951fa41bfaec855a580f53011848c9513b222ea9eb87ed8ba1bf49a281a07b2b WatchSource:0}: Error finding container 951fa41bfaec855a580f53011848c9513b222ea9eb87ed8ba1bf49a281a07b2b: Status 404 returned error can't find the container with id 951fa41bfaec855a580f53011848c9513b222ea9eb87ed8ba1bf49a281a07b2b Jan 22 09:08:34 crc kubenswrapper[4811]: I0122 09:08:34.211894 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mv8mw" event={"ID":"516e9ebd-6782-459d-99df-6902a5098c4e","Type":"ContainerStarted","Data":"0d9888b6b66485dc6c149295b75ae071d76f70d06f81f6884ac16dd967ed4df7"} Jan 22 09:08:34 crc kubenswrapper[4811]: I0122 09:08:34.219168 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c7552110-67b1-4150-8f6a-dfe6fb065d8f","Type":"ContainerStarted","Data":"7c25052aa49998a19a542e8622ca613819f1489682c6f28cb2271a1d28f00b5f"} Jan 22 09:08:34 crc kubenswrapper[4811]: I0122 09:08:34.225460 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-jjjjp" Jan 22 09:08:34 crc kubenswrapper[4811]: I0122 09:08:34.260280 4811 patch_prober.go:28] interesting pod/router-default-5444994796-l8phl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 09:08:34 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Jan 22 09:08:34 crc kubenswrapper[4811]: 
[+]process-running ok Jan 22 09:08:34 crc kubenswrapper[4811]: healthz check failed Jan 22 09:08:34 crc kubenswrapper[4811]: I0122 09:08:34.260554 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l8phl" podUID="2716a2dc-25e2-4a62-8264-41d299b3cd55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 09:08:35 crc kubenswrapper[4811]: I0122 09:08:35.228780 4811 generic.go:334] "Generic (PLEG): container finished" podID="516e9ebd-6782-459d-99df-6902a5098c4e" containerID="ca710074422d9e8ddf779c356eae191788dc59693cb8e255d8ff055e0613336b" exitCode=0 Jan 22 09:08:35 crc kubenswrapper[4811]: I0122 09:08:35.228881 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mv8mw" event={"ID":"516e9ebd-6782-459d-99df-6902a5098c4e","Type":"ContainerDied","Data":"ca710074422d9e8ddf779c356eae191788dc59693cb8e255d8ff055e0613336b"} Jan 22 09:08:35 crc kubenswrapper[4811]: I0122 09:08:35.251661 4811 generic.go:334] "Generic (PLEG): container finished" podID="c7552110-67b1-4150-8f6a-dfe6fb065d8f" containerID="c5f2ab6f09be6419df3c290b51f8171f08bfee8b6dbd535d9330c9d99bbdbca1" exitCode=0 Jan 22 09:08:35 crc kubenswrapper[4811]: I0122 09:08:35.251777 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c7552110-67b1-4150-8f6a-dfe6fb065d8f","Type":"ContainerDied","Data":"c5f2ab6f09be6419df3c290b51f8171f08bfee8b6dbd535d9330c9d99bbdbca1"} Jan 22 09:08:35 crc kubenswrapper[4811]: I0122 09:08:35.255938 4811 patch_prober.go:28] interesting pod/router-default-5444994796-l8phl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 09:08:35 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Jan 22 09:08:35 crc kubenswrapper[4811]: [+]process-running ok Jan 22 09:08:35 crc kubenswrapper[4811]: healthz check failed Jan 22 09:08:35 crc kubenswrapper[4811]: I0122 09:08:35.255990 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l8phl" podUID="2716a2dc-25e2-4a62-8264-41d299b3cd55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 09:08:35 crc kubenswrapper[4811]: I0122 09:08:35.258670 4811 generic.go:334] "Generic (PLEG): container finished" podID="4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770" containerID="594acf32a0b5becba904fdf72e2095791885a9c673b9a9b04ee69311009e42d0" exitCode=0 Jan 22 09:08:35 crc kubenswrapper[4811]: I0122 09:08:35.258751 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4c4pw" event={"ID":"4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770","Type":"ContainerDied","Data":"594acf32a0b5becba904fdf72e2095791885a9c673b9a9b04ee69311009e42d0"} Jan 22 09:08:35 crc kubenswrapper[4811]: I0122 09:08:35.258781 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4c4pw" event={"ID":"4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770","Type":"ContainerStarted","Data":"951fa41bfaec855a580f53011848c9513b222ea9eb87ed8ba1bf49a281a07b2b"} Jan 22 09:08:35 crc kubenswrapper[4811]: I0122 09:08:35.268347 4811 generic.go:334] "Generic (PLEG): container finished" podID="e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5" containerID="d72c220c62516c32a6f6ace59f5c9b0283137e273ccf08a90033bf783ed46a2f" exitCode=0 Jan 22 09:08:35 
crc kubenswrapper[4811]: I0122 09:08:35.268510 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8bdsg" event={"ID":"e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5","Type":"ContainerDied","Data":"d72c220c62516c32a6f6ace59f5c9b0283137e273ccf08a90033bf783ed46a2f"} Jan 22 09:08:35 crc kubenswrapper[4811]: I0122 09:08:35.352888 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 22 09:08:35 crc kubenswrapper[4811]: E0122 09:08:35.353168 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80663d92-2281-4a3d-9232-f1fc19873d88" containerName="collect-profiles" Jan 22 09:08:35 crc kubenswrapper[4811]: I0122 09:08:35.353211 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="80663d92-2281-4a3d-9232-f1fc19873d88" containerName="collect-profiles" Jan 22 09:08:35 crc kubenswrapper[4811]: I0122 09:08:35.353366 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="80663d92-2281-4a3d-9232-f1fc19873d88" containerName="collect-profiles" Jan 22 09:08:35 crc kubenswrapper[4811]: I0122 09:08:35.353894 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 09:08:35 crc kubenswrapper[4811]: I0122 09:08:35.355737 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 22 09:08:35 crc kubenswrapper[4811]: I0122 09:08:35.356885 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 22 09:08:35 crc kubenswrapper[4811]: I0122 09:08:35.362503 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 22 09:08:35 crc kubenswrapper[4811]: I0122 09:08:35.478857 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/280aa649-3efa-4218-952c-18a11bfd9a42-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"280aa649-3efa-4218-952c-18a11bfd9a42\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 09:08:35 crc kubenswrapper[4811]: I0122 09:08:35.479292 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/280aa649-3efa-4218-952c-18a11bfd9a42-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"280aa649-3efa-4218-952c-18a11bfd9a42\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 09:08:35 crc kubenswrapper[4811]: I0122 09:08:35.501448 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:08:35 crc kubenswrapper[4811]: I0122 09:08:35.501515 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:08:35 crc kubenswrapper[4811]: I0122 09:08:35.580766 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/280aa649-3efa-4218-952c-18a11bfd9a42-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"280aa649-3efa-4218-952c-18a11bfd9a42\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 09:08:35 crc kubenswrapper[4811]: I0122 09:08:35.580838 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/280aa649-3efa-4218-952c-18a11bfd9a42-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"280aa649-3efa-4218-952c-18a11bfd9a42\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 09:08:35 crc kubenswrapper[4811]: I0122 09:08:35.580863 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/280aa649-3efa-4218-952c-18a11bfd9a42-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"280aa649-3efa-4218-952c-18a11bfd9a42\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 09:08:35 crc kubenswrapper[4811]: I0122 09:08:35.610780 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/280aa649-3efa-4218-952c-18a11bfd9a42-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"280aa649-3efa-4218-952c-18a11bfd9a42\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 09:08:35 crc kubenswrapper[4811]: I0122 09:08:35.702175 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 09:08:36 crc kubenswrapper[4811]: I0122 09:08:36.225999 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 22 09:08:36 crc kubenswrapper[4811]: I0122 09:08:36.256343 4811 patch_prober.go:28] interesting pod/router-default-5444994796-l8phl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 09:08:36 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Jan 22 09:08:36 crc kubenswrapper[4811]: [+]process-running ok Jan 22 09:08:36 crc kubenswrapper[4811]: healthz check failed Jan 22 09:08:36 crc kubenswrapper[4811]: I0122 09:08:36.256417 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l8phl" podUID="2716a2dc-25e2-4a62-8264-41d299b3cd55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 09:08:36 crc kubenswrapper[4811]: W0122 09:08:36.261778 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod280aa649_3efa_4218_952c_18a11bfd9a42.slice/crio-f4ddd88382900b29e40d5b3451ad0d937e3f7464fb271357aca0bac37bd83b7e WatchSource:0}: Error finding container f4ddd88382900b29e40d5b3451ad0d937e3f7464fb271357aca0bac37bd83b7e: Status 404 returned error can't find the container with id f4ddd88382900b29e40d5b3451ad0d937e3f7464fb271357aca0bac37bd83b7e Jan 22 09:08:36 crc kubenswrapper[4811]: I0122 09:08:36.284824 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"280aa649-3efa-4218-952c-18a11bfd9a42","Type":"ContainerStarted","Data":"f4ddd88382900b29e40d5b3451ad0d937e3f7464fb271357aca0bac37bd83b7e"} Jan 22 09:08:36 crc kubenswrapper[4811]: I0122 09:08:36.447381 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 09:08:36 crc kubenswrapper[4811]: I0122 09:08:36.497552 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c7552110-67b1-4150-8f6a-dfe6fb065d8f-kube-api-access\") pod \"c7552110-67b1-4150-8f6a-dfe6fb065d8f\" (UID: \"c7552110-67b1-4150-8f6a-dfe6fb065d8f\") " Jan 22 09:08:36 crc kubenswrapper[4811]: I0122 09:08:36.497635 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c7552110-67b1-4150-8f6a-dfe6fb065d8f-kubelet-dir\") pod \"c7552110-67b1-4150-8f6a-dfe6fb065d8f\" (UID: \"c7552110-67b1-4150-8f6a-dfe6fb065d8f\") " Jan 22 09:08:36 crc kubenswrapper[4811]: I0122 09:08:36.497724 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7552110-67b1-4150-8f6a-dfe6fb065d8f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c7552110-67b1-4150-8f6a-dfe6fb065d8f" (UID: "c7552110-67b1-4150-8f6a-dfe6fb065d8f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:08:36 crc kubenswrapper[4811]: I0122 09:08:36.498498 4811 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c7552110-67b1-4150-8f6a-dfe6fb065d8f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 09:08:36 crc kubenswrapper[4811]: I0122 09:08:36.501880 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7552110-67b1-4150-8f6a-dfe6fb065d8f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c7552110-67b1-4150-8f6a-dfe6fb065d8f" (UID: "c7552110-67b1-4150-8f6a-dfe6fb065d8f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:08:36 crc kubenswrapper[4811]: I0122 09:08:36.599501 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c7552110-67b1-4150-8f6a-dfe6fb065d8f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 09:08:37 crc kubenswrapper[4811]: I0122 09:08:37.256669 4811 patch_prober.go:28] interesting pod/router-default-5444994796-l8phl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 09:08:37 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Jan 22 09:08:37 crc kubenswrapper[4811]: [+]process-running ok Jan 22 09:08:37 crc kubenswrapper[4811]: healthz check failed Jan 22 09:08:37 crc kubenswrapper[4811]: I0122 09:08:37.256732 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l8phl" podUID="2716a2dc-25e2-4a62-8264-41d299b3cd55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 09:08:37 crc kubenswrapper[4811]: I0122 09:08:37.297774 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"280aa649-3efa-4218-952c-18a11bfd9a42","Type":"ContainerStarted","Data":"25e1400a3750ecb2dedff55b2e2659ec194a1724eb44548739f1350180125e26"} Jan 22 09:08:37 crc kubenswrapper[4811]: I0122 09:08:37.312459 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c7552110-67b1-4150-8f6a-dfe6fb065d8f","Type":"ContainerDied","Data":"7c25052aa49998a19a542e8622ca613819f1489682c6f28cb2271a1d28f00b5f"} Jan 22 09:08:37 crc kubenswrapper[4811]: I0122 09:08:37.312483 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.312444755 podStartE2EDuration="2.312444755s" podCreationTimestamp="2026-01-22 09:08:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:08:37.309917049 +0000 UTC m=+161.632104171" watchObservedRunningTime="2026-01-22 09:08:37.312444755 +0000 UTC m=+161.634631878" Jan 22 09:08:37 crc kubenswrapper[4811]: I0122 09:08:37.312693 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c25052aa49998a19a542e8622ca613819f1489682c6f28cb2271a1d28f00b5f" Jan 22 09:08:37 crc kubenswrapper[4811]: I0122 09:08:37.312822 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 09:08:38 crc kubenswrapper[4811]: I0122 09:08:38.045275 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-v2nnf" Jan 22 09:08:38 crc kubenswrapper[4811]: I0122 09:08:38.255833 4811 patch_prober.go:28] interesting pod/router-default-5444994796-l8phl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 09:08:38 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Jan 22 09:08:38 crc kubenswrapper[4811]: [+]process-running ok Jan 22 09:08:38 crc kubenswrapper[4811]: healthz check failed Jan 22 09:08:38 crc kubenswrapper[4811]: I0122 09:08:38.255894 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l8phl" podUID="2716a2dc-25e2-4a62-8264-41d299b3cd55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 09:08:38 crc kubenswrapper[4811]: I0122 09:08:38.331713 4811 generic.go:334] "Generic (PLEG): container finished" podID="280aa649-3efa-4218-952c-18a11bfd9a42" containerID="25e1400a3750ecb2dedff55b2e2659ec194a1724eb44548739f1350180125e26" exitCode=0 Jan 22 09:08:38 crc kubenswrapper[4811]: I0122 09:08:38.331758 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"280aa649-3efa-4218-952c-18a11bfd9a42","Type":"ContainerDied","Data":"25e1400a3750ecb2dedff55b2e2659ec194a1724eb44548739f1350180125e26"} Jan 22 09:08:39 crc kubenswrapper[4811]: I0122 09:08:39.257385 4811 patch_prober.go:28] interesting pod/router-default-5444994796-l8phl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 09:08:39 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Jan 22 09:08:39 crc kubenswrapper[4811]: [+]process-running ok Jan 22 09:08:39 crc kubenswrapper[4811]: healthz check failed Jan 22 09:08:39 crc kubenswrapper[4811]: I0122 09:08:39.257454 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l8phl" podUID="2716a2dc-25e2-4a62-8264-41d299b3cd55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 09:08:39 crc kubenswrapper[4811]: I0122 09:08:39.347428 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de4b38a0-0c7a-4693-9f92-40fefd6bc9b4-metrics-certs\") pod \"network-metrics-daemon-bhj4l\" (UID: \"de4b38a0-0c7a-4693-9f92-40fefd6bc9b4\") " pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:08:39 crc kubenswrapper[4811]: I0122 09:08:39.358750 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de4b38a0-0c7a-4693-9f92-40fefd6bc9b4-metrics-certs\") pod \"network-metrics-daemon-bhj4l\" (UID: \"de4b38a0-0c7a-4693-9f92-40fefd6bc9b4\") " pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:08:39 crc kubenswrapper[4811]: I0122 09:08:39.617000 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhj4l" Jan 22 09:08:40 crc kubenswrapper[4811]: I0122 09:08:40.256320 4811 patch_prober.go:28] interesting pod/router-default-5444994796-l8phl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 09:08:40 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Jan 22 09:08:40 crc kubenswrapper[4811]: [+]process-running ok Jan 22 09:08:40 crc kubenswrapper[4811]: healthz check failed Jan 22 09:08:40 crc kubenswrapper[4811]: I0122 09:08:40.256394 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l8phl" podUID="2716a2dc-25e2-4a62-8264-41d299b3cd55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 09:08:41 crc kubenswrapper[4811]: I0122 09:08:41.256146 4811 patch_prober.go:28] interesting pod/router-default-5444994796-l8phl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 09:08:41 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Jan 22 09:08:41 crc kubenswrapper[4811]: [+]process-running ok Jan 22 09:08:41 crc kubenswrapper[4811]: healthz check failed Jan 22 09:08:41 crc kubenswrapper[4811]: I0122 09:08:41.256441 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l8phl" podUID="2716a2dc-25e2-4a62-8264-41d299b3cd55" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 09:08:42 crc kubenswrapper[4811]: I0122 09:08:42.255322 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-l8phl" Jan 22 09:08:42 crc kubenswrapper[4811]: I0122 09:08:42.258464 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-l8phl" Jan 22 09:08:42 crc kubenswrapper[4811]: I0122 09:08:42.718081 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-hs4pm" Jan 22 09:08:43 crc kubenswrapper[4811]: I0122 09:08:43.650847 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-jhptg" Jan 22 09:08:43 crc kubenswrapper[4811]: I0122 09:08:43.663829 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-jhptg" Jan 22 09:08:44 crc kubenswrapper[4811]: I0122 09:08:44.058465 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 09:08:44 crc kubenswrapper[4811]: I0122 09:08:44.121706 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/280aa649-3efa-4218-952c-18a11bfd9a42-kube-api-access\") pod \"280aa649-3efa-4218-952c-18a11bfd9a42\" (UID: \"280aa649-3efa-4218-952c-18a11bfd9a42\") " Jan 22 09:08:44 crc kubenswrapper[4811]: I0122 09:08:44.121738 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/280aa649-3efa-4218-952c-18a11bfd9a42-kubelet-dir\") pod \"280aa649-3efa-4218-952c-18a11bfd9a42\" (UID: \"280aa649-3efa-4218-952c-18a11bfd9a42\") " Jan 22 09:08:44 crc kubenswrapper[4811]: I0122 09:08:44.122157 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/280aa649-3efa-4218-952c-18a11bfd9a42-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "280aa649-3efa-4218-952c-18a11bfd9a42" (UID: "280aa649-3efa-4218-952c-18a11bfd9a42"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:08:44 crc kubenswrapper[4811]: I0122 09:08:44.127599 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/280aa649-3efa-4218-952c-18a11bfd9a42-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "280aa649-3efa-4218-952c-18a11bfd9a42" (UID: "280aa649-3efa-4218-952c-18a11bfd9a42"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:08:44 crc kubenswrapper[4811]: I0122 09:08:44.223651 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/280aa649-3efa-4218-952c-18a11bfd9a42-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 09:08:44 crc kubenswrapper[4811]: I0122 09:08:44.223683 4811 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/280aa649-3efa-4218-952c-18a11bfd9a42-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 09:08:44 crc kubenswrapper[4811]: I0122 09:08:44.417252 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 09:08:44 crc kubenswrapper[4811]: I0122 09:08:44.418984 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"280aa649-3efa-4218-952c-18a11bfd9a42","Type":"ContainerDied","Data":"f4ddd88382900b29e40d5b3451ad0d937e3f7464fb271357aca0bac37bd83b7e"} Jan 22 09:08:44 crc kubenswrapper[4811]: I0122 09:08:44.419059 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4ddd88382900b29e40d5b3451ad0d937e3f7464fb271357aca0bac37bd83b7e" Jan 22 09:08:44 crc kubenswrapper[4811]: I0122 09:08:44.539344 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-bhj4l"] Jan 22 09:08:45 crc kubenswrapper[4811]: I0122 09:08:45.427248 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bhj4l" event={"ID":"de4b38a0-0c7a-4693-9f92-40fefd6bc9b4","Type":"ContainerStarted","Data":"cef2c695c8aacb342a9fca1dc4156a60c5f8a201c57d21d12ab29e9b4801c133"} Jan 22 09:08:45 crc kubenswrapper[4811]: I0122 09:08:45.427613 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bhj4l" event={"ID":"de4b38a0-0c7a-4693-9f92-40fefd6bc9b4","Type":"ContainerStarted","Data":"a43a9b6cdd09c111a031bb66b0889764a2552845222ebbb26746be1803a8ef3d"} Jan 22 09:08:45 crc kubenswrapper[4811]: I0122 09:08:45.427648 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bhj4l" event={"ID":"de4b38a0-0c7a-4693-9f92-40fefd6bc9b4","Type":"ContainerStarted","Data":"075a517144a87124aba00191f45fb66d350002de2c95b0a701dfc4d855e76d46"} Jan 22 09:08:45 crc kubenswrapper[4811]: I0122 09:08:45.449502 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-bhj4l" podStartSLOduration=148.449479478 podStartE2EDuration="2m28.449479478s" podCreationTimestamp="2026-01-22 09:06:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:08:45.447022676 +0000 UTC m=+169.769209799" watchObservedRunningTime="2026-01-22 09:08:45.449479478 +0000 UTC m=+169.771666602" Jan 22 09:08:50 crc kubenswrapper[4811]: I0122 09:08:50.722661 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:08:58 crc kubenswrapper[4811]: E0122 09:08:58.397645 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 22 09:08:58 crc kubenswrapper[4811]: E0122 09:08:58.398316 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jbdwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-b2fpc_openshift-marketplace(83b7219e-ce69-408d-92ad-7b58cc6d0b7a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 09:08:58 crc kubenswrapper[4811]: E0122 09:08:58.399541 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-b2fpc" podUID="83b7219e-ce69-408d-92ad-7b58cc6d0b7a" Jan 22 09:08:59 crc kubenswrapper[4811]: E0122 09:08:59.810330 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-b2fpc" podUID="83b7219e-ce69-408d-92ad-7b58cc6d0b7a" Jan 22 09:09:00 crc kubenswrapper[4811]: I0122 09:09:00.550970 4811 generic.go:334] "Generic (PLEG): container finished" podID="6dcdfa22-db17-42c3-a366-f96b6dd7b27d" containerID="1226b756d34d1123ced56341c33600e0793fb2d1cb177975eeed804e8e9a86b0" exitCode=0 Jan 22 09:09:00 crc kubenswrapper[4811]: I0122 09:09:00.551409 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vjcnk" event={"ID":"6dcdfa22-db17-42c3-a366-f96b6dd7b27d","Type":"ContainerDied","Data":"1226b756d34d1123ced56341c33600e0793fb2d1cb177975eeed804e8e9a86b0"} Jan 22 09:09:00 crc kubenswrapper[4811]: I0122 09:09:00.560361 4811 generic.go:334] "Generic (PLEG): container finished" podID="bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad" containerID="39a398e4235cbcae5760156ddd15473991d9d2f54537c8d643fcc4fa36c1183e" exitCode=0 Jan 22 09:09:00 crc kubenswrapper[4811]: I0122 09:09:00.560432 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m45zk" event={"ID":"bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad","Type":"ContainerDied","Data":"39a398e4235cbcae5760156ddd15473991d9d2f54537c8d643fcc4fa36c1183e"} Jan 22 09:09:00 crc 
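The ErrImagePull above for community-operator-index:v4.18 is immediately followed by ImagePullBackOff: rather than retrying the pull at once, the kubelet parks it in a capped exponential back-off. A rough sketch of that retry shape; the delays here are scaled down for the demo and are illustrative assumptions, not values read from this log (the kubelet's real scale is on the order of seconds growing to minutes):

package main

import (
	"errors"
	"fmt"
	"time"
)

// pullWithBackoff retries a failing pull with capped exponential back-off,
// the general shape of ImagePullBackOff behaviour. delay and maxDelay are
// illustrative parameters.
func pullWithBackoff(pull func() error, attempts int, delay, maxDelay time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = pull(); err == nil {
			return nil
		}
		fmt.Printf("pull failed (%v); backing off %s\n", err, delay)
		time.Sleep(delay)
		if delay *= 2; delay > maxDelay {
			delay = maxDelay // cap the growth
		}
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}

func main() {
	fails := 0
	err := pullWithBackoff(func() error {
		fails++
		if fails < 3 {
			return errors.New("rpc error: copying config: context canceled")
		}
		return nil
	}, 5, 10*time.Millisecond, 80*time.Millisecond)
	fmt.Println("result:", err)
}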
kubenswrapper[4811]: I0122 09:09:00.569806 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4c4pw" event={"ID":"4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770","Type":"ContainerStarted","Data":"c9b5097f25584facce5df7f4a70aac41a1e8034baecbf12a25413da2ac36323d"} Jan 22 09:09:00 crc kubenswrapper[4811]: I0122 09:09:00.580349 4811 generic.go:334] "Generic (PLEG): container finished" podID="e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5" containerID="8e0ad95261c9ccc563eb147eda269573c20dfbd2824a4a8cd9c830bc2ffee527" exitCode=0 Jan 22 09:09:00 crc kubenswrapper[4811]: I0122 09:09:00.580487 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8bdsg" event={"ID":"e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5","Type":"ContainerDied","Data":"8e0ad95261c9ccc563eb147eda269573c20dfbd2824a4a8cd9c830bc2ffee527"} Jan 22 09:09:00 crc kubenswrapper[4811]: I0122 09:09:00.584079 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mv8mw" event={"ID":"516e9ebd-6782-459d-99df-6902a5098c4e","Type":"ContainerStarted","Data":"b20405eb2b8cefb35578dad5607f6f2fdc5710c15e0e20451660493e5a215037"} Jan 22 09:09:00 crc kubenswrapper[4811]: I0122 09:09:00.597985 4811 generic.go:334] "Generic (PLEG): container finished" podID="7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d" containerID="abb8729caef7c1226ec63fb98b164d533c45977491b0fd47c35ed135e5b68258" exitCode=0 Jan 22 09:09:00 crc kubenswrapper[4811]: I0122 09:09:00.598165 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-68p7c" event={"ID":"7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d","Type":"ContainerDied","Data":"abb8729caef7c1226ec63fb98b164d533c45977491b0fd47c35ed135e5b68258"} Jan 22 09:09:00 crc kubenswrapper[4811]: I0122 09:09:00.605104 4811 generic.go:334] "Generic (PLEG): container finished" podID="89c58cc2-8741-4c9f-95fa-c73db10026d3" containerID="fae9d29fa8390f035eec8bae566c123ade19d3326515392d5131b6296c858330" exitCode=0 Jan 22 09:09:00 crc kubenswrapper[4811]: I0122 09:09:00.605146 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l7qd2" event={"ID":"89c58cc2-8741-4c9f-95fa-c73db10026d3","Type":"ContainerDied","Data":"fae9d29fa8390f035eec8bae566c123ade19d3326515392d5131b6296c858330"} Jan 22 09:09:00 crc kubenswrapper[4811]: I0122 09:09:00.662659 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gmwq7"] Jan 22 09:09:01 crc kubenswrapper[4811]: I0122 09:09:01.623531 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8bdsg" event={"ID":"e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5","Type":"ContainerStarted","Data":"245fa80f25a7d46c4a891714e74ec740a748db07fd40081c0d04db67ccc42979"} Jan 22 09:09:01 crc kubenswrapper[4811]: I0122 09:09:01.628304 4811 generic.go:334] "Generic (PLEG): container finished" podID="516e9ebd-6782-459d-99df-6902a5098c4e" containerID="b20405eb2b8cefb35578dad5607f6f2fdc5710c15e0e20451660493e5a215037" exitCode=0 Jan 22 09:09:01 crc kubenswrapper[4811]: I0122 09:09:01.628371 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mv8mw" event={"ID":"516e9ebd-6782-459d-99df-6902a5098c4e","Type":"ContainerDied","Data":"b20405eb2b8cefb35578dad5607f6f2fdc5710c15e0e20451660493e5a215037"} Jan 22 09:09:01 crc kubenswrapper[4811]: I0122 09:09:01.631818 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-68p7c" event={"ID":"7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d","Type":"ContainerStarted","Data":"6a2cd8a886b9cb0632a1eaec0778b05367ddf7198965b4fa9d8985e7de1a05d5"} Jan 22 09:09:01 crc kubenswrapper[4811]: I0122 09:09:01.637309 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l7qd2" event={"ID":"89c58cc2-8741-4c9f-95fa-c73db10026d3","Type":"ContainerStarted","Data":"e1e5b6b3124447a40bd159405b42eabe26d0220bd53b1c8a8d8c0e5528339e8a"} Jan 22 09:09:01 crc kubenswrapper[4811]: I0122 09:09:01.639743 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8bdsg" podStartSLOduration=3.63107718 podStartE2EDuration="29.639724627s" podCreationTimestamp="2026-01-22 09:08:32 +0000 UTC" firstStartedPulling="2026-01-22 09:08:35.276366353 +0000 UTC m=+159.598553476" lastFinishedPulling="2026-01-22 09:09:01.285013799 +0000 UTC m=+185.607200923" observedRunningTime="2026-01-22 09:09:01.638788541 +0000 UTC m=+185.960975663" watchObservedRunningTime="2026-01-22 09:09:01.639724627 +0000 UTC m=+185.961911749" Jan 22 09:09:01 crc kubenswrapper[4811]: I0122 09:09:01.645400 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vjcnk" event={"ID":"6dcdfa22-db17-42c3-a366-f96b6dd7b27d","Type":"ContainerStarted","Data":"56da9657b335d49cad6285cb856da14ac6cc01c96d0e9ecdb73112baf4168d9c"} Jan 22 09:09:01 crc kubenswrapper[4811]: I0122 09:09:01.649172 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m45zk" event={"ID":"bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad","Type":"ContainerStarted","Data":"e9505f3f50387d72158fe1aedf94a25e8324b0f2cab0b97d9c2d5cf9b6f9ec3d"} Jan 22 09:09:01 crc kubenswrapper[4811]: I0122 09:09:01.651275 4811 generic.go:334] "Generic (PLEG): container finished" podID="4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770" containerID="c9b5097f25584facce5df7f4a70aac41a1e8034baecbf12a25413da2ac36323d" exitCode=0 Jan 22 09:09:01 crc kubenswrapper[4811]: I0122 09:09:01.651320 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4c4pw" event={"ID":"4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770","Type":"ContainerDied","Data":"c9b5097f25584facce5df7f4a70aac41a1e8034baecbf12a25413da2ac36323d"} Jan 22 09:09:01 crc kubenswrapper[4811]: I0122 09:09:01.677273 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-l7qd2" podStartSLOduration=2.413871934 podStartE2EDuration="31.677263087s" podCreationTimestamp="2026-01-22 09:08:30 +0000 UTC" firstStartedPulling="2026-01-22 09:08:31.987989539 +0000 UTC m=+156.310176663" lastFinishedPulling="2026-01-22 09:09:01.251380693 +0000 UTC m=+185.573567816" observedRunningTime="2026-01-22 09:09:01.658764242 +0000 UTC m=+185.980951365" watchObservedRunningTime="2026-01-22 09:09:01.677263087 +0000 UTC m=+185.999450210" Jan 22 09:09:01 crc kubenswrapper[4811]: I0122 09:09:01.678799 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-68p7c" podStartSLOduration=2.417813507 podStartE2EDuration="31.678794315s" podCreationTimestamp="2026-01-22 09:08:30 +0000 UTC" firstStartedPulling="2026-01-22 09:08:31.996701218 +0000 UTC m=+156.318888341" lastFinishedPulling="2026-01-22 09:09:01.257682026 +0000 UTC m=+185.579869149" observedRunningTime="2026-01-22 09:09:01.675447515 +0000 UTC 
m=+185.997634627" watchObservedRunningTime="2026-01-22 09:09:01.678794315 +0000 UTC m=+186.000981438" Jan 22 09:09:01 crc kubenswrapper[4811]: I0122 09:09:01.746681 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-m45zk" podStartSLOduration=2.3530535 podStartE2EDuration="31.746658416s" podCreationTimestamp="2026-01-22 09:08:30 +0000 UTC" firstStartedPulling="2026-01-22 09:08:31.990342496 +0000 UTC m=+156.312529610" lastFinishedPulling="2026-01-22 09:09:01.383947403 +0000 UTC m=+185.706134526" observedRunningTime="2026-01-22 09:09:01.743962773 +0000 UTC m=+186.066149886" watchObservedRunningTime="2026-01-22 09:09:01.746658416 +0000 UTC m=+186.068845539" Jan 22 09:09:01 crc kubenswrapper[4811]: I0122 09:09:01.767435 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vjcnk" podStartSLOduration=2.729311725 podStartE2EDuration="29.767410891s" podCreationTimestamp="2026-01-22 09:08:32 +0000 UTC" firstStartedPulling="2026-01-22 09:08:34.14480403 +0000 UTC m=+158.466991153" lastFinishedPulling="2026-01-22 09:09:01.182903196 +0000 UTC m=+185.505090319" observedRunningTime="2026-01-22 09:09:01.76556455 +0000 UTC m=+186.087751673" watchObservedRunningTime="2026-01-22 09:09:01.767410891 +0000 UTC m=+186.089598015" Jan 22 09:09:01 crc kubenswrapper[4811]: I0122 09:09:01.932734 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 09:09:02 crc kubenswrapper[4811]: I0122 09:09:02.658425 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4c4pw" event={"ID":"4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770","Type":"ContainerStarted","Data":"d9bd0852868e147a78e9b073b9d68853846be9cb2f957126200402c62a64c504"} Jan 22 09:09:02 crc kubenswrapper[4811]: I0122 09:09:02.660484 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mv8mw" event={"ID":"516e9ebd-6782-459d-99df-6902a5098c4e","Type":"ContainerStarted","Data":"3d5449f78c39d103772dd07867211405484605c116e1d07b55e27a5acc81cb90"} Jan 22 09:09:02 crc kubenswrapper[4811]: I0122 09:09:02.684535 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4c4pw" podStartSLOduration=2.722162303 podStartE2EDuration="29.684513378s" podCreationTimestamp="2026-01-22 09:08:33 +0000 UTC" firstStartedPulling="2026-01-22 09:08:35.262517957 +0000 UTC m=+159.584705081" lastFinishedPulling="2026-01-22 09:09:02.224869034 +0000 UTC m=+186.547056156" observedRunningTime="2026-01-22 09:09:02.683116985 +0000 UTC m=+187.005304108" watchObservedRunningTime="2026-01-22 09:09:02.684513378 +0000 UTC m=+187.006700491" Jan 22 09:09:02 crc kubenswrapper[4811]: I0122 09:09:02.710038 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mv8mw" podStartSLOduration=2.812494454 podStartE2EDuration="29.710016733s" podCreationTimestamp="2026-01-22 09:08:33 +0000 UTC" firstStartedPulling="2026-01-22 09:08:35.230895643 +0000 UTC m=+159.553082766" lastFinishedPulling="2026-01-22 09:09:02.128417932 +0000 UTC m=+186.450605045" observedRunningTime="2026-01-22 09:09:02.704714454 +0000 UTC m=+187.026901577" watchObservedRunningTime="2026-01-22 09:09:02.710016733 +0000 UTC m=+187.032203856" Jan 22 09:09:02 crc kubenswrapper[4811]: I0122 09:09:02.763701 4811 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vjcnk" Jan 22 09:09:02 crc kubenswrapper[4811]: I0122 09:09:02.763848 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vjcnk" Jan 22 09:09:03 crc kubenswrapper[4811]: I0122 09:09:03.191412 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8bdsg" Jan 22 09:09:03 crc kubenswrapper[4811]: I0122 09:09:03.192365 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8bdsg" Jan 22 09:09:03 crc kubenswrapper[4811]: I0122 09:09:03.319688 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5w8tw" Jan 22 09:09:03 crc kubenswrapper[4811]: I0122 09:09:03.595484 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mv8mw" Jan 22 09:09:03 crc kubenswrapper[4811]: I0122 09:09:03.595885 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mv8mw" Jan 22 09:09:03 crc kubenswrapper[4811]: I0122 09:09:03.800074 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4c4pw" Jan 22 09:09:03 crc kubenswrapper[4811]: I0122 09:09:03.800113 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4c4pw" Jan 22 09:09:03 crc kubenswrapper[4811]: I0122 09:09:03.829372 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-vjcnk" podUID="6dcdfa22-db17-42c3-a366-f96b6dd7b27d" containerName="registry-server" probeResult="failure" output=< Jan 22 09:09:03 crc kubenswrapper[4811]: timeout: failed to connect service ":50051" within 1s Jan 22 09:09:03 crc kubenswrapper[4811]: > Jan 22 09:09:04 crc kubenswrapper[4811]: I0122 09:09:04.222815 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-8bdsg" podUID="e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5" containerName="registry-server" probeResult="failure" output=< Jan 22 09:09:04 crc kubenswrapper[4811]: timeout: failed to connect service ":50051" within 1s Jan 22 09:09:04 crc kubenswrapper[4811]: > Jan 22 09:09:04 crc kubenswrapper[4811]: I0122 09:09:04.624515 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mv8mw" podUID="516e9ebd-6782-459d-99df-6902a5098c4e" containerName="registry-server" probeResult="failure" output=< Jan 22 09:09:04 crc kubenswrapper[4811]: timeout: failed to connect service ":50051" within 1s Jan 22 09:09:04 crc kubenswrapper[4811]: > Jan 22 09:09:04 crc kubenswrapper[4811]: I0122 09:09:04.831096 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4c4pw" podUID="4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770" containerName="registry-server" probeResult="failure" output=< Jan 22 09:09:04 crc kubenswrapper[4811]: timeout: failed to connect service ":50051" within 1s Jan 22 09:09:04 crc kubenswrapper[4811]: > Jan 22 09:09:05 crc kubenswrapper[4811]: I0122 09:09:05.501782 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:09:05 crc kubenswrapper[4811]: I0122 09:09:05.501832 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:09:10 crc kubenswrapper[4811]: I0122 09:09:10.365908 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-l7qd2" Jan 22 09:09:10 crc kubenswrapper[4811]: I0122 09:09:10.366556 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-l7qd2" Jan 22 09:09:10 crc kubenswrapper[4811]: I0122 09:09:10.397790 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-l7qd2" Jan 22 09:09:10 crc kubenswrapper[4811]: I0122 09:09:10.584746 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-68p7c" Jan 22 09:09:10 crc kubenswrapper[4811]: I0122 09:09:10.584783 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-68p7c" Jan 22 09:09:10 crc kubenswrapper[4811]: I0122 09:09:10.615233 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-68p7c" Jan 22 09:09:10 crc kubenswrapper[4811]: I0122 09:09:10.747916 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-68p7c" Jan 22 09:09:10 crc kubenswrapper[4811]: I0122 09:09:10.749827 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-l7qd2" Jan 22 09:09:10 crc kubenswrapper[4811]: I0122 09:09:10.996664 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-m45zk" Jan 22 09:09:10 crc kubenswrapper[4811]: I0122 09:09:10.996733 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-m45zk" Jan 22 09:09:11 crc kubenswrapper[4811]: I0122 09:09:11.027612 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-m45zk" Jan 22 09:09:11 crc kubenswrapper[4811]: I0122 09:09:11.755006 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-m45zk" Jan 22 09:09:12 crc kubenswrapper[4811]: I0122 09:09:12.430445 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m45zk"] Jan 22 09:09:12 crc kubenswrapper[4811]: I0122 09:09:12.795724 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vjcnk" Jan 22 09:09:12 crc kubenswrapper[4811]: I0122 09:09:12.839761 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vjcnk" Jan 22 09:09:13 crc kubenswrapper[4811]: I0122 09:09:13.228418 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8bdsg" Jan 22 09:09:13 crc 
kubenswrapper[4811]: I0122 09:09:13.263032 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8bdsg" Jan 22 09:09:13 crc kubenswrapper[4811]: I0122 09:09:13.627166 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mv8mw" Jan 22 09:09:13 crc kubenswrapper[4811]: I0122 09:09:13.658528 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mv8mw" Jan 22 09:09:13 crc kubenswrapper[4811]: I0122 09:09:13.729894 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-m45zk" podUID="bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad" containerName="registry-server" containerID="cri-o://e9505f3f50387d72158fe1aedf94a25e8324b0f2cab0b97d9c2d5cf9b6f9ec3d" gracePeriod=2 Jan 22 09:09:13 crc kubenswrapper[4811]: I0122 09:09:13.833560 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4c4pw" Jan 22 09:09:13 crc kubenswrapper[4811]: I0122 09:09:13.874842 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4c4pw" Jan 22 09:09:14 crc kubenswrapper[4811]: I0122 09:09:14.105790 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m45zk" Jan 22 09:09:14 crc kubenswrapper[4811]: I0122 09:09:14.129818 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5q72\" (UniqueName: \"kubernetes.io/projected/bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad-kube-api-access-l5q72\") pod \"bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad\" (UID: \"bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad\") " Jan 22 09:09:14 crc kubenswrapper[4811]: I0122 09:09:14.129911 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad-utilities\") pod \"bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad\" (UID: \"bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad\") " Jan 22 09:09:14 crc kubenswrapper[4811]: I0122 09:09:14.130643 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad-utilities" (OuterVolumeSpecName: "utilities") pod "bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad" (UID: "bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:09:14 crc kubenswrapper[4811]: I0122 09:09:14.134703 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad-kube-api-access-l5q72" (OuterVolumeSpecName: "kube-api-access-l5q72") pod "bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad" (UID: "bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad"). InnerVolumeSpecName "kube-api-access-l5q72". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:09:14 crc kubenswrapper[4811]: I0122 09:09:14.230489 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad-catalog-content\") pod \"bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad\" (UID: \"bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad\") " Jan 22 09:09:14 crc kubenswrapper[4811]: I0122 09:09:14.230724 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5q72\" (UniqueName: \"kubernetes.io/projected/bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad-kube-api-access-l5q72\") on node \"crc\" DevicePath \"\"" Jan 22 09:09:14 crc kubenswrapper[4811]: I0122 09:09:14.230744 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:09:14 crc kubenswrapper[4811]: I0122 09:09:14.261589 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad" (UID: "bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:09:14 crc kubenswrapper[4811]: I0122 09:09:14.331552 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:09:14 crc kubenswrapper[4811]: I0122 09:09:14.354588 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 22 09:09:14 crc kubenswrapper[4811]: E0122 09:09:14.354833 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad" containerName="extract-utilities" Jan 22 09:09:14 crc kubenswrapper[4811]: I0122 09:09:14.354849 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad" containerName="extract-utilities" Jan 22 09:09:14 crc kubenswrapper[4811]: E0122 09:09:14.354858 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad" containerName="registry-server" Jan 22 09:09:14 crc kubenswrapper[4811]: I0122 09:09:14.354865 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad" containerName="registry-server" Jan 22 09:09:14 crc kubenswrapper[4811]: E0122 09:09:14.354892 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad" containerName="extract-content" Jan 22 09:09:14 crc kubenswrapper[4811]: I0122 09:09:14.354897 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad" containerName="extract-content" Jan 22 09:09:14 crc kubenswrapper[4811]: E0122 09:09:14.354907 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7552110-67b1-4150-8f6a-dfe6fb065d8f" containerName="pruner" Jan 22 09:09:14 crc kubenswrapper[4811]: I0122 09:09:14.354912 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7552110-67b1-4150-8f6a-dfe6fb065d8f" containerName="pruner" Jan 22 09:09:14 crc kubenswrapper[4811]: E0122 09:09:14.354922 4811 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="280aa649-3efa-4218-952c-18a11bfd9a42" containerName="pruner" Jan 22 09:09:14 crc kubenswrapper[4811]: I0122 09:09:14.354927 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="280aa649-3efa-4218-952c-18a11bfd9a42" containerName="pruner" Jan 22 09:09:14 crc kubenswrapper[4811]: I0122 09:09:14.355029 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="280aa649-3efa-4218-952c-18a11bfd9a42" containerName="pruner" Jan 22 09:09:14 crc kubenswrapper[4811]: I0122 09:09:14.355042 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7552110-67b1-4150-8f6a-dfe6fb065d8f" containerName="pruner" Jan 22 09:09:14 crc kubenswrapper[4811]: I0122 09:09:14.355050 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad" containerName="registry-server" Jan 22 09:09:14 crc kubenswrapper[4811]: I0122 09:09:14.355382 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 09:09:14 crc kubenswrapper[4811]: I0122 09:09:14.357855 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 22 09:09:14 crc kubenswrapper[4811]: I0122 09:09:14.358045 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 22 09:09:14 crc kubenswrapper[4811]: I0122 09:09:14.368480 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 22 09:09:14 crc kubenswrapper[4811]: I0122 09:09:14.432411 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9bfa41bb-1da0-464f-aa9f-6fda0acc4a95-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9bfa41bb-1da0-464f-aa9f-6fda0acc4a95\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 09:09:14 crc kubenswrapper[4811]: I0122 09:09:14.432689 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9bfa41bb-1da0-464f-aa9f-6fda0acc4a95-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9bfa41bb-1da0-464f-aa9f-6fda0acc4a95\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 09:09:14 crc kubenswrapper[4811]: I0122 09:09:14.533703 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9bfa41bb-1da0-464f-aa9f-6fda0acc4a95-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9bfa41bb-1da0-464f-aa9f-6fda0acc4a95\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 09:09:14 crc kubenswrapper[4811]: I0122 09:09:14.533914 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9bfa41bb-1da0-464f-aa9f-6fda0acc4a95-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9bfa41bb-1da0-464f-aa9f-6fda0acc4a95\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 09:09:14 crc kubenswrapper[4811]: I0122 09:09:14.534344 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9bfa41bb-1da0-464f-aa9f-6fda0acc4a95-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9bfa41bb-1da0-464f-aa9f-6fda0acc4a95\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 
09:09:14 crc kubenswrapper[4811]: I0122 09:09:14.550450 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9bfa41bb-1da0-464f-aa9f-6fda0acc4a95-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9bfa41bb-1da0-464f-aa9f-6fda0acc4a95\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 09:09:14 crc kubenswrapper[4811]: I0122 09:09:14.666719 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 09:09:14 crc kubenswrapper[4811]: I0122 09:09:14.744041 4811 generic.go:334] "Generic (PLEG): container finished" podID="bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad" containerID="e9505f3f50387d72158fe1aedf94a25e8324b0f2cab0b97d9c2d5cf9b6f9ec3d" exitCode=0 Jan 22 09:09:14 crc kubenswrapper[4811]: I0122 09:09:14.744112 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m45zk" Jan 22 09:09:14 crc kubenswrapper[4811]: I0122 09:09:14.744144 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m45zk" event={"ID":"bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad","Type":"ContainerDied","Data":"e9505f3f50387d72158fe1aedf94a25e8324b0f2cab0b97d9c2d5cf9b6f9ec3d"} Jan 22 09:09:14 crc kubenswrapper[4811]: I0122 09:09:14.744233 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m45zk" event={"ID":"bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad","Type":"ContainerDied","Data":"3cddf5fa3f09e085521df9275c3de6cbc84ae9f7194d69e1c95a70a23c4a9eb1"} Jan 22 09:09:14 crc kubenswrapper[4811]: I0122 09:09:14.744285 4811 scope.go:117] "RemoveContainer" containerID="e9505f3f50387d72158fe1aedf94a25e8324b0f2cab0b97d9c2d5cf9b6f9ec3d" Jan 22 09:09:14 crc kubenswrapper[4811]: I0122 09:09:14.781199 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m45zk"] Jan 22 09:09:14 crc kubenswrapper[4811]: I0122 09:09:14.783985 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-m45zk"] Jan 22 09:09:14 crc kubenswrapper[4811]: I0122 09:09:14.788493 4811 scope.go:117] "RemoveContainer" containerID="39a398e4235cbcae5760156ddd15473991d9d2f54537c8d643fcc4fa36c1183e" Jan 22 09:09:14 crc kubenswrapper[4811]: I0122 09:09:14.805384 4811 scope.go:117] "RemoveContainer" containerID="d34f4ed0fdc8ae3cde89db0500a74822c5397895f1168dc62a61dd24d34b196e" Jan 22 09:09:14 crc kubenswrapper[4811]: I0122 09:09:14.820877 4811 scope.go:117] "RemoveContainer" containerID="e9505f3f50387d72158fe1aedf94a25e8324b0f2cab0b97d9c2d5cf9b6f9ec3d" Jan 22 09:09:14 crc kubenswrapper[4811]: E0122 09:09:14.821282 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9505f3f50387d72158fe1aedf94a25e8324b0f2cab0b97d9c2d5cf9b6f9ec3d\": container with ID starting with e9505f3f50387d72158fe1aedf94a25e8324b0f2cab0b97d9c2d5cf9b6f9ec3d not found: ID does not exist" containerID="e9505f3f50387d72158fe1aedf94a25e8324b0f2cab0b97d9c2d5cf9b6f9ec3d" Jan 22 09:09:14 crc kubenswrapper[4811]: I0122 09:09:14.821319 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9505f3f50387d72158fe1aedf94a25e8324b0f2cab0b97d9c2d5cf9b6f9ec3d"} err="failed to get container status \"e9505f3f50387d72158fe1aedf94a25e8324b0f2cab0b97d9c2d5cf9b6f9ec3d\": rpc error: code = 
NotFound desc = could not find container \"e9505f3f50387d72158fe1aedf94a25e8324b0f2cab0b97d9c2d5cf9b6f9ec3d\": container with ID starting with e9505f3f50387d72158fe1aedf94a25e8324b0f2cab0b97d9c2d5cf9b6f9ec3d not found: ID does not exist" Jan 22 09:09:14 crc kubenswrapper[4811]: I0122 09:09:14.821575 4811 scope.go:117] "RemoveContainer" containerID="39a398e4235cbcae5760156ddd15473991d9d2f54537c8d643fcc4fa36c1183e" Jan 22 09:09:14 crc kubenswrapper[4811]: E0122 09:09:14.822594 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39a398e4235cbcae5760156ddd15473991d9d2f54537c8d643fcc4fa36c1183e\": container with ID starting with 39a398e4235cbcae5760156ddd15473991d9d2f54537c8d643fcc4fa36c1183e not found: ID does not exist" containerID="39a398e4235cbcae5760156ddd15473991d9d2f54537c8d643fcc4fa36c1183e" Jan 22 09:09:14 crc kubenswrapper[4811]: I0122 09:09:14.822650 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39a398e4235cbcae5760156ddd15473991d9d2f54537c8d643fcc4fa36c1183e"} err="failed to get container status \"39a398e4235cbcae5760156ddd15473991d9d2f54537c8d643fcc4fa36c1183e\": rpc error: code = NotFound desc = could not find container \"39a398e4235cbcae5760156ddd15473991d9d2f54537c8d643fcc4fa36c1183e\": container with ID starting with 39a398e4235cbcae5760156ddd15473991d9d2f54537c8d643fcc4fa36c1183e not found: ID does not exist" Jan 22 09:09:14 crc kubenswrapper[4811]: I0122 09:09:14.822680 4811 scope.go:117] "RemoveContainer" containerID="d34f4ed0fdc8ae3cde89db0500a74822c5397895f1168dc62a61dd24d34b196e" Jan 22 09:09:14 crc kubenswrapper[4811]: E0122 09:09:14.823154 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d34f4ed0fdc8ae3cde89db0500a74822c5397895f1168dc62a61dd24d34b196e\": container with ID starting with d34f4ed0fdc8ae3cde89db0500a74822c5397895f1168dc62a61dd24d34b196e not found: ID does not exist" containerID="d34f4ed0fdc8ae3cde89db0500a74822c5397895f1168dc62a61dd24d34b196e" Jan 22 09:09:14 crc kubenswrapper[4811]: I0122 09:09:14.823175 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d34f4ed0fdc8ae3cde89db0500a74822c5397895f1168dc62a61dd24d34b196e"} err="failed to get container status \"d34f4ed0fdc8ae3cde89db0500a74822c5397895f1168dc62a61dd24d34b196e\": rpc error: code = NotFound desc = could not find container \"d34f4ed0fdc8ae3cde89db0500a74822c5397895f1168dc62a61dd24d34b196e\": container with ID starting with d34f4ed0fdc8ae3cde89db0500a74822c5397895f1168dc62a61dd24d34b196e not found: ID does not exist" Jan 22 09:09:15 crc kubenswrapper[4811]: I0122 09:09:15.058435 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 22 09:09:15 crc kubenswrapper[4811]: I0122 09:09:15.753421 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2fpc" event={"ID":"83b7219e-ce69-408d-92ad-7b58cc6d0b7a","Type":"ContainerStarted","Data":"31dca2b72ef32a296d67da15a5e4de73093c5c233561360ea29d11b92207c61e"} Jan 22 09:09:15 crc kubenswrapper[4811]: I0122 09:09:15.754934 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9bfa41bb-1da0-464f-aa9f-6fda0acc4a95","Type":"ContainerStarted","Data":"d8ddb193839f03d26447e47b678fb372e63c3aafc5b1a93df30e1d07ff950975"} Jan 22 09:09:15 crc 
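The RemoveContainer calls followed by "ContainerStatus from runtime service failed ... NotFound" pairs above are the kubelet deleting a container and then tolerating NotFound on the follow-up status call: once the container is gone, NotFound is the desired outcome, so the error is logged and ignored rather than retried. A sketch of that idempotent-delete pattern; the error sentinel below stands in for the CRI's gRPC NotFound code:

package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the "rpc error: code = NotFound" seen above.
var errNotFound = errors.New("container not found")

// removeContainer treats NotFound from the runtime as success: the desired
// state (container gone) already holds, so the error is logged and dropped.
func removeContainer(runtimeRemove func(id string) error, id string) error {
	if err := runtimeRemove(id); err != nil {
		if errors.Is(err, errNotFound) {
			fmt.Printf("DeleteContainer returned error for %s: %v (ignored)\n", id, err)
			return nil
		}
		return err
	}
	return nil
}

func main() {
	gone := func(id string) error { return fmt.Errorf("status for %s: %w", id, errNotFound) }
	if err := removeContainer(gone, "e9505f3f5038"); err != nil {
		fmt.Println("unexpected:", err)
	} else {
		fmt.Println("removal idempotent: container already gone")
	}
}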
Jan 22 09:09:15 crc kubenswrapper[4811]: I0122 09:09:15.754980 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9bfa41bb-1da0-464f-aa9f-6fda0acc4a95","Type":"ContainerStarted","Data":"d42fb0855311094a40c158c4fae1c77f6b21c336624cd5e6af942de77c5d6f12"}
Jan 22 09:09:15 crc kubenswrapper[4811]: I0122 09:09:15.783486 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=1.783469749 podStartE2EDuration="1.783469749s" podCreationTimestamp="2026-01-22 09:09:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:09:15.781539005 +0000 UTC m=+200.103726118" watchObservedRunningTime="2026-01-22 09:09:15.783469749 +0000 UTC m=+200.105656871"
Jan 22 09:09:15 crc kubenswrapper[4811]: I0122 09:09:15.997270 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad" path="/var/lib/kubelet/pods/bf6a8a20-43c6-48f6-ba8f-585a3e4dd7ad/volumes"
Jan 22 09:09:16 crc kubenswrapper[4811]: I0122 09:09:16.764805 4811 generic.go:334] "Generic (PLEG): container finished" podID="83b7219e-ce69-408d-92ad-7b58cc6d0b7a" containerID="31dca2b72ef32a296d67da15a5e4de73093c5c233561360ea29d11b92207c61e" exitCode=0
Jan 22 09:09:16 crc kubenswrapper[4811]: I0122 09:09:16.765138 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2fpc" event={"ID":"83b7219e-ce69-408d-92ad-7b58cc6d0b7a","Type":"ContainerDied","Data":"31dca2b72ef32a296d67da15a5e4de73093c5c233561360ea29d11b92207c61e"}
Jan 22 09:09:16 crc kubenswrapper[4811]: I0122 09:09:16.767612 4811 generic.go:334] "Generic (PLEG): container finished" podID="9bfa41bb-1da0-464f-aa9f-6fda0acc4a95" containerID="d8ddb193839f03d26447e47b678fb372e63c3aafc5b1a93df30e1d07ff950975" exitCode=0
Jan 22 09:09:16 crc kubenswrapper[4811]: I0122 09:09:16.767703 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9bfa41bb-1da0-464f-aa9f-6fda0acc4a95","Type":"ContainerDied","Data":"d8ddb193839f03d26447e47b678fb372e63c3aafc5b1a93df30e1d07ff950975"}
Jan 22 09:09:16 crc kubenswrapper[4811]: I0122 09:09:16.825489 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8bdsg"]
Jan 22 09:09:16 crc kubenswrapper[4811]: I0122 09:09:16.825953 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8bdsg" podUID="e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5" containerName="registry-server" containerID="cri-o://245fa80f25a7d46c4a891714e74ec740a748db07fd40081c0d04db67ccc42979" gracePeriod=2
Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.024621 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4c4pw"]
Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.024838 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4c4pw" podUID="4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770" containerName="registry-server" containerID="cri-o://d9bd0852868e147a78e9b073b9d68853846be9cb2f957126200402c62a64c504" gracePeriod=2
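The "Observed pod startup duration" record above is plain timestamp arithmetic: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (the zero-valued pulling timestamps mean no image pull was needed). A small Go sketch reproducing the figure from the values in the log, trimming Go's monotonic-clock suffix before parsing:

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

// layout matches how time.Time.String() renders the timestamps in the log.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	// Drop the monotonic-clock suffix ("m=+200.105656871") if present.
	if i := strings.Index(s, " m=+"); i >= 0 {
		s = s[:i]
	}
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-22 09:09:14 +0000 UTC")
	running := mustParse("2026-01-22 09:09:15.783469749 +0000 UTC m=+200.105656871")
	fmt.Println(running.Sub(created)) // 1.783469749s, as logged
}
```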
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8bdsg" Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.271861 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5-catalog-content\") pod \"e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5\" (UID: \"e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5\") " Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.271934 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwcrp\" (UniqueName: \"kubernetes.io/projected/e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5-kube-api-access-jwcrp\") pod \"e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5\" (UID: \"e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5\") " Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.271992 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5-utilities\") pod \"e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5\" (UID: \"e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5\") " Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.279807 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5-kube-api-access-jwcrp" (OuterVolumeSpecName: "kube-api-access-jwcrp") pod "e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5" (UID: "e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5"). InnerVolumeSpecName "kube-api-access-jwcrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.282142 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5-utilities" (OuterVolumeSpecName: "utilities") pod "e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5" (UID: "e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.295936 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5" (UID: "e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.378331 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwcrp\" (UniqueName: \"kubernetes.io/projected/e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5-kube-api-access-jwcrp\") on node \"crc\" DevicePath \"\"" Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.378380 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.378391 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.382424 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4c4pw" Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.580090 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770-catalog-content\") pod \"4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770\" (UID: \"4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770\") " Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.580611 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770-utilities\") pod \"4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770\" (UID: \"4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770\") " Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.580719 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ss7cx\" (UniqueName: \"kubernetes.io/projected/4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770-kube-api-access-ss7cx\") pod \"4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770\" (UID: \"4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770\") " Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.581214 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770-utilities" (OuterVolumeSpecName: "utilities") pod "4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770" (UID: "4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.586718 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770-kube-api-access-ss7cx" (OuterVolumeSpecName: "kube-api-access-ss7cx") pod "4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770" (UID: "4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770"). InnerVolumeSpecName "kube-api-access-ss7cx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.682462 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.682773 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ss7cx\" (UniqueName: \"kubernetes.io/projected/4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770-kube-api-access-ss7cx\") on node \"crc\" DevicePath \"\"" Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.686806 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770" (UID: "4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.775669 4811 generic.go:334] "Generic (PLEG): container finished" podID="4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770" containerID="d9bd0852868e147a78e9b073b9d68853846be9cb2f957126200402c62a64c504" exitCode=0 Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.775720 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4c4pw" event={"ID":"4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770","Type":"ContainerDied","Data":"d9bd0852868e147a78e9b073b9d68853846be9cb2f957126200402c62a64c504"} Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.775781 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4c4pw" event={"ID":"4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770","Type":"ContainerDied","Data":"951fa41bfaec855a580f53011848c9513b222ea9eb87ed8ba1bf49a281a07b2b"} Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.775800 4811 scope.go:117] "RemoveContainer" containerID="d9bd0852868e147a78e9b073b9d68853846be9cb2f957126200402c62a64c504" Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.776697 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4c4pw" Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.784704 4811 generic.go:334] "Generic (PLEG): container finished" podID="e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5" containerID="245fa80f25a7d46c4a891714e74ec740a748db07fd40081c0d04db67ccc42979" exitCode=0 Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.784756 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8bdsg" event={"ID":"e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5","Type":"ContainerDied","Data":"245fa80f25a7d46c4a891714e74ec740a748db07fd40081c0d04db67ccc42979"} Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.784774 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8bdsg" event={"ID":"e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5","Type":"ContainerDied","Data":"a031f05894b10d98bc1b50cc06f66e1b6d6e4f5db0f90643bcb1e91ffadd97a6"} Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.784853 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8bdsg" Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.785137 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.795961 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2fpc" event={"ID":"83b7219e-ce69-408d-92ad-7b58cc6d0b7a","Type":"ContainerStarted","Data":"49bb649bd0cc8ef4d9efdbfd1bacfcba0e98e844590ed51581a77e0de9734f21"} Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.807846 4811 scope.go:117] "RemoveContainer" containerID="c9b5097f25584facce5df7f4a70aac41a1e8034baecbf12a25413da2ac36323d" Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.830053 4811 scope.go:117] "RemoveContainer" containerID="594acf32a0b5becba904fdf72e2095791885a9c673b9a9b04ee69311009e42d0" Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.838057 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-b2fpc" podStartSLOduration=2.556262747 podStartE2EDuration="47.838046231s" podCreationTimestamp="2026-01-22 09:08:30 +0000 UTC" firstStartedPulling="2026-01-22 09:08:32.001402825 +0000 UTC m=+156.323589938" lastFinishedPulling="2026-01-22 09:09:17.283186299 +0000 UTC m=+201.605373422" observedRunningTime="2026-01-22 09:09:17.824688379 +0000 UTC m=+202.146875502" watchObservedRunningTime="2026-01-22 09:09:17.838046231 +0000 UTC m=+202.160233355" Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.838969 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4c4pw"] Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.842267 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4c4pw"] Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.860928 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8bdsg"] Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.866516 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8bdsg"] Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.869532 4811 scope.go:117] "RemoveContainer" containerID="d9bd0852868e147a78e9b073b9d68853846be9cb2f957126200402c62a64c504" Jan 22 09:09:17 crc kubenswrapper[4811]: E0122 09:09:17.871894 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9bd0852868e147a78e9b073b9d68853846be9cb2f957126200402c62a64c504\": container with ID starting with d9bd0852868e147a78e9b073b9d68853846be9cb2f957126200402c62a64c504 not found: ID does not exist" containerID="d9bd0852868e147a78e9b073b9d68853846be9cb2f957126200402c62a64c504" Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.871924 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9bd0852868e147a78e9b073b9d68853846be9cb2f957126200402c62a64c504"} err="failed to get container status \"d9bd0852868e147a78e9b073b9d68853846be9cb2f957126200402c62a64c504\": rpc error: code = NotFound desc = could not find container \"d9bd0852868e147a78e9b073b9d68853846be9cb2f957126200402c62a64c504\": container with ID starting with d9bd0852868e147a78e9b073b9d68853846be9cb2f957126200402c62a64c504 
not found: ID does not exist" Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.871942 4811 scope.go:117] "RemoveContainer" containerID="c9b5097f25584facce5df7f4a70aac41a1e8034baecbf12a25413da2ac36323d" Jan 22 09:09:17 crc kubenswrapper[4811]: E0122 09:09:17.872919 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9b5097f25584facce5df7f4a70aac41a1e8034baecbf12a25413da2ac36323d\": container with ID starting with c9b5097f25584facce5df7f4a70aac41a1e8034baecbf12a25413da2ac36323d not found: ID does not exist" containerID="c9b5097f25584facce5df7f4a70aac41a1e8034baecbf12a25413da2ac36323d" Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.872941 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9b5097f25584facce5df7f4a70aac41a1e8034baecbf12a25413da2ac36323d"} err="failed to get container status \"c9b5097f25584facce5df7f4a70aac41a1e8034baecbf12a25413da2ac36323d\": rpc error: code = NotFound desc = could not find container \"c9b5097f25584facce5df7f4a70aac41a1e8034baecbf12a25413da2ac36323d\": container with ID starting with c9b5097f25584facce5df7f4a70aac41a1e8034baecbf12a25413da2ac36323d not found: ID does not exist" Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.872960 4811 scope.go:117] "RemoveContainer" containerID="594acf32a0b5becba904fdf72e2095791885a9c673b9a9b04ee69311009e42d0" Jan 22 09:09:17 crc kubenswrapper[4811]: E0122 09:09:17.873246 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"594acf32a0b5becba904fdf72e2095791885a9c673b9a9b04ee69311009e42d0\": container with ID starting with 594acf32a0b5becba904fdf72e2095791885a9c673b9a9b04ee69311009e42d0 not found: ID does not exist" containerID="594acf32a0b5becba904fdf72e2095791885a9c673b9a9b04ee69311009e42d0" Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.873266 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"594acf32a0b5becba904fdf72e2095791885a9c673b9a9b04ee69311009e42d0"} err="failed to get container status \"594acf32a0b5becba904fdf72e2095791885a9c673b9a9b04ee69311009e42d0\": rpc error: code = NotFound desc = could not find container \"594acf32a0b5becba904fdf72e2095791885a9c673b9a9b04ee69311009e42d0\": container with ID starting with 594acf32a0b5becba904fdf72e2095791885a9c673b9a9b04ee69311009e42d0 not found: ID does not exist" Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.873278 4811 scope.go:117] "RemoveContainer" containerID="245fa80f25a7d46c4a891714e74ec740a748db07fd40081c0d04db67ccc42979" Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.902704 4811 scope.go:117] "RemoveContainer" containerID="8e0ad95261c9ccc563eb147eda269573c20dfbd2824a4a8cd9c830bc2ffee527" Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.923545 4811 scope.go:117] "RemoveContainer" containerID="d72c220c62516c32a6f6ace59f5c9b0283137e273ccf08a90033bf783ed46a2f" Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.979907 4811 scope.go:117] "RemoveContainer" containerID="245fa80f25a7d46c4a891714e74ec740a748db07fd40081c0d04db67ccc42979" Jan 22 09:09:17 crc kubenswrapper[4811]: E0122 09:09:17.980948 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"245fa80f25a7d46c4a891714e74ec740a748db07fd40081c0d04db67ccc42979\": container with ID starting with 
245fa80f25a7d46c4a891714e74ec740a748db07fd40081c0d04db67ccc42979 not found: ID does not exist" containerID="245fa80f25a7d46c4a891714e74ec740a748db07fd40081c0d04db67ccc42979" Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.981077 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"245fa80f25a7d46c4a891714e74ec740a748db07fd40081c0d04db67ccc42979"} err="failed to get container status \"245fa80f25a7d46c4a891714e74ec740a748db07fd40081c0d04db67ccc42979\": rpc error: code = NotFound desc = could not find container \"245fa80f25a7d46c4a891714e74ec740a748db07fd40081c0d04db67ccc42979\": container with ID starting with 245fa80f25a7d46c4a891714e74ec740a748db07fd40081c0d04db67ccc42979 not found: ID does not exist" Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.981180 4811 scope.go:117] "RemoveContainer" containerID="8e0ad95261c9ccc563eb147eda269573c20dfbd2824a4a8cd9c830bc2ffee527" Jan 22 09:09:17 crc kubenswrapper[4811]: E0122 09:09:17.982656 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e0ad95261c9ccc563eb147eda269573c20dfbd2824a4a8cd9c830bc2ffee527\": container with ID starting with 8e0ad95261c9ccc563eb147eda269573c20dfbd2824a4a8cd9c830bc2ffee527 not found: ID does not exist" containerID="8e0ad95261c9ccc563eb147eda269573c20dfbd2824a4a8cd9c830bc2ffee527" Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.982695 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e0ad95261c9ccc563eb147eda269573c20dfbd2824a4a8cd9c830bc2ffee527"} err="failed to get container status \"8e0ad95261c9ccc563eb147eda269573c20dfbd2824a4a8cd9c830bc2ffee527\": rpc error: code = NotFound desc = could not find container \"8e0ad95261c9ccc563eb147eda269573c20dfbd2824a4a8cd9c830bc2ffee527\": container with ID starting with 8e0ad95261c9ccc563eb147eda269573c20dfbd2824a4a8cd9c830bc2ffee527 not found: ID does not exist" Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.982722 4811 scope.go:117] "RemoveContainer" containerID="d72c220c62516c32a6f6ace59f5c9b0283137e273ccf08a90033bf783ed46a2f" Jan 22 09:09:17 crc kubenswrapper[4811]: E0122 09:09:17.985791 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d72c220c62516c32a6f6ace59f5c9b0283137e273ccf08a90033bf783ed46a2f\": container with ID starting with d72c220c62516c32a6f6ace59f5c9b0283137e273ccf08a90033bf783ed46a2f not found: ID does not exist" containerID="d72c220c62516c32a6f6ace59f5c9b0283137e273ccf08a90033bf783ed46a2f" Jan 22 09:09:17 crc kubenswrapper[4811]: I0122 09:09:17.985841 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d72c220c62516c32a6f6ace59f5c9b0283137e273ccf08a90033bf783ed46a2f"} err="failed to get container status \"d72c220c62516c32a6f6ace59f5c9b0283137e273ccf08a90033bf783ed46a2f\": rpc error: code = NotFound desc = could not find container \"d72c220c62516c32a6f6ace59f5c9b0283137e273ccf08a90033bf783ed46a2f\": container with ID starting with d72c220c62516c32a6f6ace59f5c9b0283137e273ccf08a90033bf783ed46a2f not found: ID does not exist" Jan 22 09:09:18 crc kubenswrapper[4811]: I0122 09:09:18.005874 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770" path="/var/lib/kubelet/pods/4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770/volumes" Jan 22 09:09:18 crc kubenswrapper[4811]: I0122 09:09:18.006605 
4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5" path="/var/lib/kubelet/pods/e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5/volumes" Jan 22 09:09:18 crc kubenswrapper[4811]: I0122 09:09:18.074863 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 09:09:18 crc kubenswrapper[4811]: I0122 09:09:18.200522 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9bfa41bb-1da0-464f-aa9f-6fda0acc4a95-kubelet-dir\") pod \"9bfa41bb-1da0-464f-aa9f-6fda0acc4a95\" (UID: \"9bfa41bb-1da0-464f-aa9f-6fda0acc4a95\") " Jan 22 09:09:18 crc kubenswrapper[4811]: I0122 09:09:18.200644 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9bfa41bb-1da0-464f-aa9f-6fda0acc4a95-kube-api-access\") pod \"9bfa41bb-1da0-464f-aa9f-6fda0acc4a95\" (UID: \"9bfa41bb-1da0-464f-aa9f-6fda0acc4a95\") " Jan 22 09:09:18 crc kubenswrapper[4811]: I0122 09:09:18.200706 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bfa41bb-1da0-464f-aa9f-6fda0acc4a95-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9bfa41bb-1da0-464f-aa9f-6fda0acc4a95" (UID: "9bfa41bb-1da0-464f-aa9f-6fda0acc4a95"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:09:18 crc kubenswrapper[4811]: I0122 09:09:18.201058 4811 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9bfa41bb-1da0-464f-aa9f-6fda0acc4a95-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 09:09:18 crc kubenswrapper[4811]: I0122 09:09:18.203822 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bfa41bb-1da0-464f-aa9f-6fda0acc4a95-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9bfa41bb-1da0-464f-aa9f-6fda0acc4a95" (UID: "9bfa41bb-1da0-464f-aa9f-6fda0acc4a95"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:09:18 crc kubenswrapper[4811]: I0122 09:09:18.301919 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9bfa41bb-1da0-464f-aa9f-6fda0acc4a95-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 09:09:18 crc kubenswrapper[4811]: I0122 09:09:18.802734 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9bfa41bb-1da0-464f-aa9f-6fda0acc4a95","Type":"ContainerDied","Data":"d42fb0855311094a40c158c4fae1c77f6b21c336624cd5e6af942de77c5d6f12"} Jan 22 09:09:18 crc kubenswrapper[4811]: I0122 09:09:18.803153 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d42fb0855311094a40c158c4fae1c77f6b21c336624cd5e6af942de77c5d6f12" Jan 22 09:09:18 crc kubenswrapper[4811]: I0122 09:09:18.803276 4811 util.go:48] "No ready sandbox for pod can be found. 
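The "Cleaned up orphaned pod volumes dir" lines above are the tail end of pod removal: once nothing is mounted under /var/lib/kubelet/pods/<uid>/volumes for a pod the kubelet no longer tracks, the directory is pruned. A sketch of that scan-and-prune step (illustration only; run it against a scratch directory, never a real kubelet root):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cleanupOrphans removes the volumes dir for every pod UID under
// podsRoot that is not in the active set.
func cleanupOrphans(podsRoot string, active map[string]bool) error {
	entries, err := os.ReadDir(podsRoot)
	if err != nil {
		return err
	}
	for _, e := range entries {
		if !e.IsDir() || active[e.Name()] {
			continue
		}
		volumes := filepath.Join(podsRoot, e.Name(), "volumes")
		if err := os.RemoveAll(volumes); err != nil {
			return err
		}
		fmt.Printf("Cleaned up orphaned pod volumes dir podUID=%q path=%q\n", e.Name(), volumes)
	}
	return nil
}

func main() {
	root, _ := os.MkdirTemp("", "pods") // scratch stand-in for /var/lib/kubelet/pods
	defer os.RemoveAll(root)
	os.MkdirAll(filepath.Join(root, "4d9bfd89", "volumes"), 0o755)    // deleted pod
	os.MkdirAll(filepath.Join(root, "still-alive", "volumes"), 0o755) // active pod
	_ = cleanupOrphans(root, map[string]bool{"still-alive": true})
}
```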
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 09:09:20 crc kubenswrapper[4811]: I0122 09:09:20.787121 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-b2fpc" Jan 22 09:09:20 crc kubenswrapper[4811]: I0122 09:09:20.787463 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-b2fpc" Jan 22 09:09:20 crc kubenswrapper[4811]: I0122 09:09:20.825337 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b2fpc" Jan 22 09:09:21 crc kubenswrapper[4811]: I0122 09:09:21.746546 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 22 09:09:21 crc kubenswrapper[4811]: E0122 09:09:21.746996 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770" containerName="extract-utilities" Jan 22 09:09:21 crc kubenswrapper[4811]: I0122 09:09:21.747079 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770" containerName="extract-utilities" Jan 22 09:09:21 crc kubenswrapper[4811]: E0122 09:09:21.747137 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5" containerName="extract-utilities" Jan 22 09:09:21 crc kubenswrapper[4811]: I0122 09:09:21.747190 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5" containerName="extract-utilities" Jan 22 09:09:21 crc kubenswrapper[4811]: E0122 09:09:21.747253 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5" containerName="registry-server" Jan 22 09:09:21 crc kubenswrapper[4811]: I0122 09:09:21.747303 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5" containerName="registry-server" Jan 22 09:09:21 crc kubenswrapper[4811]: E0122 09:09:21.747359 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770" containerName="extract-content" Jan 22 09:09:21 crc kubenswrapper[4811]: I0122 09:09:21.747411 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770" containerName="extract-content" Jan 22 09:09:21 crc kubenswrapper[4811]: E0122 09:09:21.747459 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5" containerName="extract-content" Jan 22 09:09:21 crc kubenswrapper[4811]: I0122 09:09:21.747509 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5" containerName="extract-content" Jan 22 09:09:21 crc kubenswrapper[4811]: E0122 09:09:21.747559 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bfa41bb-1da0-464f-aa9f-6fda0acc4a95" containerName="pruner" Jan 22 09:09:21 crc kubenswrapper[4811]: I0122 09:09:21.747604 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bfa41bb-1da0-464f-aa9f-6fda0acc4a95" containerName="pruner" Jan 22 09:09:21 crc kubenswrapper[4811]: E0122 09:09:21.747676 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770" containerName="registry-server" Jan 22 09:09:21 crc kubenswrapper[4811]: I0122 09:09:21.747733 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770" 
containerName="registry-server" Jan 22 09:09:21 crc kubenswrapper[4811]: I0122 09:09:21.747890 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d9bfd89-ad04-4234-aaa3-8cbaa1f0c770" containerName="registry-server" Jan 22 09:09:21 crc kubenswrapper[4811]: I0122 09:09:21.747949 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1f79d07-2bae-46a7-92f1-fdd65ec4e0c5" containerName="registry-server" Jan 22 09:09:21 crc kubenswrapper[4811]: I0122 09:09:21.748001 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bfa41bb-1da0-464f-aa9f-6fda0acc4a95" containerName="pruner" Jan 22 09:09:21 crc kubenswrapper[4811]: I0122 09:09:21.748479 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 22 09:09:21 crc kubenswrapper[4811]: I0122 09:09:21.751434 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 22 09:09:21 crc kubenswrapper[4811]: I0122 09:09:21.754608 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 22 09:09:21 crc kubenswrapper[4811]: I0122 09:09:21.758339 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 22 09:09:21 crc kubenswrapper[4811]: I0122 09:09:21.850815 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/740119c0-67c2-4467-8095-b99b843e9d53-kubelet-dir\") pod \"installer-9-crc\" (UID: \"740119c0-67c2-4467-8095-b99b843e9d53\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 09:09:21 crc kubenswrapper[4811]: I0122 09:09:21.850870 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/740119c0-67c2-4467-8095-b99b843e9d53-kube-api-access\") pod \"installer-9-crc\" (UID: \"740119c0-67c2-4467-8095-b99b843e9d53\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 09:09:21 crc kubenswrapper[4811]: I0122 09:09:21.851515 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/740119c0-67c2-4467-8095-b99b843e9d53-var-lock\") pod \"installer-9-crc\" (UID: \"740119c0-67c2-4467-8095-b99b843e9d53\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 09:09:21 crc kubenswrapper[4811]: I0122 09:09:21.953552 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/740119c0-67c2-4467-8095-b99b843e9d53-var-lock\") pod \"installer-9-crc\" (UID: \"740119c0-67c2-4467-8095-b99b843e9d53\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 09:09:21 crc kubenswrapper[4811]: I0122 09:09:21.953668 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/740119c0-67c2-4467-8095-b99b843e9d53-kubelet-dir\") pod \"installer-9-crc\" (UID: \"740119c0-67c2-4467-8095-b99b843e9d53\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 09:09:21 crc kubenswrapper[4811]: I0122 09:09:21.953691 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/740119c0-67c2-4467-8095-b99b843e9d53-var-lock\") pod \"installer-9-crc\" (UID: 
\"740119c0-67c2-4467-8095-b99b843e9d53\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 09:09:21 crc kubenswrapper[4811]: I0122 09:09:21.953705 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/740119c0-67c2-4467-8095-b99b843e9d53-kube-api-access\") pod \"installer-9-crc\" (UID: \"740119c0-67c2-4467-8095-b99b843e9d53\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 09:09:21 crc kubenswrapper[4811]: I0122 09:09:21.953841 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/740119c0-67c2-4467-8095-b99b843e9d53-kubelet-dir\") pod \"installer-9-crc\" (UID: \"740119c0-67c2-4467-8095-b99b843e9d53\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 09:09:21 crc kubenswrapper[4811]: I0122 09:09:21.972857 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/740119c0-67c2-4467-8095-b99b843e9d53-kube-api-access\") pod \"installer-9-crc\" (UID: \"740119c0-67c2-4467-8095-b99b843e9d53\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 09:09:22 crc kubenswrapper[4811]: I0122 09:09:22.061485 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 22 09:09:22 crc kubenswrapper[4811]: I0122 09:09:22.498581 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 22 09:09:22 crc kubenswrapper[4811]: I0122 09:09:22.841442 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"740119c0-67c2-4467-8095-b99b843e9d53","Type":"ContainerStarted","Data":"2ee44217b263139a95a41ab7712c75bc7c9a43b2c8e09b7036d6f5ddee6307b4"} Jan 22 09:09:22 crc kubenswrapper[4811]: I0122 09:09:22.841681 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"740119c0-67c2-4467-8095-b99b843e9d53","Type":"ContainerStarted","Data":"e89de37c7f64e0148dc6233eb1597c37a5c7ce81facf81ea06a7c8e100ac8799"} Jan 22 09:09:22 crc kubenswrapper[4811]: I0122 09:09:22.853728 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.8537056010000001 podStartE2EDuration="1.853705601s" podCreationTimestamp="2026-01-22 09:09:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:09:22.851899031 +0000 UTC m=+207.174086154" watchObservedRunningTime="2026-01-22 09:09:22.853705601 +0000 UTC m=+207.175892724" Jan 22 09:09:25 crc kubenswrapper[4811]: I0122 09:09:25.688979 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" podUID="708ddef5-479e-44ef-a189-c41123a73bbe" containerName="oauth-openshift" containerID="cri-o://aff357e7d0b9e0d1714f204b08bc9fc9f9f7f007005874fbf29fcf7244e246b1" gracePeriod=15 Jan 22 09:09:25 crc kubenswrapper[4811]: I0122 09:09:25.869349 4811 generic.go:334] "Generic (PLEG): container finished" podID="708ddef5-479e-44ef-a189-c41123a73bbe" containerID="aff357e7d0b9e0d1714f204b08bc9fc9f9f7f007005874fbf29fcf7244e246b1" exitCode=0 Jan 22 09:09:25 crc kubenswrapper[4811]: I0122 09:09:25.869444 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" event={"ID":"708ddef5-479e-44ef-a189-c41123a73bbe","Type":"ContainerDied","Data":"aff357e7d0b9e0d1714f204b08bc9fc9f9f7f007005874fbf29fcf7244e246b1"} Jan 22 09:09:26 crc kubenswrapper[4811]: I0122 09:09:26.056221 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" Jan 22 09:09:26 crc kubenswrapper[4811]: I0122 09:09:26.108045 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-system-cliconfig\") pod \"708ddef5-479e-44ef-a189-c41123a73bbe\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " Jan 22 09:09:26 crc kubenswrapper[4811]: I0122 09:09:26.108091 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/708ddef5-479e-44ef-a189-c41123a73bbe-audit-policies\") pod \"708ddef5-479e-44ef-a189-c41123a73bbe\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " Jan 22 09:09:26 crc kubenswrapper[4811]: I0122 09:09:26.108118 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-user-idp-0-file-data\") pod \"708ddef5-479e-44ef-a189-c41123a73bbe\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " Jan 22 09:09:26 crc kubenswrapper[4811]: I0122 09:09:26.108140 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-system-trusted-ca-bundle\") pod \"708ddef5-479e-44ef-a189-c41123a73bbe\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " Jan 22 09:09:26 crc kubenswrapper[4811]: I0122 09:09:26.108164 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-user-template-provider-selection\") pod \"708ddef5-479e-44ef-a189-c41123a73bbe\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " Jan 22 09:09:26 crc kubenswrapper[4811]: I0122 09:09:26.108204 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-system-service-ca\") pod \"708ddef5-479e-44ef-a189-c41123a73bbe\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " Jan 22 09:09:26 crc kubenswrapper[4811]: I0122 09:09:26.108236 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7z6s\" (UniqueName: \"kubernetes.io/projected/708ddef5-479e-44ef-a189-c41123a73bbe-kube-api-access-g7z6s\") pod \"708ddef5-479e-44ef-a189-c41123a73bbe\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " Jan 22 09:09:26 crc kubenswrapper[4811]: I0122 09:09:26.108253 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-system-session\") pod \"708ddef5-479e-44ef-a189-c41123a73bbe\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " Jan 22 09:09:26 crc kubenswrapper[4811]: I0122 09:09:26.108297 4811 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-user-template-error\") pod \"708ddef5-479e-44ef-a189-c41123a73bbe\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " Jan 22 09:09:26 crc kubenswrapper[4811]: I0122 09:09:26.108329 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-user-template-login\") pod \"708ddef5-479e-44ef-a189-c41123a73bbe\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " Jan 22 09:09:26 crc kubenswrapper[4811]: I0122 09:09:26.108352 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/708ddef5-479e-44ef-a189-c41123a73bbe-audit-dir\") pod \"708ddef5-479e-44ef-a189-c41123a73bbe\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " Jan 22 09:09:26 crc kubenswrapper[4811]: I0122 09:09:26.108374 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-system-serving-cert\") pod \"708ddef5-479e-44ef-a189-c41123a73bbe\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " Jan 22 09:09:26 crc kubenswrapper[4811]: I0122 09:09:26.108405 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-system-ocp-branding-template\") pod \"708ddef5-479e-44ef-a189-c41123a73bbe\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " Jan 22 09:09:26 crc kubenswrapper[4811]: I0122 09:09:26.108432 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-system-router-certs\") pod \"708ddef5-479e-44ef-a189-c41123a73bbe\" (UID: \"708ddef5-479e-44ef-a189-c41123a73bbe\") " Jan 22 09:09:26 crc kubenswrapper[4811]: I0122 09:09:26.113788 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/708ddef5-479e-44ef-a189-c41123a73bbe-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "708ddef5-479e-44ef-a189-c41123a73bbe" (UID: "708ddef5-479e-44ef-a189-c41123a73bbe"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:09:26 crc kubenswrapper[4811]: I0122 09:09:26.114340 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "708ddef5-479e-44ef-a189-c41123a73bbe" (UID: "708ddef5-479e-44ef-a189-c41123a73bbe"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:09:26 crc kubenswrapper[4811]: I0122 09:09:26.114640 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "708ddef5-479e-44ef-a189-c41123a73bbe" (UID: "708ddef5-479e-44ef-a189-c41123a73bbe"). 
InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:09:26 crc kubenswrapper[4811]: I0122 09:09:26.117075 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "708ddef5-479e-44ef-a189-c41123a73bbe" (UID: "708ddef5-479e-44ef-a189-c41123a73bbe"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:09:26 crc kubenswrapper[4811]: I0122 09:09:26.120078 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/708ddef5-479e-44ef-a189-c41123a73bbe-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "708ddef5-479e-44ef-a189-c41123a73bbe" (UID: "708ddef5-479e-44ef-a189-c41123a73bbe"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:09:26 crc kubenswrapper[4811]: I0122 09:09:26.121967 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "708ddef5-479e-44ef-a189-c41123a73bbe" (UID: "708ddef5-479e-44ef-a189-c41123a73bbe"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:09:26 crc kubenswrapper[4811]: I0122 09:09:26.122232 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/708ddef5-479e-44ef-a189-c41123a73bbe-kube-api-access-g7z6s" (OuterVolumeSpecName: "kube-api-access-g7z6s") pod "708ddef5-479e-44ef-a189-c41123a73bbe" (UID: "708ddef5-479e-44ef-a189-c41123a73bbe"). InnerVolumeSpecName "kube-api-access-g7z6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:09:26 crc kubenswrapper[4811]: I0122 09:09:26.122393 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "708ddef5-479e-44ef-a189-c41123a73bbe" (UID: "708ddef5-479e-44ef-a189-c41123a73bbe"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:09:26 crc kubenswrapper[4811]: I0122 09:09:26.123218 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "708ddef5-479e-44ef-a189-c41123a73bbe" (UID: "708ddef5-479e-44ef-a189-c41123a73bbe"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:09:26 crc kubenswrapper[4811]: I0122 09:09:26.125760 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "708ddef5-479e-44ef-a189-c41123a73bbe" (UID: "708ddef5-479e-44ef-a189-c41123a73bbe"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:09:26 crc kubenswrapper[4811]: I0122 09:09:26.126177 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "708ddef5-479e-44ef-a189-c41123a73bbe" (UID: "708ddef5-479e-44ef-a189-c41123a73bbe"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:09:26 crc kubenswrapper[4811]: I0122 09:09:26.126519 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "708ddef5-479e-44ef-a189-c41123a73bbe" (UID: "708ddef5-479e-44ef-a189-c41123a73bbe"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:09:26 crc kubenswrapper[4811]: I0122 09:09:26.130140 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "708ddef5-479e-44ef-a189-c41123a73bbe" (UID: "708ddef5-479e-44ef-a189-c41123a73bbe"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:09:26 crc kubenswrapper[4811]: I0122 09:09:26.130677 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "708ddef5-479e-44ef-a189-c41123a73bbe" (UID: "708ddef5-479e-44ef-a189-c41123a73bbe"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:09:26 crc kubenswrapper[4811]: I0122 09:09:26.209765 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 22 09:09:26 crc kubenswrapper[4811]: I0122 09:09:26.209800 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 22 09:09:26 crc kubenswrapper[4811]: I0122 09:09:26.209815 4811 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/708ddef5-479e-44ef-a189-c41123a73bbe-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 22 09:09:26 crc kubenswrapper[4811]: I0122 09:09:26.209828 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:09:26 crc kubenswrapper[4811]: I0122 09:09:26.209841 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:09:26 crc kubenswrapper[4811]: I0122 09:09:26.209855 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 22 09:09:26 crc kubenswrapper[4811]: I0122 09:09:26.209866 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:09:26 crc kubenswrapper[4811]: I0122 09:09:26.209877 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7z6s\" (UniqueName: \"kubernetes.io/projected/708ddef5-479e-44ef-a189-c41123a73bbe-kube-api-access-g7z6s\") on node \"crc\" DevicePath \"\"" Jan 22 09:09:26 crc kubenswrapper[4811]: I0122 09:09:26.209887 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 22 09:09:26 crc kubenswrapper[4811]: I0122 09:09:26.209897 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 22 09:09:26 crc kubenswrapper[4811]: I0122 09:09:26.209906 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 22 09:09:26 crc kubenswrapper[4811]: I0122 09:09:26.209928 4811 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/708ddef5-479e-44ef-a189-c41123a73bbe-audit-dir\") on node 
\"crc\" DevicePath \"\"" Jan 22 09:09:26 crc kubenswrapper[4811]: I0122 09:09:26.209938 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:09:26 crc kubenswrapper[4811]: I0122 09:09:26.209949 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/708ddef5-479e-44ef-a189-c41123a73bbe-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 22 09:09:26 crc kubenswrapper[4811]: I0122 09:09:26.876299 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" event={"ID":"708ddef5-479e-44ef-a189-c41123a73bbe","Type":"ContainerDied","Data":"91d56d670615843faf27681599a20642ab001d934408a138f460299b37dfa017"} Jan 22 09:09:26 crc kubenswrapper[4811]: I0122 09:09:26.876365 4811 scope.go:117] "RemoveContainer" containerID="aff357e7d0b9e0d1714f204b08bc9fc9f9f7f007005874fbf29fcf7244e246b1" Jan 22 09:09:26 crc kubenswrapper[4811]: I0122 09:09:26.876443 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-gmwq7" Jan 22 09:09:26 crc kubenswrapper[4811]: I0122 09:09:26.905389 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gmwq7"] Jan 22 09:09:26 crc kubenswrapper[4811]: I0122 09:09:26.909099 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gmwq7"] Jan 22 09:09:28 crc kubenswrapper[4811]: I0122 09:09:28.001309 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="708ddef5-479e-44ef-a189-c41123a73bbe" path="/var/lib/kubelet/pods/708ddef5-479e-44ef-a189-c41123a73bbe/volumes" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.362675 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-56c748df47-fzfx9"] Jan 22 09:09:29 crc kubenswrapper[4811]: E0122 09:09:29.362941 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="708ddef5-479e-44ef-a189-c41123a73bbe" containerName="oauth-openshift" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.362956 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="708ddef5-479e-44ef-a189-c41123a73bbe" containerName="oauth-openshift" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.363083 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="708ddef5-479e-44ef-a189-c41123a73bbe" containerName="oauth-openshift" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.363477 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-56c748df47-fzfx9" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.368271 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.369454 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.369600 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.370168 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.370270 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.370614 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.370723 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.370780 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.370789 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.371084 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.371264 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.371377 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.384418 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.384993 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-56c748df47-fzfx9"] Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.385960 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.394669 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.446227 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/35f14da1-e093-47e4-9594-5874f3ac440c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-56c748df47-fzfx9\" (UID: \"35f14da1-e093-47e4-9594-5874f3ac440c\") " 
pod="openshift-authentication/oauth-openshift-56c748df47-fzfx9" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.446342 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/35f14da1-e093-47e4-9594-5874f3ac440c-audit-dir\") pod \"oauth-openshift-56c748df47-fzfx9\" (UID: \"35f14da1-e093-47e4-9594-5874f3ac440c\") " pod="openshift-authentication/oauth-openshift-56c748df47-fzfx9" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.446446 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/35f14da1-e093-47e4-9594-5874f3ac440c-v4-0-config-user-template-login\") pod \"oauth-openshift-56c748df47-fzfx9\" (UID: \"35f14da1-e093-47e4-9594-5874f3ac440c\") " pod="openshift-authentication/oauth-openshift-56c748df47-fzfx9" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.446531 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/35f14da1-e093-47e4-9594-5874f3ac440c-v4-0-config-user-template-error\") pod \"oauth-openshift-56c748df47-fzfx9\" (UID: \"35f14da1-e093-47e4-9594-5874f3ac440c\") " pod="openshift-authentication/oauth-openshift-56c748df47-fzfx9" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.446608 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/35f14da1-e093-47e4-9594-5874f3ac440c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-56c748df47-fzfx9\" (UID: \"35f14da1-e093-47e4-9594-5874f3ac440c\") " pod="openshift-authentication/oauth-openshift-56c748df47-fzfx9" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.446733 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/35f14da1-e093-47e4-9594-5874f3ac440c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-56c748df47-fzfx9\" (UID: \"35f14da1-e093-47e4-9594-5874f3ac440c\") " pod="openshift-authentication/oauth-openshift-56c748df47-fzfx9" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.446835 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/35f14da1-e093-47e4-9594-5874f3ac440c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-56c748df47-fzfx9\" (UID: \"35f14da1-e093-47e4-9594-5874f3ac440c\") " pod="openshift-authentication/oauth-openshift-56c748df47-fzfx9" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.446913 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/35f14da1-e093-47e4-9594-5874f3ac440c-audit-policies\") pod \"oauth-openshift-56c748df47-fzfx9\" (UID: \"35f14da1-e093-47e4-9594-5874f3ac440c\") " pod="openshift-authentication/oauth-openshift-56c748df47-fzfx9" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.446992 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/35f14da1-e093-47e4-9594-5874f3ac440c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-56c748df47-fzfx9\" (UID: \"35f14da1-e093-47e4-9594-5874f3ac440c\") " pod="openshift-authentication/oauth-openshift-56c748df47-fzfx9" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.447056 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/35f14da1-e093-47e4-9594-5874f3ac440c-v4-0-config-system-router-certs\") pod \"oauth-openshift-56c748df47-fzfx9\" (UID: \"35f14da1-e093-47e4-9594-5874f3ac440c\") " pod="openshift-authentication/oauth-openshift-56c748df47-fzfx9" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.447124 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/35f14da1-e093-47e4-9594-5874f3ac440c-v4-0-config-system-session\") pod \"oauth-openshift-56c748df47-fzfx9\" (UID: \"35f14da1-e093-47e4-9594-5874f3ac440c\") " pod="openshift-authentication/oauth-openshift-56c748df47-fzfx9" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.447184 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/35f14da1-e093-47e4-9594-5874f3ac440c-v4-0-config-system-service-ca\") pod \"oauth-openshift-56c748df47-fzfx9\" (UID: \"35f14da1-e093-47e4-9594-5874f3ac440c\") " pod="openshift-authentication/oauth-openshift-56c748df47-fzfx9" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.447263 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/35f14da1-e093-47e4-9594-5874f3ac440c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-56c748df47-fzfx9\" (UID: \"35f14da1-e093-47e4-9594-5874f3ac440c\") " pod="openshift-authentication/oauth-openshift-56c748df47-fzfx9" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.447399 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kts7x\" (UniqueName: \"kubernetes.io/projected/35f14da1-e093-47e4-9594-5874f3ac440c-kube-api-access-kts7x\") pod \"oauth-openshift-56c748df47-fzfx9\" (UID: \"35f14da1-e093-47e4-9594-5874f3ac440c\") " pod="openshift-authentication/oauth-openshift-56c748df47-fzfx9" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.548413 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/35f14da1-e093-47e4-9594-5874f3ac440c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-56c748df47-fzfx9\" (UID: \"35f14da1-e093-47e4-9594-5874f3ac440c\") " pod="openshift-authentication/oauth-openshift-56c748df47-fzfx9" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.548548 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/35f14da1-e093-47e4-9594-5874f3ac440c-audit-dir\") pod \"oauth-openshift-56c748df47-fzfx9\" (UID: \"35f14da1-e093-47e4-9594-5874f3ac440c\") " pod="openshift-authentication/oauth-openshift-56c748df47-fzfx9" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.548680 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/35f14da1-e093-47e4-9594-5874f3ac440c-v4-0-config-user-template-login\") pod \"oauth-openshift-56c748df47-fzfx9\" (UID: \"35f14da1-e093-47e4-9594-5874f3ac440c\") " pod="openshift-authentication/oauth-openshift-56c748df47-fzfx9" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.548768 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/35f14da1-e093-47e4-9594-5874f3ac440c-v4-0-config-user-template-error\") pod \"oauth-openshift-56c748df47-fzfx9\" (UID: \"35f14da1-e093-47e4-9594-5874f3ac440c\") " pod="openshift-authentication/oauth-openshift-56c748df47-fzfx9" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.548841 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/35f14da1-e093-47e4-9594-5874f3ac440c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-56c748df47-fzfx9\" (UID: \"35f14da1-e093-47e4-9594-5874f3ac440c\") " pod="openshift-authentication/oauth-openshift-56c748df47-fzfx9" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.548915 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/35f14da1-e093-47e4-9594-5874f3ac440c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-56c748df47-fzfx9\" (UID: \"35f14da1-e093-47e4-9594-5874f3ac440c\") " pod="openshift-authentication/oauth-openshift-56c748df47-fzfx9" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.549012 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/35f14da1-e093-47e4-9594-5874f3ac440c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-56c748df47-fzfx9\" (UID: \"35f14da1-e093-47e4-9594-5874f3ac440c\") " pod="openshift-authentication/oauth-openshift-56c748df47-fzfx9" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.549095 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/35f14da1-e093-47e4-9594-5874f3ac440c-audit-policies\") pod \"oauth-openshift-56c748df47-fzfx9\" (UID: \"35f14da1-e093-47e4-9594-5874f3ac440c\") " pod="openshift-authentication/oauth-openshift-56c748df47-fzfx9" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.549162 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35f14da1-e093-47e4-9594-5874f3ac440c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-56c748df47-fzfx9\" (UID: \"35f14da1-e093-47e4-9594-5874f3ac440c\") " pod="openshift-authentication/oauth-openshift-56c748df47-fzfx9" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.549248 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/35f14da1-e093-47e4-9594-5874f3ac440c-v4-0-config-system-router-certs\") pod \"oauth-openshift-56c748df47-fzfx9\" (UID: \"35f14da1-e093-47e4-9594-5874f3ac440c\") " pod="openshift-authentication/oauth-openshift-56c748df47-fzfx9" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.549330 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/35f14da1-e093-47e4-9594-5874f3ac440c-v4-0-config-system-session\") pod \"oauth-openshift-56c748df47-fzfx9\" (UID: \"35f14da1-e093-47e4-9594-5874f3ac440c\") " pod="openshift-authentication/oauth-openshift-56c748df47-fzfx9" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.549410 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/35f14da1-e093-47e4-9594-5874f3ac440c-v4-0-config-system-service-ca\") pod \"oauth-openshift-56c748df47-fzfx9\" (UID: \"35f14da1-e093-47e4-9594-5874f3ac440c\") " pod="openshift-authentication/oauth-openshift-56c748df47-fzfx9" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.549474 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/35f14da1-e093-47e4-9594-5874f3ac440c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-56c748df47-fzfx9\" (UID: \"35f14da1-e093-47e4-9594-5874f3ac440c\") " pod="openshift-authentication/oauth-openshift-56c748df47-fzfx9" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.549570 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kts7x\" (UniqueName: \"kubernetes.io/projected/35f14da1-e093-47e4-9594-5874f3ac440c-kube-api-access-kts7x\") pod \"oauth-openshift-56c748df47-fzfx9\" (UID: \"35f14da1-e093-47e4-9594-5874f3ac440c\") " pod="openshift-authentication/oauth-openshift-56c748df47-fzfx9" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.549886 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/35f14da1-e093-47e4-9594-5874f3ac440c-audit-policies\") pod \"oauth-openshift-56c748df47-fzfx9\" (UID: \"35f14da1-e093-47e4-9594-5874f3ac440c\") " pod="openshift-authentication/oauth-openshift-56c748df47-fzfx9" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.549354 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/35f14da1-e093-47e4-9594-5874f3ac440c-audit-dir\") pod \"oauth-openshift-56c748df47-fzfx9\" (UID: \"35f14da1-e093-47e4-9594-5874f3ac440c\") " pod="openshift-authentication/oauth-openshift-56c748df47-fzfx9" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.549273 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/35f14da1-e093-47e4-9594-5874f3ac440c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-56c748df47-fzfx9\" (UID: \"35f14da1-e093-47e4-9594-5874f3ac440c\") " pod="openshift-authentication/oauth-openshift-56c748df47-fzfx9" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.550716 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/35f14da1-e093-47e4-9594-5874f3ac440c-v4-0-config-system-service-ca\") pod \"oauth-openshift-56c748df47-fzfx9\" (UID: \"35f14da1-e093-47e4-9594-5874f3ac440c\") " pod="openshift-authentication/oauth-openshift-56c748df47-fzfx9" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.555323 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/35f14da1-e093-47e4-9594-5874f3ac440c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-56c748df47-fzfx9\" (UID: \"35f14da1-e093-47e4-9594-5874f3ac440c\") " pod="openshift-authentication/oauth-openshift-56c748df47-fzfx9" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.555840 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/35f14da1-e093-47e4-9594-5874f3ac440c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-56c748df47-fzfx9\" (UID: \"35f14da1-e093-47e4-9594-5874f3ac440c\") " pod="openshift-authentication/oauth-openshift-56c748df47-fzfx9" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.556001 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35f14da1-e093-47e4-9594-5874f3ac440c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-56c748df47-fzfx9\" (UID: \"35f14da1-e093-47e4-9594-5874f3ac440c\") " pod="openshift-authentication/oauth-openshift-56c748df47-fzfx9" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.556351 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/35f14da1-e093-47e4-9594-5874f3ac440c-v4-0-config-user-template-error\") pod \"oauth-openshift-56c748df47-fzfx9\" (UID: \"35f14da1-e093-47e4-9594-5874f3ac440c\") " pod="openshift-authentication/oauth-openshift-56c748df47-fzfx9" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.556769 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/35f14da1-e093-47e4-9594-5874f3ac440c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-56c748df47-fzfx9\" (UID: \"35f14da1-e093-47e4-9594-5874f3ac440c\") " pod="openshift-authentication/oauth-openshift-56c748df47-fzfx9" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.557271 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/35f14da1-e093-47e4-9594-5874f3ac440c-v4-0-config-system-router-certs\") pod \"oauth-openshift-56c748df47-fzfx9\" (UID: \"35f14da1-e093-47e4-9594-5874f3ac440c\") " pod="openshift-authentication/oauth-openshift-56c748df47-fzfx9" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.557459 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/35f14da1-e093-47e4-9594-5874f3ac440c-v4-0-config-system-session\") pod \"oauth-openshift-56c748df47-fzfx9\" (UID: \"35f14da1-e093-47e4-9594-5874f3ac440c\") " pod="openshift-authentication/oauth-openshift-56c748df47-fzfx9" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.558235 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/35f14da1-e093-47e4-9594-5874f3ac440c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-56c748df47-fzfx9\" (UID: \"35f14da1-e093-47e4-9594-5874f3ac440c\") " pod="openshift-authentication/oauth-openshift-56c748df47-fzfx9" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.558808 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/35f14da1-e093-47e4-9594-5874f3ac440c-v4-0-config-user-template-login\") pod \"oauth-openshift-56c748df47-fzfx9\" (UID: \"35f14da1-e093-47e4-9594-5874f3ac440c\") " pod="openshift-authentication/oauth-openshift-56c748df47-fzfx9" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.562985 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kts7x\" (UniqueName: \"kubernetes.io/projected/35f14da1-e093-47e4-9594-5874f3ac440c-kube-api-access-kts7x\") pod \"oauth-openshift-56c748df47-fzfx9\" (UID: \"35f14da1-e093-47e4-9594-5874f3ac440c\") " pod="openshift-authentication/oauth-openshift-56c748df47-fzfx9" Jan 22 09:09:29 crc kubenswrapper[4811]: I0122 09:09:29.687029 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-56c748df47-fzfx9" Jan 22 09:09:30 crc kubenswrapper[4811]: I0122 09:09:30.057200 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-56c748df47-fzfx9"] Jan 22 09:09:30 crc kubenswrapper[4811]: W0122 09:09:30.061866 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35f14da1_e093_47e4_9594_5874f3ac440c.slice/crio-9d60dfaefc470326003389a706124f3e34a9d6ee37c18a971856a8a6cba17c8c WatchSource:0}: Error finding container 9d60dfaefc470326003389a706124f3e34a9d6ee37c18a971856a8a6cba17c8c: Status 404 returned error can't find the container with id 9d60dfaefc470326003389a706124f3e34a9d6ee37c18a971856a8a6cba17c8c Jan 22 09:09:30 crc kubenswrapper[4811]: I0122 09:09:30.814455 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-b2fpc" Jan 22 09:09:30 crc kubenswrapper[4811]: I0122 09:09:30.905289 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-56c748df47-fzfx9" event={"ID":"35f14da1-e093-47e4-9594-5874f3ac440c","Type":"ContainerStarted","Data":"c76fba25f2030b3e4cb8c7f8adc76da8c23d5e43460e64de937fe1f9a469bdc8"} Jan 22 09:09:30 crc kubenswrapper[4811]: I0122 09:09:30.905340 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-56c748df47-fzfx9" event={"ID":"35f14da1-e093-47e4-9594-5874f3ac440c","Type":"ContainerStarted","Data":"9d60dfaefc470326003389a706124f3e34a9d6ee37c18a971856a8a6cba17c8c"} Jan 22 09:09:30 crc kubenswrapper[4811]: I0122 09:09:30.906250 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-56c748df47-fzfx9" Jan 22 09:09:30 crc kubenswrapper[4811]: I0122 09:09:30.914246 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-56c748df47-fzfx9" Jan 22 09:09:30 crc kubenswrapper[4811]: I0122 09:09:30.929231 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-56c748df47-fzfx9" podStartSLOduration=30.929207186 podStartE2EDuration="30.929207186s" podCreationTimestamp="2026-01-22 09:09:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:09:30.923938278 +0000 UTC m=+215.246125401" watchObservedRunningTime="2026-01-22 09:09:30.929207186 +0000 UTC m=+215.251394309" Jan 22 09:09:31 crc kubenswrapper[4811]: I0122 09:09:31.223705 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-b2fpc"] Jan 22 09:09:31 crc kubenswrapper[4811]: I0122 09:09:31.223934 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-b2fpc" podUID="83b7219e-ce69-408d-92ad-7b58cc6d0b7a" containerName="registry-server" containerID="cri-o://49bb649bd0cc8ef4d9efdbfd1bacfcba0e98e844590ed51581a77e0de9734f21" gracePeriod=2 Jan 22 09:09:31 crc kubenswrapper[4811]: I0122 09:09:31.598776 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b2fpc" Jan 22 09:09:31 crc kubenswrapper[4811]: I0122 09:09:31.679888 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83b7219e-ce69-408d-92ad-7b58cc6d0b7a-catalog-content\") pod \"83b7219e-ce69-408d-92ad-7b58cc6d0b7a\" (UID: \"83b7219e-ce69-408d-92ad-7b58cc6d0b7a\") " Jan 22 09:09:31 crc kubenswrapper[4811]: I0122 09:09:31.679984 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83b7219e-ce69-408d-92ad-7b58cc6d0b7a-utilities\") pod \"83b7219e-ce69-408d-92ad-7b58cc6d0b7a\" (UID: \"83b7219e-ce69-408d-92ad-7b58cc6d0b7a\") " Jan 22 09:09:31 crc kubenswrapper[4811]: I0122 09:09:31.680011 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbdwk\" (UniqueName: \"kubernetes.io/projected/83b7219e-ce69-408d-92ad-7b58cc6d0b7a-kube-api-access-jbdwk\") pod \"83b7219e-ce69-408d-92ad-7b58cc6d0b7a\" (UID: \"83b7219e-ce69-408d-92ad-7b58cc6d0b7a\") " Jan 22 09:09:31 crc kubenswrapper[4811]: I0122 09:09:31.680657 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83b7219e-ce69-408d-92ad-7b58cc6d0b7a-utilities" (OuterVolumeSpecName: "utilities") pod "83b7219e-ce69-408d-92ad-7b58cc6d0b7a" (UID: "83b7219e-ce69-408d-92ad-7b58cc6d0b7a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:09:31 crc kubenswrapper[4811]: I0122 09:09:31.690393 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83b7219e-ce69-408d-92ad-7b58cc6d0b7a-kube-api-access-jbdwk" (OuterVolumeSpecName: "kube-api-access-jbdwk") pod "83b7219e-ce69-408d-92ad-7b58cc6d0b7a" (UID: "83b7219e-ce69-408d-92ad-7b58cc6d0b7a"). InnerVolumeSpecName "kube-api-access-jbdwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:09:31 crc kubenswrapper[4811]: I0122 09:09:31.717016 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83b7219e-ce69-408d-92ad-7b58cc6d0b7a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83b7219e-ce69-408d-92ad-7b58cc6d0b7a" (UID: "83b7219e-ce69-408d-92ad-7b58cc6d0b7a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:09:31 crc kubenswrapper[4811]: I0122 09:09:31.781878 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83b7219e-ce69-408d-92ad-7b58cc6d0b7a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:09:31 crc kubenswrapper[4811]: I0122 09:09:31.781914 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83b7219e-ce69-408d-92ad-7b58cc6d0b7a-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:09:31 crc kubenswrapper[4811]: I0122 09:09:31.781924 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbdwk\" (UniqueName: \"kubernetes.io/projected/83b7219e-ce69-408d-92ad-7b58cc6d0b7a-kube-api-access-jbdwk\") on node \"crc\" DevicePath \"\"" Jan 22 09:09:31 crc kubenswrapper[4811]: I0122 09:09:31.913768 4811 generic.go:334] "Generic (PLEG): container finished" podID="83b7219e-ce69-408d-92ad-7b58cc6d0b7a" containerID="49bb649bd0cc8ef4d9efdbfd1bacfcba0e98e844590ed51581a77e0de9734f21" exitCode=0 Jan 22 09:09:31 crc kubenswrapper[4811]: I0122 09:09:31.913818 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2fpc" event={"ID":"83b7219e-ce69-408d-92ad-7b58cc6d0b7a","Type":"ContainerDied","Data":"49bb649bd0cc8ef4d9efdbfd1bacfcba0e98e844590ed51581a77e0de9734f21"} Jan 22 09:09:31 crc kubenswrapper[4811]: I0122 09:09:31.913881 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2fpc" event={"ID":"83b7219e-ce69-408d-92ad-7b58cc6d0b7a","Type":"ContainerDied","Data":"36bf1c8002b997bf04229ee30293db5167d9a96c4e11059124d6827482afc5b1"} Jan 22 09:09:31 crc kubenswrapper[4811]: I0122 09:09:31.913912 4811 scope.go:117] "RemoveContainer" containerID="49bb649bd0cc8ef4d9efdbfd1bacfcba0e98e844590ed51581a77e0de9734f21" Jan 22 09:09:31 crc kubenswrapper[4811]: I0122 09:09:31.913951 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b2fpc" Jan 22 09:09:31 crc kubenswrapper[4811]: I0122 09:09:31.931572 4811 scope.go:117] "RemoveContainer" containerID="31dca2b72ef32a296d67da15a5e4de73093c5c233561360ea29d11b92207c61e" Jan 22 09:09:31 crc kubenswrapper[4811]: I0122 09:09:31.943545 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b2fpc"] Jan 22 09:09:31 crc kubenswrapper[4811]: I0122 09:09:31.946319 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-b2fpc"] Jan 22 09:09:31 crc kubenswrapper[4811]: I0122 09:09:31.968045 4811 scope.go:117] "RemoveContainer" containerID="8908c5e30b5991f658952c022b1b47ce84f924f8c76d071c5b3e52d46c51fdfc" Jan 22 09:09:31 crc kubenswrapper[4811]: I0122 09:09:31.981762 4811 scope.go:117] "RemoveContainer" containerID="49bb649bd0cc8ef4d9efdbfd1bacfcba0e98e844590ed51581a77e0de9734f21" Jan 22 09:09:31 crc kubenswrapper[4811]: E0122 09:09:31.982103 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49bb649bd0cc8ef4d9efdbfd1bacfcba0e98e844590ed51581a77e0de9734f21\": container with ID starting with 49bb649bd0cc8ef4d9efdbfd1bacfcba0e98e844590ed51581a77e0de9734f21 not found: ID does not exist" containerID="49bb649bd0cc8ef4d9efdbfd1bacfcba0e98e844590ed51581a77e0de9734f21" Jan 22 09:09:31 crc kubenswrapper[4811]: I0122 09:09:31.982138 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49bb649bd0cc8ef4d9efdbfd1bacfcba0e98e844590ed51581a77e0de9734f21"} err="failed to get container status \"49bb649bd0cc8ef4d9efdbfd1bacfcba0e98e844590ed51581a77e0de9734f21\": rpc error: code = NotFound desc = could not find container \"49bb649bd0cc8ef4d9efdbfd1bacfcba0e98e844590ed51581a77e0de9734f21\": container with ID starting with 49bb649bd0cc8ef4d9efdbfd1bacfcba0e98e844590ed51581a77e0de9734f21 not found: ID does not exist" Jan 22 09:09:31 crc kubenswrapper[4811]: I0122 09:09:31.982162 4811 scope.go:117] "RemoveContainer" containerID="31dca2b72ef32a296d67da15a5e4de73093c5c233561360ea29d11b92207c61e" Jan 22 09:09:31 crc kubenswrapper[4811]: E0122 09:09:31.982413 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31dca2b72ef32a296d67da15a5e4de73093c5c233561360ea29d11b92207c61e\": container with ID starting with 31dca2b72ef32a296d67da15a5e4de73093c5c233561360ea29d11b92207c61e not found: ID does not exist" containerID="31dca2b72ef32a296d67da15a5e4de73093c5c233561360ea29d11b92207c61e" Jan 22 09:09:31 crc kubenswrapper[4811]: I0122 09:09:31.982506 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31dca2b72ef32a296d67da15a5e4de73093c5c233561360ea29d11b92207c61e"} err="failed to get container status \"31dca2b72ef32a296d67da15a5e4de73093c5c233561360ea29d11b92207c61e\": rpc error: code = NotFound desc = could not find container \"31dca2b72ef32a296d67da15a5e4de73093c5c233561360ea29d11b92207c61e\": container with ID starting with 31dca2b72ef32a296d67da15a5e4de73093c5c233561360ea29d11b92207c61e not found: ID does not exist" Jan 22 09:09:31 crc kubenswrapper[4811]: I0122 09:09:31.982578 4811 scope.go:117] "RemoveContainer" containerID="8908c5e30b5991f658952c022b1b47ce84f924f8c76d071c5b3e52d46c51fdfc" Jan 22 09:09:31 crc kubenswrapper[4811]: E0122 09:09:31.982933 4811 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8908c5e30b5991f658952c022b1b47ce84f924f8c76d071c5b3e52d46c51fdfc\": container with ID starting with 8908c5e30b5991f658952c022b1b47ce84f924f8c76d071c5b3e52d46c51fdfc not found: ID does not exist" containerID="8908c5e30b5991f658952c022b1b47ce84f924f8c76d071c5b3e52d46c51fdfc" Jan 22 09:09:31 crc kubenswrapper[4811]: I0122 09:09:31.982967 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8908c5e30b5991f658952c022b1b47ce84f924f8c76d071c5b3e52d46c51fdfc"} err="failed to get container status \"8908c5e30b5991f658952c022b1b47ce84f924f8c76d071c5b3e52d46c51fdfc\": rpc error: code = NotFound desc = could not find container \"8908c5e30b5991f658952c022b1b47ce84f924f8c76d071c5b3e52d46c51fdfc\": container with ID starting with 8908c5e30b5991f658952c022b1b47ce84f924f8c76d071c5b3e52d46c51fdfc not found: ID does not exist" Jan 22 09:09:31 crc kubenswrapper[4811]: I0122 09:09:31.996583 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83b7219e-ce69-408d-92ad-7b58cc6d0b7a" path="/var/lib/kubelet/pods/83b7219e-ce69-408d-92ad-7b58cc6d0b7a/volumes" Jan 22 09:09:35 crc kubenswrapper[4811]: I0122 09:09:35.501076 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:09:35 crc kubenswrapper[4811]: I0122 09:09:35.501647 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:09:35 crc kubenswrapper[4811]: I0122 09:09:35.501703 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" Jan 22 09:09:35 crc kubenswrapper[4811]: I0122 09:09:35.502371 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bbb067d04c1683ffd06d01aab41bb5988fdca44b8bd16012d35c42795d0d8011"} pod="openshift-machine-config-operator/machine-config-daemon-txvcq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 09:09:35 crc kubenswrapper[4811]: I0122 09:09:35.502428 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" containerID="cri-o://bbb067d04c1683ffd06d01aab41bb5988fdca44b8bd16012d35c42795d0d8011" gracePeriod=600 Jan 22 09:09:35 crc kubenswrapper[4811]: I0122 09:09:35.951824 4811 generic.go:334] "Generic (PLEG): container finished" podID="84068a6b-e189-419b-87f5-f31428f6eafe" containerID="bbb067d04c1683ffd06d01aab41bb5988fdca44b8bd16012d35c42795d0d8011" exitCode=0 Jan 22 09:09:35 crc kubenswrapper[4811]: I0122 09:09:35.951887 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" 
event={"ID":"84068a6b-e189-419b-87f5-f31428f6eafe","Type":"ContainerDied","Data":"bbb067d04c1683ffd06d01aab41bb5988fdca44b8bd16012d35c42795d0d8011"} Jan 22 09:09:35 crc kubenswrapper[4811]: I0122 09:09:35.952145 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" event={"ID":"84068a6b-e189-419b-87f5-f31428f6eafe","Type":"ContainerStarted","Data":"1b2f0e7c21faa08c5ffc1625c27cd1cb01040f89d6aab01c53b541a45ff7e759"} Jan 22 09:09:40 crc kubenswrapper[4811]: I0122 09:09:40.738981 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-68p7c"] Jan 22 09:09:40 crc kubenswrapper[4811]: I0122 09:09:40.739983 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-68p7c" podUID="7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d" containerName="registry-server" containerID="cri-o://6a2cd8a886b9cb0632a1eaec0778b05367ddf7198965b4fa9d8985e7de1a05d5" gracePeriod=30 Jan 22 09:09:40 crc kubenswrapper[4811]: I0122 09:09:40.754065 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l7qd2"] Jan 22 09:09:40 crc kubenswrapper[4811]: I0122 09:09:40.754404 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-l7qd2" podUID="89c58cc2-8741-4c9f-95fa-c73db10026d3" containerName="registry-server" containerID="cri-o://e1e5b6b3124447a40bd159405b42eabe26d0220bd53b1c8a8d8c0e5528339e8a" gracePeriod=30 Jan 22 09:09:40 crc kubenswrapper[4811]: I0122 09:09:40.763005 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zzx6v"] Jan 22 09:09:40 crc kubenswrapper[4811]: I0122 09:09:40.763207 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-zzx6v" podUID="30317a78-afdc-4c04-95b6-d2c8fedfb790" containerName="marketplace-operator" containerID="cri-o://437f918afa8141131585b9e72b8d2b113639d64be20e4eda6db965d16f343f38" gracePeriod=30 Jan 22 09:09:40 crc kubenswrapper[4811]: I0122 09:09:40.771847 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vjcnk"] Jan 22 09:09:40 crc kubenswrapper[4811]: I0122 09:09:40.772020 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vjcnk" podUID="6dcdfa22-db17-42c3-a366-f96b6dd7b27d" containerName="registry-server" containerID="cri-o://56da9657b335d49cad6285cb856da14ac6cc01c96d0e9ecdb73112baf4168d9c" gracePeriod=30 Jan 22 09:09:40 crc kubenswrapper[4811]: I0122 09:09:40.787761 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mv8mw"] Jan 22 09:09:40 crc kubenswrapper[4811]: I0122 09:09:40.788067 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mv8mw" podUID="516e9ebd-6782-459d-99df-6902a5098c4e" containerName="registry-server" containerID="cri-o://3d5449f78c39d103772dd07867211405484605c116e1d07b55e27a5acc81cb90" gracePeriod=30 Jan 22 09:09:40 crc kubenswrapper[4811]: I0122 09:09:40.789347 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cnjv9"] Jan 22 09:09:40 crc kubenswrapper[4811]: E0122 09:09:40.789601 4811 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="83b7219e-ce69-408d-92ad-7b58cc6d0b7a" containerName="extract-content" Jan 22 09:09:40 crc kubenswrapper[4811]: I0122 09:09:40.789619 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="83b7219e-ce69-408d-92ad-7b58cc6d0b7a" containerName="extract-content" Jan 22 09:09:40 crc kubenswrapper[4811]: E0122 09:09:40.790670 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83b7219e-ce69-408d-92ad-7b58cc6d0b7a" containerName="extract-utilities" Jan 22 09:09:40 crc kubenswrapper[4811]: I0122 09:09:40.790679 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="83b7219e-ce69-408d-92ad-7b58cc6d0b7a" containerName="extract-utilities" Jan 22 09:09:40 crc kubenswrapper[4811]: E0122 09:09:40.790692 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83b7219e-ce69-408d-92ad-7b58cc6d0b7a" containerName="registry-server" Jan 22 09:09:40 crc kubenswrapper[4811]: I0122 09:09:40.790699 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="83b7219e-ce69-408d-92ad-7b58cc6d0b7a" containerName="registry-server" Jan 22 09:09:40 crc kubenswrapper[4811]: I0122 09:09:40.790886 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="83b7219e-ce69-408d-92ad-7b58cc6d0b7a" containerName="registry-server" Jan 22 09:09:40 crc kubenswrapper[4811]: I0122 09:09:40.791340 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cnjv9" Jan 22 09:09:40 crc kubenswrapper[4811]: I0122 09:09:40.799986 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cnjv9"] Jan 22 09:09:40 crc kubenswrapper[4811]: I0122 09:09:40.994252 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f62c6396-82e9-4314-912a-42f5265b03bb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cnjv9\" (UID: \"f62c6396-82e9-4314-912a-42f5265b03bb\") " pod="openshift-marketplace/marketplace-operator-79b997595-cnjv9" Jan 22 09:09:40 crc kubenswrapper[4811]: I0122 09:09:40.994336 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f62c6396-82e9-4314-912a-42f5265b03bb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cnjv9\" (UID: \"f62c6396-82e9-4314-912a-42f5265b03bb\") " pod="openshift-marketplace/marketplace-operator-79b997595-cnjv9" Jan 22 09:09:40 crc kubenswrapper[4811]: I0122 09:09:40.994363 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k26mg\" (UniqueName: \"kubernetes.io/projected/f62c6396-82e9-4314-912a-42f5265b03bb-kube-api-access-k26mg\") pod \"marketplace-operator-79b997595-cnjv9\" (UID: \"f62c6396-82e9-4314-912a-42f5265b03bb\") " pod="openshift-marketplace/marketplace-operator-79b997595-cnjv9" Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.005675 4811 generic.go:334] "Generic (PLEG): container finished" podID="516e9ebd-6782-459d-99df-6902a5098c4e" containerID="3d5449f78c39d103772dd07867211405484605c116e1d07b55e27a5acc81cb90" exitCode=0 Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.005759 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mv8mw" 
event={"ID":"516e9ebd-6782-459d-99df-6902a5098c4e","Type":"ContainerDied","Data":"3d5449f78c39d103772dd07867211405484605c116e1d07b55e27a5acc81cb90"} Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.008160 4811 generic.go:334] "Generic (PLEG): container finished" podID="30317a78-afdc-4c04-95b6-d2c8fedfb790" containerID="437f918afa8141131585b9e72b8d2b113639d64be20e4eda6db965d16f343f38" exitCode=0 Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.008212 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zzx6v" event={"ID":"30317a78-afdc-4c04-95b6-d2c8fedfb790","Type":"ContainerDied","Data":"437f918afa8141131585b9e72b8d2b113639d64be20e4eda6db965d16f343f38"} Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.011349 4811 generic.go:334] "Generic (PLEG): container finished" podID="7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d" containerID="6a2cd8a886b9cb0632a1eaec0778b05367ddf7198965b4fa9d8985e7de1a05d5" exitCode=0 Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.011433 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-68p7c" event={"ID":"7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d","Type":"ContainerDied","Data":"6a2cd8a886b9cb0632a1eaec0778b05367ddf7198965b4fa9d8985e7de1a05d5"} Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.014324 4811 generic.go:334] "Generic (PLEG): container finished" podID="89c58cc2-8741-4c9f-95fa-c73db10026d3" containerID="e1e5b6b3124447a40bd159405b42eabe26d0220bd53b1c8a8d8c0e5528339e8a" exitCode=0 Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.014434 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l7qd2" event={"ID":"89c58cc2-8741-4c9f-95fa-c73db10026d3","Type":"ContainerDied","Data":"e1e5b6b3124447a40bd159405b42eabe26d0220bd53b1c8a8d8c0e5528339e8a"} Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.016397 4811 generic.go:334] "Generic (PLEG): container finished" podID="6dcdfa22-db17-42c3-a366-f96b6dd7b27d" containerID="56da9657b335d49cad6285cb856da14ac6cc01c96d0e9ecdb73112baf4168d9c" exitCode=0 Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.016431 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vjcnk" event={"ID":"6dcdfa22-db17-42c3-a366-f96b6dd7b27d","Type":"ContainerDied","Data":"56da9657b335d49cad6285cb856da14ac6cc01c96d0e9ecdb73112baf4168d9c"} Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.076381 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-68p7c" Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.098654 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f62c6396-82e9-4314-912a-42f5265b03bb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cnjv9\" (UID: \"f62c6396-82e9-4314-912a-42f5265b03bb\") " pod="openshift-marketplace/marketplace-operator-79b997595-cnjv9" Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.098693 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k26mg\" (UniqueName: \"kubernetes.io/projected/f62c6396-82e9-4314-912a-42f5265b03bb-kube-api-access-k26mg\") pod \"marketplace-operator-79b997595-cnjv9\" (UID: \"f62c6396-82e9-4314-912a-42f5265b03bb\") " pod="openshift-marketplace/marketplace-operator-79b997595-cnjv9" Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.098813 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f62c6396-82e9-4314-912a-42f5265b03bb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cnjv9\" (UID: \"f62c6396-82e9-4314-912a-42f5265b03bb\") " pod="openshift-marketplace/marketplace-operator-79b997595-cnjv9" Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.100529 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f62c6396-82e9-4314-912a-42f5265b03bb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cnjv9\" (UID: \"f62c6396-82e9-4314-912a-42f5265b03bb\") " pod="openshift-marketplace/marketplace-operator-79b997595-cnjv9" Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.113102 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f62c6396-82e9-4314-912a-42f5265b03bb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cnjv9\" (UID: \"f62c6396-82e9-4314-912a-42f5265b03bb\") " pod="openshift-marketplace/marketplace-operator-79b997595-cnjv9" Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.121271 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k26mg\" (UniqueName: \"kubernetes.io/projected/f62c6396-82e9-4314-912a-42f5265b03bb-kube-api-access-k26mg\") pod \"marketplace-operator-79b997595-cnjv9\" (UID: \"f62c6396-82e9-4314-912a-42f5265b03bb\") " pod="openshift-marketplace/marketplace-operator-79b997595-cnjv9" Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.172862 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l7qd2" Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.182545 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zzx6v" Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.203515 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d-utilities\") pod \"7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d\" (UID: \"7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d\") " Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.203591 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v78kd\" (UniqueName: \"kubernetes.io/projected/7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d-kube-api-access-v78kd\") pod \"7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d\" (UID: \"7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d\") " Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.203684 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d-catalog-content\") pod \"7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d\" (UID: \"7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d\") " Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.215652 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d-utilities" (OuterVolumeSpecName: "utilities") pod "7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d" (UID: "7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.217389 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.247249 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d-kube-api-access-v78kd" (OuterVolumeSpecName: "kube-api-access-v78kd") pod "7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d" (UID: "7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d"). InnerVolumeSpecName "kube-api-access-v78kd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.264216 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mv8mw" Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.269022 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vjcnk" Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.287942 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d" (UID: "7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.318250 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89c58cc2-8741-4c9f-95fa-c73db10026d3-catalog-content\") pod \"89c58cc2-8741-4c9f-95fa-c73db10026d3\" (UID: \"89c58cc2-8741-4c9f-95fa-c73db10026d3\") " Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.318302 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lczdl\" (UniqueName: \"kubernetes.io/projected/89c58cc2-8741-4c9f-95fa-c73db10026d3-kube-api-access-lczdl\") pod \"89c58cc2-8741-4c9f-95fa-c73db10026d3\" (UID: \"89c58cc2-8741-4c9f-95fa-c73db10026d3\") " Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.318330 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cznrp\" (UniqueName: \"kubernetes.io/projected/516e9ebd-6782-459d-99df-6902a5098c4e-kube-api-access-cznrp\") pod \"516e9ebd-6782-459d-99df-6902a5098c4e\" (UID: \"516e9ebd-6782-459d-99df-6902a5098c4e\") " Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.318377 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dcdfa22-db17-42c3-a366-f96b6dd7b27d-catalog-content\") pod \"6dcdfa22-db17-42c3-a366-f96b6dd7b27d\" (UID: \"6dcdfa22-db17-42c3-a366-f96b6dd7b27d\") " Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.318412 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9g4pc\" (UniqueName: \"kubernetes.io/projected/6dcdfa22-db17-42c3-a366-f96b6dd7b27d-kube-api-access-9g4pc\") pod \"6dcdfa22-db17-42c3-a366-f96b6dd7b27d\" (UID: \"6dcdfa22-db17-42c3-a366-f96b6dd7b27d\") " Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.318438 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/30317a78-afdc-4c04-95b6-d2c8fedfb790-marketplace-trusted-ca\") pod \"30317a78-afdc-4c04-95b6-d2c8fedfb790\" (UID: \"30317a78-afdc-4c04-95b6-d2c8fedfb790\") " Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.318459 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dcdfa22-db17-42c3-a366-f96b6dd7b27d-utilities\") pod \"6dcdfa22-db17-42c3-a366-f96b6dd7b27d\" (UID: \"6dcdfa22-db17-42c3-a366-f96b6dd7b27d\") " Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.318482 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/516e9ebd-6782-459d-99df-6902a5098c4e-catalog-content\") pod \"516e9ebd-6782-459d-99df-6902a5098c4e\" (UID: \"516e9ebd-6782-459d-99df-6902a5098c4e\") " Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.318500 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/516e9ebd-6782-459d-99df-6902a5098c4e-utilities\") pod \"516e9ebd-6782-459d-99df-6902a5098c4e\" (UID: \"516e9ebd-6782-459d-99df-6902a5098c4e\") " Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.318542 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/30317a78-afdc-4c04-95b6-d2c8fedfb790-marketplace-operator-metrics\") pod \"30317a78-afdc-4c04-95b6-d2c8fedfb790\" (UID: \"30317a78-afdc-4c04-95b6-d2c8fedfb790\") " Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.318561 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xk7s\" (UniqueName: \"kubernetes.io/projected/30317a78-afdc-4c04-95b6-d2c8fedfb790-kube-api-access-7xk7s\") pod \"30317a78-afdc-4c04-95b6-d2c8fedfb790\" (UID: \"30317a78-afdc-4c04-95b6-d2c8fedfb790\") " Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.318585 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89c58cc2-8741-4c9f-95fa-c73db10026d3-utilities\") pod \"89c58cc2-8741-4c9f-95fa-c73db10026d3\" (UID: \"89c58cc2-8741-4c9f-95fa-c73db10026d3\") " Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.318775 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v78kd\" (UniqueName: \"kubernetes.io/projected/7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d-kube-api-access-v78kd\") on node \"crc\" DevicePath \"\"" Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.318796 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.319469 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89c58cc2-8741-4c9f-95fa-c73db10026d3-utilities" (OuterVolumeSpecName: "utilities") pod "89c58cc2-8741-4c9f-95fa-c73db10026d3" (UID: "89c58cc2-8741-4c9f-95fa-c73db10026d3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.320040 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30317a78-afdc-4c04-95b6-d2c8fedfb790-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "30317a78-afdc-4c04-95b6-d2c8fedfb790" (UID: "30317a78-afdc-4c04-95b6-d2c8fedfb790"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.321643 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/516e9ebd-6782-459d-99df-6902a5098c4e-kube-api-access-cznrp" (OuterVolumeSpecName: "kube-api-access-cznrp") pod "516e9ebd-6782-459d-99df-6902a5098c4e" (UID: "516e9ebd-6782-459d-99df-6902a5098c4e"). InnerVolumeSpecName "kube-api-access-cznrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.321819 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6dcdfa22-db17-42c3-a366-f96b6dd7b27d-utilities" (OuterVolumeSpecName: "utilities") pod "6dcdfa22-db17-42c3-a366-f96b6dd7b27d" (UID: "6dcdfa22-db17-42c3-a366-f96b6dd7b27d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.321940 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/516e9ebd-6782-459d-99df-6902a5098c4e-utilities" (OuterVolumeSpecName: "utilities") pod "516e9ebd-6782-459d-99df-6902a5098c4e" (UID: "516e9ebd-6782-459d-99df-6902a5098c4e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.322476 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30317a78-afdc-4c04-95b6-d2c8fedfb790-kube-api-access-7xk7s" (OuterVolumeSpecName: "kube-api-access-7xk7s") pod "30317a78-afdc-4c04-95b6-d2c8fedfb790" (UID: "30317a78-afdc-4c04-95b6-d2c8fedfb790"). InnerVolumeSpecName "kube-api-access-7xk7s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.324562 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dcdfa22-db17-42c3-a366-f96b6dd7b27d-kube-api-access-9g4pc" (OuterVolumeSpecName: "kube-api-access-9g4pc") pod "6dcdfa22-db17-42c3-a366-f96b6dd7b27d" (UID: "6dcdfa22-db17-42c3-a366-f96b6dd7b27d"). InnerVolumeSpecName "kube-api-access-9g4pc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.328617 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30317a78-afdc-4c04-95b6-d2c8fedfb790-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "30317a78-afdc-4c04-95b6-d2c8fedfb790" (UID: "30317a78-afdc-4c04-95b6-d2c8fedfb790"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.328714 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89c58cc2-8741-4c9f-95fa-c73db10026d3-kube-api-access-lczdl" (OuterVolumeSpecName: "kube-api-access-lczdl") pod "89c58cc2-8741-4c9f-95fa-c73db10026d3" (UID: "89c58cc2-8741-4c9f-95fa-c73db10026d3"). InnerVolumeSpecName "kube-api-access-lczdl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.342277 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cnjv9" Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.345936 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6dcdfa22-db17-42c3-a366-f96b6dd7b27d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6dcdfa22-db17-42c3-a366-f96b6dd7b27d" (UID: "6dcdfa22-db17-42c3-a366-f96b6dd7b27d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.378787 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89c58cc2-8741-4c9f-95fa-c73db10026d3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "89c58cc2-8741-4c9f-95fa-c73db10026d3" (UID: "89c58cc2-8741-4c9f-95fa-c73db10026d3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.429104 4811 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/30317a78-afdc-4c04-95b6-d2c8fedfb790-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.429141 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xk7s\" (UniqueName: \"kubernetes.io/projected/30317a78-afdc-4c04-95b6-d2c8fedfb790-kube-api-access-7xk7s\") on node \"crc\" DevicePath \"\"" Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.429154 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89c58cc2-8741-4c9f-95fa-c73db10026d3-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.429166 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89c58cc2-8741-4c9f-95fa-c73db10026d3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.429175 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lczdl\" (UniqueName: \"kubernetes.io/projected/89c58cc2-8741-4c9f-95fa-c73db10026d3-kube-api-access-lczdl\") on node \"crc\" DevicePath \"\"" Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.429184 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cznrp\" (UniqueName: \"kubernetes.io/projected/516e9ebd-6782-459d-99df-6902a5098c4e-kube-api-access-cznrp\") on node \"crc\" DevicePath \"\"" Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.429193 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dcdfa22-db17-42c3-a366-f96b6dd7b27d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.429210 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9g4pc\" (UniqueName: \"kubernetes.io/projected/6dcdfa22-db17-42c3-a366-f96b6dd7b27d-kube-api-access-9g4pc\") on node \"crc\" DevicePath \"\"" Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.429220 4811 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/30317a78-afdc-4c04-95b6-d2c8fedfb790-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.429230 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dcdfa22-db17-42c3-a366-f96b6dd7b27d-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.429239 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/516e9ebd-6782-459d-99df-6902a5098c4e-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.435878 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/516e9ebd-6782-459d-99df-6902a5098c4e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "516e9ebd-6782-459d-99df-6902a5098c4e" (UID: "516e9ebd-6782-459d-99df-6902a5098c4e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.529937 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/516e9ebd-6782-459d-99df-6902a5098c4e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:09:41 crc kubenswrapper[4811]: I0122 09:09:41.703020 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cnjv9"] Jan 22 09:09:41 crc kubenswrapper[4811]: W0122 09:09:41.707922 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf62c6396_82e9_4314_912a_42f5265b03bb.slice/crio-240ba9936999de94e125848cbaff97cf2128a1142b17e19312af29f4e1c6637b WatchSource:0}: Error finding container 240ba9936999de94e125848cbaff97cf2128a1142b17e19312af29f4e1c6637b: Status 404 returned error can't find the container with id 240ba9936999de94e125848cbaff97cf2128a1142b17e19312af29f4e1c6637b Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.034605 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l7qd2" Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.034593 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l7qd2" event={"ID":"89c58cc2-8741-4c9f-95fa-c73db10026d3","Type":"ContainerDied","Data":"7e6f6e29896ec06ef96ad4813f48636184a16baff07a7883108877da644cc914"} Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.035897 4811 scope.go:117] "RemoveContainer" containerID="e1e5b6b3124447a40bd159405b42eabe26d0220bd53b1c8a8d8c0e5528339e8a" Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.044751 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vjcnk" event={"ID":"6dcdfa22-db17-42c3-a366-f96b6dd7b27d","Type":"ContainerDied","Data":"86d46a6016d455fcc92e2286c93bec599c8d86693b389b92d80381dd69ee9c9d"} Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.044849 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vjcnk" Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.048040 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mv8mw" event={"ID":"516e9ebd-6782-459d-99df-6902a5098c4e","Type":"ContainerDied","Data":"0d9888b6b66485dc6c149295b75ae071d76f70d06f81f6884ac16dd967ed4df7"} Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.048122 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mv8mw" Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.054557 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-68p7c" Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.054578 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-68p7c" event={"ID":"7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d","Type":"ContainerDied","Data":"1fc500ba620a0793258596b0f4f5310f9cc7f3a12a1de7093ef009fd1d1441c1"} Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.056775 4811 scope.go:117] "RemoveContainer" containerID="fae9d29fa8390f035eec8bae566c123ade19d3326515392d5131b6296c858330" Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.058979 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zzx6v" Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.059310 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zzx6v" event={"ID":"30317a78-afdc-4c04-95b6-d2c8fedfb790","Type":"ContainerDied","Data":"6d2f32e2ee5ab2e046fbb742d164b70fcaca0fb853f7f3f4ad8c780a145a242f"} Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.060736 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l7qd2"] Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.063731 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cnjv9" event={"ID":"f62c6396-82e9-4314-912a-42f5265b03bb","Type":"ContainerStarted","Data":"4e8728c8a5e69e4908dac65224ba1bb26b06570f19d81abd8ff082fa97e5bb03"} Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.063758 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cnjv9" event={"ID":"f62c6396-82e9-4314-912a-42f5265b03bb","Type":"ContainerStarted","Data":"240ba9936999de94e125848cbaff97cf2128a1142b17e19312af29f4e1c6637b"} Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.063973 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-cnjv9" Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.068114 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-l7qd2"] Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.075569 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-68p7c"] Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.081284 4811 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-cnjv9 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" start-of-body= Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.081334 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-cnjv9" podUID="f62c6396-82e9-4314-912a-42f5265b03bb" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.083425 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-68p7c"] Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.084965 4811 scope.go:117] "RemoveContainer" 
containerID="a2454da840dd1fef32f320db4ccf5bfb75eab9c691cafa0ebace0e014218b97a" Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.101725 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mv8mw"] Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.103206 4811 scope.go:117] "RemoveContainer" containerID="56da9657b335d49cad6285cb856da14ac6cc01c96d0e9ecdb73112baf4168d9c" Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.109719 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mv8mw"] Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.111877 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vjcnk"] Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.115384 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vjcnk"] Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.127461 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-cnjv9" podStartSLOduration=2.127439812 podStartE2EDuration="2.127439812s" podCreationTimestamp="2026-01-22 09:09:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:09:42.127121322 +0000 UTC m=+226.449308436" watchObservedRunningTime="2026-01-22 09:09:42.127439812 +0000 UTC m=+226.449626935" Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.141620 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zzx6v"] Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.141744 4811 scope.go:117] "RemoveContainer" containerID="1226b756d34d1123ced56341c33600e0793fb2d1cb177975eeed804e8e9a86b0" Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.145249 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zzx6v"] Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.163717 4811 scope.go:117] "RemoveContainer" containerID="cbbff8bf691befc1df4db594182f6bc094b32ee5a6245914086552e773a3c89b" Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.189749 4811 scope.go:117] "RemoveContainer" containerID="3d5449f78c39d103772dd07867211405484605c116e1d07b55e27a5acc81cb90" Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.205323 4811 scope.go:117] "RemoveContainer" containerID="b20405eb2b8cefb35578dad5607f6f2fdc5710c15e0e20451660493e5a215037" Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.225517 4811 scope.go:117] "RemoveContainer" containerID="ca710074422d9e8ddf779c356eae191788dc59693cb8e255d8ff055e0613336b" Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.241659 4811 scope.go:117] "RemoveContainer" containerID="6a2cd8a886b9cb0632a1eaec0778b05367ddf7198965b4fa9d8985e7de1a05d5" Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.253094 4811 scope.go:117] "RemoveContainer" containerID="abb8729caef7c1226ec63fb98b164d533c45977491b0fd47c35ed135e5b68258" Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.263772 4811 scope.go:117] "RemoveContainer" containerID="984de5c265d50e22d4510c5c7070710a10a3f0811cf88961e837b67744d08630" Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.272753 4811 scope.go:117] "RemoveContainer" containerID="437f918afa8141131585b9e72b8d2b113639d64be20e4eda6db965d16f343f38" Jan 22 09:09:42 crc kubenswrapper[4811]: 
I0122 09:09:42.954507 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pbqkb"] Jan 22 09:09:42 crc kubenswrapper[4811]: E0122 09:09:42.954775 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dcdfa22-db17-42c3-a366-f96b6dd7b27d" containerName="extract-content" Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.954795 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dcdfa22-db17-42c3-a366-f96b6dd7b27d" containerName="extract-content" Jan 22 09:09:42 crc kubenswrapper[4811]: E0122 09:09:42.954805 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89c58cc2-8741-4c9f-95fa-c73db10026d3" containerName="extract-content" Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.954812 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="89c58cc2-8741-4c9f-95fa-c73db10026d3" containerName="extract-content" Jan 22 09:09:42 crc kubenswrapper[4811]: E0122 09:09:42.954820 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89c58cc2-8741-4c9f-95fa-c73db10026d3" containerName="registry-server" Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.954826 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="89c58cc2-8741-4c9f-95fa-c73db10026d3" containerName="registry-server" Jan 22 09:09:42 crc kubenswrapper[4811]: E0122 09:09:42.954833 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dcdfa22-db17-42c3-a366-f96b6dd7b27d" containerName="registry-server" Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.954839 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dcdfa22-db17-42c3-a366-f96b6dd7b27d" containerName="registry-server" Jan 22 09:09:42 crc kubenswrapper[4811]: E0122 09:09:42.954850 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d" containerName="extract-utilities" Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.954855 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d" containerName="extract-utilities" Jan 22 09:09:42 crc kubenswrapper[4811]: E0122 09:09:42.954866 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="516e9ebd-6782-459d-99df-6902a5098c4e" containerName="registry-server" Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.954872 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="516e9ebd-6782-459d-99df-6902a5098c4e" containerName="registry-server" Jan 22 09:09:42 crc kubenswrapper[4811]: E0122 09:09:42.954880 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="516e9ebd-6782-459d-99df-6902a5098c4e" containerName="extract-content" Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.954887 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="516e9ebd-6782-459d-99df-6902a5098c4e" containerName="extract-content" Jan 22 09:09:42 crc kubenswrapper[4811]: E0122 09:09:42.954896 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d" containerName="extract-content" Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.954903 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d" containerName="extract-content" Jan 22 09:09:42 crc kubenswrapper[4811]: E0122 09:09:42.954910 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dcdfa22-db17-42c3-a366-f96b6dd7b27d" containerName="extract-utilities" Jan 22 09:09:42 crc 
kubenswrapper[4811]: I0122 09:09:42.954915 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dcdfa22-db17-42c3-a366-f96b6dd7b27d" containerName="extract-utilities" Jan 22 09:09:42 crc kubenswrapper[4811]: E0122 09:09:42.954921 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30317a78-afdc-4c04-95b6-d2c8fedfb790" containerName="marketplace-operator" Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.954927 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="30317a78-afdc-4c04-95b6-d2c8fedfb790" containerName="marketplace-operator" Jan 22 09:09:42 crc kubenswrapper[4811]: E0122 09:09:42.954933 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="516e9ebd-6782-459d-99df-6902a5098c4e" containerName="extract-utilities" Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.954939 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="516e9ebd-6782-459d-99df-6902a5098c4e" containerName="extract-utilities" Jan 22 09:09:42 crc kubenswrapper[4811]: E0122 09:09:42.954949 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89c58cc2-8741-4c9f-95fa-c73db10026d3" containerName="extract-utilities" Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.954954 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="89c58cc2-8741-4c9f-95fa-c73db10026d3" containerName="extract-utilities" Jan 22 09:09:42 crc kubenswrapper[4811]: E0122 09:09:42.954961 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d" containerName="registry-server" Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.954966 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d" containerName="registry-server" Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.955047 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dcdfa22-db17-42c3-a366-f96b6dd7b27d" containerName="registry-server" Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.955062 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="516e9ebd-6782-459d-99df-6902a5098c4e" containerName="registry-server" Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.955068 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="30317a78-afdc-4c04-95b6-d2c8fedfb790" containerName="marketplace-operator" Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.955077 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="89c58cc2-8741-4c9f-95fa-c73db10026d3" containerName="registry-server" Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.955082 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d" containerName="registry-server" Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.955782 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pbqkb" Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.957786 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 22 09:09:42 crc kubenswrapper[4811]: I0122 09:09:42.970616 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pbqkb"] Jan 22 09:09:43 crc kubenswrapper[4811]: I0122 09:09:43.080072 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-cnjv9" Jan 22 09:09:43 crc kubenswrapper[4811]: I0122 09:09:43.155043 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aedb9efe-c04f-46f9-9b3c-c231b81440e7-utilities\") pod \"redhat-marketplace-pbqkb\" (UID: \"aedb9efe-c04f-46f9-9b3c-c231b81440e7\") " pod="openshift-marketplace/redhat-marketplace-pbqkb" Jan 22 09:09:43 crc kubenswrapper[4811]: I0122 09:09:43.155094 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aedb9efe-c04f-46f9-9b3c-c231b81440e7-catalog-content\") pod \"redhat-marketplace-pbqkb\" (UID: \"aedb9efe-c04f-46f9-9b3c-c231b81440e7\") " pod="openshift-marketplace/redhat-marketplace-pbqkb" Jan 22 09:09:43 crc kubenswrapper[4811]: I0122 09:09:43.155140 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sztqv\" (UniqueName: \"kubernetes.io/projected/aedb9efe-c04f-46f9-9b3c-c231b81440e7-kube-api-access-sztqv\") pod \"redhat-marketplace-pbqkb\" (UID: \"aedb9efe-c04f-46f9-9b3c-c231b81440e7\") " pod="openshift-marketplace/redhat-marketplace-pbqkb" Jan 22 09:09:43 crc kubenswrapper[4811]: I0122 09:09:43.159552 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wct2f"] Jan 22 09:09:43 crc kubenswrapper[4811]: I0122 09:09:43.160880 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wct2f" Jan 22 09:09:43 crc kubenswrapper[4811]: I0122 09:09:43.165381 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 22 09:09:43 crc kubenswrapper[4811]: I0122 09:09:43.173429 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wct2f"] Jan 22 09:09:43 crc kubenswrapper[4811]: I0122 09:09:43.256007 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aedb9efe-c04f-46f9-9b3c-c231b81440e7-utilities\") pod \"redhat-marketplace-pbqkb\" (UID: \"aedb9efe-c04f-46f9-9b3c-c231b81440e7\") " pod="openshift-marketplace/redhat-marketplace-pbqkb" Jan 22 09:09:43 crc kubenswrapper[4811]: I0122 09:09:43.256250 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aedb9efe-c04f-46f9-9b3c-c231b81440e7-catalog-content\") pod \"redhat-marketplace-pbqkb\" (UID: \"aedb9efe-c04f-46f9-9b3c-c231b81440e7\") " pod="openshift-marketplace/redhat-marketplace-pbqkb" Jan 22 09:09:43 crc kubenswrapper[4811]: I0122 09:09:43.256366 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aedb9efe-c04f-46f9-9b3c-c231b81440e7-utilities\") pod \"redhat-marketplace-pbqkb\" (UID: \"aedb9efe-c04f-46f9-9b3c-c231b81440e7\") " pod="openshift-marketplace/redhat-marketplace-pbqkb" Jan 22 09:09:43 crc kubenswrapper[4811]: I0122 09:09:43.256479 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sztqv\" (UniqueName: \"kubernetes.io/projected/aedb9efe-c04f-46f9-9b3c-c231b81440e7-kube-api-access-sztqv\") pod \"redhat-marketplace-pbqkb\" (UID: \"aedb9efe-c04f-46f9-9b3c-c231b81440e7\") " pod="openshift-marketplace/redhat-marketplace-pbqkb" Jan 22 09:09:43 crc kubenswrapper[4811]: I0122 09:09:43.256982 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aedb9efe-c04f-46f9-9b3c-c231b81440e7-catalog-content\") pod \"redhat-marketplace-pbqkb\" (UID: \"aedb9efe-c04f-46f9-9b3c-c231b81440e7\") " pod="openshift-marketplace/redhat-marketplace-pbqkb" Jan 22 09:09:43 crc kubenswrapper[4811]: I0122 09:09:43.276645 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sztqv\" (UniqueName: \"kubernetes.io/projected/aedb9efe-c04f-46f9-9b3c-c231b81440e7-kube-api-access-sztqv\") pod \"redhat-marketplace-pbqkb\" (UID: \"aedb9efe-c04f-46f9-9b3c-c231b81440e7\") " pod="openshift-marketplace/redhat-marketplace-pbqkb" Jan 22 09:09:43 crc kubenswrapper[4811]: I0122 09:09:43.358133 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d36eba9-846c-4d4e-8623-513ada4d04d7-utilities\") pod \"certified-operators-wct2f\" (UID: \"6d36eba9-846c-4d4e-8623-513ada4d04d7\") " pod="openshift-marketplace/certified-operators-wct2f" Jan 22 09:09:43 crc kubenswrapper[4811]: I0122 09:09:43.358227 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg4qd\" (UniqueName: \"kubernetes.io/projected/6d36eba9-846c-4d4e-8623-513ada4d04d7-kube-api-access-bg4qd\") pod \"certified-operators-wct2f\" (UID: 
\"6d36eba9-846c-4d4e-8623-513ada4d04d7\") " pod="openshift-marketplace/certified-operators-wct2f" Jan 22 09:09:43 crc kubenswrapper[4811]: I0122 09:09:43.358257 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d36eba9-846c-4d4e-8623-513ada4d04d7-catalog-content\") pod \"certified-operators-wct2f\" (UID: \"6d36eba9-846c-4d4e-8623-513ada4d04d7\") " pod="openshift-marketplace/certified-operators-wct2f" Jan 22 09:09:43 crc kubenswrapper[4811]: I0122 09:09:43.459209 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bg4qd\" (UniqueName: \"kubernetes.io/projected/6d36eba9-846c-4d4e-8623-513ada4d04d7-kube-api-access-bg4qd\") pod \"certified-operators-wct2f\" (UID: \"6d36eba9-846c-4d4e-8623-513ada4d04d7\") " pod="openshift-marketplace/certified-operators-wct2f" Jan 22 09:09:43 crc kubenswrapper[4811]: I0122 09:09:43.459271 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d36eba9-846c-4d4e-8623-513ada4d04d7-catalog-content\") pod \"certified-operators-wct2f\" (UID: \"6d36eba9-846c-4d4e-8623-513ada4d04d7\") " pod="openshift-marketplace/certified-operators-wct2f" Jan 22 09:09:43 crc kubenswrapper[4811]: I0122 09:09:43.459316 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d36eba9-846c-4d4e-8623-513ada4d04d7-utilities\") pod \"certified-operators-wct2f\" (UID: \"6d36eba9-846c-4d4e-8623-513ada4d04d7\") " pod="openshift-marketplace/certified-operators-wct2f" Jan 22 09:09:43 crc kubenswrapper[4811]: I0122 09:09:43.459898 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d36eba9-846c-4d4e-8623-513ada4d04d7-utilities\") pod \"certified-operators-wct2f\" (UID: \"6d36eba9-846c-4d4e-8623-513ada4d04d7\") " pod="openshift-marketplace/certified-operators-wct2f" Jan 22 09:09:43 crc kubenswrapper[4811]: I0122 09:09:43.459934 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d36eba9-846c-4d4e-8623-513ada4d04d7-catalog-content\") pod \"certified-operators-wct2f\" (UID: \"6d36eba9-846c-4d4e-8623-513ada4d04d7\") " pod="openshift-marketplace/certified-operators-wct2f" Jan 22 09:09:43 crc kubenswrapper[4811]: I0122 09:09:43.473600 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bg4qd\" (UniqueName: \"kubernetes.io/projected/6d36eba9-846c-4d4e-8623-513ada4d04d7-kube-api-access-bg4qd\") pod \"certified-operators-wct2f\" (UID: \"6d36eba9-846c-4d4e-8623-513ada4d04d7\") " pod="openshift-marketplace/certified-operators-wct2f" Jan 22 09:09:43 crc kubenswrapper[4811]: I0122 09:09:43.484492 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wct2f" Jan 22 09:09:43 crc kubenswrapper[4811]: I0122 09:09:43.571928 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pbqkb" Jan 22 09:09:43 crc kubenswrapper[4811]: I0122 09:09:43.846028 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wct2f"] Jan 22 09:09:43 crc kubenswrapper[4811]: I0122 09:09:43.940477 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pbqkb"] Jan 22 09:09:44 crc kubenswrapper[4811]: I0122 09:09:43.999792 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30317a78-afdc-4c04-95b6-d2c8fedfb790" path="/var/lib/kubelet/pods/30317a78-afdc-4c04-95b6-d2c8fedfb790/volumes" Jan 22 09:09:44 crc kubenswrapper[4811]: I0122 09:09:44.000560 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="516e9ebd-6782-459d-99df-6902a5098c4e" path="/var/lib/kubelet/pods/516e9ebd-6782-459d-99df-6902a5098c4e/volumes" Jan 22 09:09:44 crc kubenswrapper[4811]: I0122 09:09:44.001198 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dcdfa22-db17-42c3-a366-f96b6dd7b27d" path="/var/lib/kubelet/pods/6dcdfa22-db17-42c3-a366-f96b6dd7b27d/volumes" Jan 22 09:09:44 crc kubenswrapper[4811]: I0122 09:09:44.002285 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d" path="/var/lib/kubelet/pods/7b8f4ccb-0c41-4bc0-8cf6-bb4508afbb4d/volumes" Jan 22 09:09:44 crc kubenswrapper[4811]: I0122 09:09:44.002906 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89c58cc2-8741-4c9f-95fa-c73db10026d3" path="/var/lib/kubelet/pods/89c58cc2-8741-4c9f-95fa-c73db10026d3/volumes" Jan 22 09:09:44 crc kubenswrapper[4811]: I0122 09:09:44.084920 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pbqkb" event={"ID":"aedb9efe-c04f-46f9-9b3c-c231b81440e7","Type":"ContainerStarted","Data":"0222eeb5f148022cd6ccdc416f6bcde6dff30740a320ae99b9d6b045d15509ea"} Jan 22 09:09:44 crc kubenswrapper[4811]: I0122 09:09:44.087147 4811 generic.go:334] "Generic (PLEG): container finished" podID="6d36eba9-846c-4d4e-8623-513ada4d04d7" containerID="58293b9f5d817867de03dcee540bfb8240a0357fb782ef3e86ff9fc089ea39ad" exitCode=0 Jan 22 09:09:44 crc kubenswrapper[4811]: I0122 09:09:44.087215 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wct2f" event={"ID":"6d36eba9-846c-4d4e-8623-513ada4d04d7","Type":"ContainerDied","Data":"58293b9f5d817867de03dcee540bfb8240a0357fb782ef3e86ff9fc089ea39ad"} Jan 22 09:09:44 crc kubenswrapper[4811]: I0122 09:09:44.087293 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wct2f" event={"ID":"6d36eba9-846c-4d4e-8623-513ada4d04d7","Type":"ContainerStarted","Data":"4893534545c2c32079de3354d902fee3a3222fe348699234b938078100c061fd"} Jan 22 09:09:45 crc kubenswrapper[4811]: I0122 09:09:45.106742 4811 generic.go:334] "Generic (PLEG): container finished" podID="aedb9efe-c04f-46f9-9b3c-c231b81440e7" containerID="2733bf071b50c773371a40550fe1c97eaa66983890f757c01696570d5522ba1a" exitCode=0 Jan 22 09:09:45 crc kubenswrapper[4811]: I0122 09:09:45.106816 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pbqkb" event={"ID":"aedb9efe-c04f-46f9-9b3c-c231b81440e7","Type":"ContainerDied","Data":"2733bf071b50c773371a40550fe1c97eaa66983890f757c01696570d5522ba1a"} Jan 22 09:09:45 crc kubenswrapper[4811]: I0122 09:09:45.355274 4811 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xn9pt"] Jan 22 09:09:45 crc kubenswrapper[4811]: I0122 09:09:45.356394 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xn9pt" Jan 22 09:09:45 crc kubenswrapper[4811]: I0122 09:09:45.358422 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 22 09:09:45 crc kubenswrapper[4811]: I0122 09:09:45.366767 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xn9pt"] Jan 22 09:09:45 crc kubenswrapper[4811]: I0122 09:09:45.389504 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73a05511-7e27-41cd-9da6-e9277550936d-utilities\") pod \"community-operators-xn9pt\" (UID: \"73a05511-7e27-41cd-9da6-e9277550936d\") " pod="openshift-marketplace/community-operators-xn9pt" Jan 22 09:09:45 crc kubenswrapper[4811]: I0122 09:09:45.389597 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73a05511-7e27-41cd-9da6-e9277550936d-catalog-content\") pod \"community-operators-xn9pt\" (UID: \"73a05511-7e27-41cd-9da6-e9277550936d\") " pod="openshift-marketplace/community-operators-xn9pt" Jan 22 09:09:45 crc kubenswrapper[4811]: I0122 09:09:45.389662 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv85w\" (UniqueName: \"kubernetes.io/projected/73a05511-7e27-41cd-9da6-e9277550936d-kube-api-access-lv85w\") pod \"community-operators-xn9pt\" (UID: \"73a05511-7e27-41cd-9da6-e9277550936d\") " pod="openshift-marketplace/community-operators-xn9pt" Jan 22 09:09:45 crc kubenswrapper[4811]: I0122 09:09:45.490185 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73a05511-7e27-41cd-9da6-e9277550936d-catalog-content\") pod \"community-operators-xn9pt\" (UID: \"73a05511-7e27-41cd-9da6-e9277550936d\") " pod="openshift-marketplace/community-operators-xn9pt" Jan 22 09:09:45 crc kubenswrapper[4811]: I0122 09:09:45.490240 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lv85w\" (UniqueName: \"kubernetes.io/projected/73a05511-7e27-41cd-9da6-e9277550936d-kube-api-access-lv85w\") pod \"community-operators-xn9pt\" (UID: \"73a05511-7e27-41cd-9da6-e9277550936d\") " pod="openshift-marketplace/community-operators-xn9pt" Jan 22 09:09:45 crc kubenswrapper[4811]: I0122 09:09:45.490313 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73a05511-7e27-41cd-9da6-e9277550936d-utilities\") pod \"community-operators-xn9pt\" (UID: \"73a05511-7e27-41cd-9da6-e9277550936d\") " pod="openshift-marketplace/community-operators-xn9pt" Jan 22 09:09:45 crc kubenswrapper[4811]: I0122 09:09:45.490731 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73a05511-7e27-41cd-9da6-e9277550936d-utilities\") pod \"community-operators-xn9pt\" (UID: \"73a05511-7e27-41cd-9da6-e9277550936d\") " pod="openshift-marketplace/community-operators-xn9pt" Jan 22 09:09:45 crc kubenswrapper[4811]: I0122 09:09:45.490798 4811 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73a05511-7e27-41cd-9da6-e9277550936d-catalog-content\") pod \"community-operators-xn9pt\" (UID: \"73a05511-7e27-41cd-9da6-e9277550936d\") " pod="openshift-marketplace/community-operators-xn9pt" Jan 22 09:09:45 crc kubenswrapper[4811]: I0122 09:09:45.509349 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lv85w\" (UniqueName: \"kubernetes.io/projected/73a05511-7e27-41cd-9da6-e9277550936d-kube-api-access-lv85w\") pod \"community-operators-xn9pt\" (UID: \"73a05511-7e27-41cd-9da6-e9277550936d\") " pod="openshift-marketplace/community-operators-xn9pt" Jan 22 09:09:45 crc kubenswrapper[4811]: I0122 09:09:45.551069 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-65jbw"] Jan 22 09:09:45 crc kubenswrapper[4811]: I0122 09:09:45.552215 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-65jbw" Jan 22 09:09:45 crc kubenswrapper[4811]: I0122 09:09:45.558512 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 22 09:09:45 crc kubenswrapper[4811]: I0122 09:09:45.562379 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-65jbw"] Jan 22 09:09:45 crc kubenswrapper[4811]: I0122 09:09:45.671827 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xn9pt" Jan 22 09:09:45 crc kubenswrapper[4811]: I0122 09:09:45.693256 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6wjh\" (UniqueName: \"kubernetes.io/projected/d1f41cc2-bb4a-415e-80a6-8ae31b4c354f-kube-api-access-g6wjh\") pod \"redhat-operators-65jbw\" (UID: \"d1f41cc2-bb4a-415e-80a6-8ae31b4c354f\") " pod="openshift-marketplace/redhat-operators-65jbw" Jan 22 09:09:45 crc kubenswrapper[4811]: I0122 09:09:45.693299 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1f41cc2-bb4a-415e-80a6-8ae31b4c354f-catalog-content\") pod \"redhat-operators-65jbw\" (UID: \"d1f41cc2-bb4a-415e-80a6-8ae31b4c354f\") " pod="openshift-marketplace/redhat-operators-65jbw" Jan 22 09:09:45 crc kubenswrapper[4811]: I0122 09:09:45.693398 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1f41cc2-bb4a-415e-80a6-8ae31b4c354f-utilities\") pod \"redhat-operators-65jbw\" (UID: \"d1f41cc2-bb4a-415e-80a6-8ae31b4c354f\") " pod="openshift-marketplace/redhat-operators-65jbw" Jan 22 09:09:45 crc kubenswrapper[4811]: I0122 09:09:45.794833 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1f41cc2-bb4a-415e-80a6-8ae31b4c354f-utilities\") pod \"redhat-operators-65jbw\" (UID: \"d1f41cc2-bb4a-415e-80a6-8ae31b4c354f\") " pod="openshift-marketplace/redhat-operators-65jbw" Jan 22 09:09:45 crc kubenswrapper[4811]: I0122 09:09:45.795230 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6wjh\" (UniqueName: \"kubernetes.io/projected/d1f41cc2-bb4a-415e-80a6-8ae31b4c354f-kube-api-access-g6wjh\") pod \"redhat-operators-65jbw\" (UID: 
\"d1f41cc2-bb4a-415e-80a6-8ae31b4c354f\") " pod="openshift-marketplace/redhat-operators-65jbw" Jan 22 09:09:45 crc kubenswrapper[4811]: I0122 09:09:45.795257 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1f41cc2-bb4a-415e-80a6-8ae31b4c354f-catalog-content\") pod \"redhat-operators-65jbw\" (UID: \"d1f41cc2-bb4a-415e-80a6-8ae31b4c354f\") " pod="openshift-marketplace/redhat-operators-65jbw" Jan 22 09:09:45 crc kubenswrapper[4811]: I0122 09:09:45.798912 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1f41cc2-bb4a-415e-80a6-8ae31b4c354f-catalog-content\") pod \"redhat-operators-65jbw\" (UID: \"d1f41cc2-bb4a-415e-80a6-8ae31b4c354f\") " pod="openshift-marketplace/redhat-operators-65jbw" Jan 22 09:09:45 crc kubenswrapper[4811]: I0122 09:09:45.798936 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1f41cc2-bb4a-415e-80a6-8ae31b4c354f-utilities\") pod \"redhat-operators-65jbw\" (UID: \"d1f41cc2-bb4a-415e-80a6-8ae31b4c354f\") " pod="openshift-marketplace/redhat-operators-65jbw" Jan 22 09:09:45 crc kubenswrapper[4811]: I0122 09:09:45.818300 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6wjh\" (UniqueName: \"kubernetes.io/projected/d1f41cc2-bb4a-415e-80a6-8ae31b4c354f-kube-api-access-g6wjh\") pod \"redhat-operators-65jbw\" (UID: \"d1f41cc2-bb4a-415e-80a6-8ae31b4c354f\") " pod="openshift-marketplace/redhat-operators-65jbw" Jan 22 09:09:45 crc kubenswrapper[4811]: I0122 09:09:45.900929 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-65jbw" Jan 22 09:09:46 crc kubenswrapper[4811]: I0122 09:09:46.080586 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xn9pt"] Jan 22 09:09:46 crc kubenswrapper[4811]: W0122 09:09:46.084295 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73a05511_7e27_41cd_9da6_e9277550936d.slice/crio-94de09f5260c318956035b4f08caaa8690d346001c01fb54e638a1f7abfaa83e WatchSource:0}: Error finding container 94de09f5260c318956035b4f08caaa8690d346001c01fb54e638a1f7abfaa83e: Status 404 returned error can't find the container with id 94de09f5260c318956035b4f08caaa8690d346001c01fb54e638a1f7abfaa83e Jan 22 09:09:46 crc kubenswrapper[4811]: I0122 09:09:46.121446 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xn9pt" event={"ID":"73a05511-7e27-41cd-9da6-e9277550936d","Type":"ContainerStarted","Data":"94de09f5260c318956035b4f08caaa8690d346001c01fb54e638a1f7abfaa83e"} Jan 22 09:09:46 crc kubenswrapper[4811]: I0122 09:09:46.124641 4811 generic.go:334] "Generic (PLEG): container finished" podID="6d36eba9-846c-4d4e-8623-513ada4d04d7" containerID="245d10dcad4a4e0a49a72497217e7fad02f73e67be87ede65f40513cf88c92a1" exitCode=0 Jan 22 09:09:46 crc kubenswrapper[4811]: I0122 09:09:46.124690 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wct2f" event={"ID":"6d36eba9-846c-4d4e-8623-513ada4d04d7","Type":"ContainerDied","Data":"245d10dcad4a4e0a49a72497217e7fad02f73e67be87ede65f40513cf88c92a1"} Jan 22 09:09:46 crc kubenswrapper[4811]: I0122 09:09:46.288012 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-operators-65jbw"] Jan 22 09:09:47 crc kubenswrapper[4811]: I0122 09:09:47.133500 4811 generic.go:334] "Generic (PLEG): container finished" podID="d1f41cc2-bb4a-415e-80a6-8ae31b4c354f" containerID="961cc3b19ef75a1b69aca3c1482376394f87536688049dcbd2bb7352a89f7971" exitCode=0 Jan 22 09:09:47 crc kubenswrapper[4811]: I0122 09:09:47.133604 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65jbw" event={"ID":"d1f41cc2-bb4a-415e-80a6-8ae31b4c354f","Type":"ContainerDied","Data":"961cc3b19ef75a1b69aca3c1482376394f87536688049dcbd2bb7352a89f7971"} Jan 22 09:09:47 crc kubenswrapper[4811]: I0122 09:09:47.133889 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65jbw" event={"ID":"d1f41cc2-bb4a-415e-80a6-8ae31b4c354f","Type":"ContainerStarted","Data":"a3f5b08b8159184a66488e71c8c185df564a32dabe97aa4a5a967de3184b9a43"} Jan 22 09:09:47 crc kubenswrapper[4811]: I0122 09:09:47.136591 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wct2f" event={"ID":"6d36eba9-846c-4d4e-8623-513ada4d04d7","Type":"ContainerStarted","Data":"d579490859c07558547011ef211a3a7d9672bb81380d8c9f5694fc149c75ea3e"} Jan 22 09:09:47 crc kubenswrapper[4811]: I0122 09:09:47.139999 4811 generic.go:334] "Generic (PLEG): container finished" podID="73a05511-7e27-41cd-9da6-e9277550936d" containerID="c642d94e8393ac9f343c4a445329e54fb585c12bbda1a1bc061e0d1ad06988a9" exitCode=0 Jan 22 09:09:47 crc kubenswrapper[4811]: I0122 09:09:47.140113 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xn9pt" event={"ID":"73a05511-7e27-41cd-9da6-e9277550936d","Type":"ContainerDied","Data":"c642d94e8393ac9f343c4a445329e54fb585c12bbda1a1bc061e0d1ad06988a9"} Jan 22 09:09:47 crc kubenswrapper[4811]: I0122 09:09:47.146173 4811 generic.go:334] "Generic (PLEG): container finished" podID="aedb9efe-c04f-46f9-9b3c-c231b81440e7" containerID="e87a1a9430b34fc6584be507fdbae79ab9b462053c9664bfe16d38c63ba66a01" exitCode=0 Jan 22 09:09:47 crc kubenswrapper[4811]: I0122 09:09:47.146228 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pbqkb" event={"ID":"aedb9efe-c04f-46f9-9b3c-c231b81440e7","Type":"ContainerDied","Data":"e87a1a9430b34fc6584be507fdbae79ab9b462053c9664bfe16d38c63ba66a01"} Jan 22 09:09:47 crc kubenswrapper[4811]: I0122 09:09:47.169801 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wct2f" podStartSLOduration=1.5958949599999999 podStartE2EDuration="4.169778698s" podCreationTimestamp="2026-01-22 09:09:43 +0000 UTC" firstStartedPulling="2026-01-22 09:09:44.091056008 +0000 UTC m=+228.413243131" lastFinishedPulling="2026-01-22 09:09:46.664939746 +0000 UTC m=+230.987126869" observedRunningTime="2026-01-22 09:09:47.168229843 +0000 UTC m=+231.490416956" watchObservedRunningTime="2026-01-22 09:09:47.169778698 +0000 UTC m=+231.491965821" Jan 22 09:09:48 crc kubenswrapper[4811]: I0122 09:09:48.157961 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pbqkb" event={"ID":"aedb9efe-c04f-46f9-9b3c-c231b81440e7","Type":"ContainerStarted","Data":"ab803da7557160732945af0ea73d3196fcb0bfbbf929f1196ae5da2aaa72c0cd"} Jan 22 09:09:48 crc kubenswrapper[4811]: I0122 09:09:48.177850 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-pbqkb" podStartSLOduration=3.672950965 podStartE2EDuration="6.177830302s" podCreationTimestamp="2026-01-22 09:09:42 +0000 UTC" firstStartedPulling="2026-01-22 09:09:45.108093445 +0000 UTC m=+229.430280569" lastFinishedPulling="2026-01-22 09:09:47.612972783 +0000 UTC m=+231.935159906" observedRunningTime="2026-01-22 09:09:48.174133424 +0000 UTC m=+232.496320547" watchObservedRunningTime="2026-01-22 09:09:48.177830302 +0000 UTC m=+232.500017425" Jan 22 09:09:49 crc kubenswrapper[4811]: I0122 09:09:49.170449 4811 generic.go:334] "Generic (PLEG): container finished" podID="73a05511-7e27-41cd-9da6-e9277550936d" containerID="f5db08fd8ca92f3d579087a4f109d313bc53442bfa1c0ef212f4ae72687ac2cd" exitCode=0 Jan 22 09:09:49 crc kubenswrapper[4811]: I0122 09:09:49.170654 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xn9pt" event={"ID":"73a05511-7e27-41cd-9da6-e9277550936d","Type":"ContainerDied","Data":"f5db08fd8ca92f3d579087a4f109d313bc53442bfa1c0ef212f4ae72687ac2cd"} Jan 22 09:09:49 crc kubenswrapper[4811]: I0122 09:09:49.175457 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65jbw" event={"ID":"d1f41cc2-bb4a-415e-80a6-8ae31b4c354f","Type":"ContainerStarted","Data":"1c99d057f8eef0d402c7dd07a955f99a80940197e43d3cef6d87371a524bac69"} Jan 22 09:09:50 crc kubenswrapper[4811]: I0122 09:09:50.183749 4811 generic.go:334] "Generic (PLEG): container finished" podID="d1f41cc2-bb4a-415e-80a6-8ae31b4c354f" containerID="1c99d057f8eef0d402c7dd07a955f99a80940197e43d3cef6d87371a524bac69" exitCode=0 Jan 22 09:09:50 crc kubenswrapper[4811]: I0122 09:09:50.184008 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65jbw" event={"ID":"d1f41cc2-bb4a-415e-80a6-8ae31b4c354f","Type":"ContainerDied","Data":"1c99d057f8eef0d402c7dd07a955f99a80940197e43d3cef6d87371a524bac69"} Jan 22 09:09:51 crc kubenswrapper[4811]: I0122 09:09:51.195340 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65jbw" event={"ID":"d1f41cc2-bb4a-415e-80a6-8ae31b4c354f","Type":"ContainerStarted","Data":"5add107832ceed1efb76a96488eb2d88ab01b408e27b0e16854f10e421d837cd"} Jan 22 09:09:51 crc kubenswrapper[4811]: I0122 09:09:51.198172 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xn9pt" event={"ID":"73a05511-7e27-41cd-9da6-e9277550936d","Type":"ContainerStarted","Data":"0da84348a6f05ad02a6e49dc9dc462c0326627ff8ced53763782b83a5a09897f"} Jan 22 09:09:51 crc kubenswrapper[4811]: I0122 09:09:51.236672 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-65jbw" podStartSLOduration=2.5877552169999998 podStartE2EDuration="6.236648668s" podCreationTimestamp="2026-01-22 09:09:45 +0000 UTC" firstStartedPulling="2026-01-22 09:09:47.135236372 +0000 UTC m=+231.457423495" lastFinishedPulling="2026-01-22 09:09:50.784129823 +0000 UTC m=+235.106316946" observedRunningTime="2026-01-22 09:09:51.219729894 +0000 UTC m=+235.541917017" watchObservedRunningTime="2026-01-22 09:09:51.236648668 +0000 UTC m=+235.558835791" Jan 22 09:09:51 crc kubenswrapper[4811]: I0122 09:09:51.239016 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xn9pt" podStartSLOduration=3.735215307 podStartE2EDuration="6.23900317s" podCreationTimestamp="2026-01-22 09:09:45 +0000 UTC" 
firstStartedPulling="2026-01-22 09:09:47.142119848 +0000 UTC m=+231.464306971" lastFinishedPulling="2026-01-22 09:09:49.645907721 +0000 UTC m=+233.968094834" observedRunningTime="2026-01-22 09:09:51.234588631 +0000 UTC m=+235.556775754" watchObservedRunningTime="2026-01-22 09:09:51.23900317 +0000 UTC m=+235.561190293" Jan 22 09:09:53 crc kubenswrapper[4811]: I0122 09:09:53.485218 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wct2f" Jan 22 09:09:53 crc kubenswrapper[4811]: I0122 09:09:53.486307 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wct2f" Jan 22 09:09:53 crc kubenswrapper[4811]: I0122 09:09:53.517166 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wct2f" Jan 22 09:09:53 crc kubenswrapper[4811]: I0122 09:09:53.572880 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pbqkb" Jan 22 09:09:53 crc kubenswrapper[4811]: I0122 09:09:53.572930 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pbqkb" Jan 22 09:09:53 crc kubenswrapper[4811]: I0122 09:09:53.604443 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pbqkb" Jan 22 09:09:54 crc kubenswrapper[4811]: I0122 09:09:54.253267 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wct2f" Jan 22 09:09:54 crc kubenswrapper[4811]: I0122 09:09:54.254383 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pbqkb" Jan 22 09:09:55 crc kubenswrapper[4811]: I0122 09:09:55.672604 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xn9pt" Jan 22 09:09:55 crc kubenswrapper[4811]: I0122 09:09:55.675156 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xn9pt" Jan 22 09:09:55 crc kubenswrapper[4811]: I0122 09:09:55.714581 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xn9pt" Jan 22 09:09:55 crc kubenswrapper[4811]: I0122 09:09:55.901987 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-65jbw" Jan 22 09:09:55 crc kubenswrapper[4811]: I0122 09:09:55.903784 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-65jbw" Jan 22 09:09:56 crc kubenswrapper[4811]: I0122 09:09:56.258552 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xn9pt" Jan 22 09:09:56 crc kubenswrapper[4811]: I0122 09:09:56.943310 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-65jbw" podUID="d1f41cc2-bb4a-415e-80a6-8ae31b4c354f" containerName="registry-server" probeResult="failure" output=< Jan 22 09:09:56 crc kubenswrapper[4811]: timeout: failed to connect service ":50051" within 1s Jan 22 09:09:56 crc kubenswrapper[4811]: > Jan 22 09:10:00 crc kubenswrapper[4811]: I0122 09:10:00.258744 4811 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 22 09:10:00 crc kubenswrapper[4811]: I0122 09:10:00.259778 4811 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 09:10:00 crc kubenswrapper[4811]: I0122 09:10:00.259898 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:10:00 crc kubenswrapper[4811]: I0122 09:10:00.260271 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964" gracePeriod=15 Jan 22 09:10:00 crc kubenswrapper[4811]: I0122 09:10:00.260319 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff" gracePeriod=15 Jan 22 09:10:00 crc kubenswrapper[4811]: I0122 09:10:00.260300 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304" gracePeriod=15 Jan 22 09:10:00 crc kubenswrapper[4811]: I0122 09:10:00.260294 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d" gracePeriod=15 Jan 22 09:10:00 crc kubenswrapper[4811]: I0122 09:10:00.260801 4811 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 09:10:00 crc kubenswrapper[4811]: E0122 09:10:00.260947 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 09:10:00 crc kubenswrapper[4811]: I0122 09:10:00.260966 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 09:10:00 crc kubenswrapper[4811]: E0122 09:10:00.260977 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 22 09:10:00 crc kubenswrapper[4811]: I0122 09:10:00.260985 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 22 09:10:00 crc kubenswrapper[4811]: E0122 09:10:00.260990 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 22 09:10:00 crc kubenswrapper[4811]: I0122 09:10:00.260998 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 22 09:10:00 crc kubenswrapper[4811]: E0122 09:10:00.261004 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 22 09:10:00 crc 
kubenswrapper[4811]: I0122 09:10:00.261009 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 22 09:10:00 crc kubenswrapper[4811]: E0122 09:10:00.261023 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 22 09:10:00 crc kubenswrapper[4811]: I0122 09:10:00.261029 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 22 09:10:00 crc kubenswrapper[4811]: E0122 09:10:00.261038 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 22 09:10:00 crc kubenswrapper[4811]: I0122 09:10:00.261044 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 22 09:10:00 crc kubenswrapper[4811]: I0122 09:10:00.261132 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 22 09:10:00 crc kubenswrapper[4811]: I0122 09:10:00.261145 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 22 09:10:00 crc kubenswrapper[4811]: I0122 09:10:00.261156 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 22 09:10:00 crc kubenswrapper[4811]: I0122 09:10:00.261170 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 09:10:00 crc kubenswrapper[4811]: I0122 09:10:00.261178 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 22 09:10:00 crc kubenswrapper[4811]: I0122 09:10:00.261186 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 09:10:00 crc kubenswrapper[4811]: E0122 09:10:00.261265 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 09:10:00 crc kubenswrapper[4811]: I0122 09:10:00.261273 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 09:10:00 crc kubenswrapper[4811]: I0122 09:10:00.262712 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36" gracePeriod=15 Jan 22 09:10:00 crc kubenswrapper[4811]: I0122 09:10:00.313494 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 22 09:10:00 crc kubenswrapper[4811]: I0122 09:10:00.400532 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod 
\"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:10:00 crc kubenswrapper[4811]: I0122 09:10:00.400920 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:10:00 crc kubenswrapper[4811]: I0122 09:10:00.400960 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:10:00 crc kubenswrapper[4811]: I0122 09:10:00.401008 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:10:00 crc kubenswrapper[4811]: I0122 09:10:00.401306 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:10:00 crc kubenswrapper[4811]: I0122 09:10:00.401343 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:10:00 crc kubenswrapper[4811]: I0122 09:10:00.401362 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:10:00 crc kubenswrapper[4811]: I0122 09:10:00.401397 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:10:00 crc kubenswrapper[4811]: I0122 09:10:00.502224 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:10:00 crc kubenswrapper[4811]: I0122 09:10:00.502276 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" 
(UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:10:00 crc kubenswrapper[4811]: I0122 09:10:00.502301 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:10:00 crc kubenswrapper[4811]: I0122 09:10:00.502321 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:10:00 crc kubenswrapper[4811]: I0122 09:10:00.502344 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:10:00 crc kubenswrapper[4811]: I0122 09:10:00.502391 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:10:00 crc kubenswrapper[4811]: I0122 09:10:00.502412 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:10:00 crc kubenswrapper[4811]: I0122 09:10:00.502435 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:10:00 crc kubenswrapper[4811]: I0122 09:10:00.502519 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:10:00 crc kubenswrapper[4811]: I0122 09:10:00.502559 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:10:00 crc kubenswrapper[4811]: I0122 09:10:00.502596 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod 
\"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:10:00 crc kubenswrapper[4811]: I0122 09:10:00.502617 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:10:00 crc kubenswrapper[4811]: I0122 09:10:00.502613 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:10:00 crc kubenswrapper[4811]: I0122 09:10:00.502675 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:10:00 crc kubenswrapper[4811]: I0122 09:10:00.502657 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:10:00 crc kubenswrapper[4811]: I0122 09:10:00.502702 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:10:00 crc kubenswrapper[4811]: I0122 09:10:00.614982 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:10:00 crc kubenswrapper[4811]: W0122 09:10:00.634581 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-cf387af1875aaf6d9170c66319135c9838c0d3a5840a344f2bff93c68809594b WatchSource:0}: Error finding container cf387af1875aaf6d9170c66319135c9838c0d3a5840a344f2bff93c68809594b: Status 404 returned error can't find the container with id cf387af1875aaf6d9170c66319135c9838c0d3a5840a344f2bff93c68809594b Jan 22 09:10:00 crc kubenswrapper[4811]: E0122 09:10:00.638318 4811 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.26.94:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d028568aa526b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:10:00.636371563 +0000 UTC m=+244.958558686,LastTimestamp:2026-01-22 09:10:00.636371563 +0000 UTC m=+244.958558686,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:10:00 crc kubenswrapper[4811]: E0122 09:10:00.710408 4811 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.26.94:6443: connect: connection refused" Jan 22 09:10:00 crc kubenswrapper[4811]: E0122 09:10:00.710619 4811 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.26.94:6443: connect: connection refused" Jan 22 09:10:00 crc kubenswrapper[4811]: E0122 09:10:00.712066 4811 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.26.94:6443: connect: connection refused" Jan 22 09:10:00 crc kubenswrapper[4811]: E0122 09:10:00.713212 4811 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.26.94:6443: connect: connection refused" Jan 22 09:10:00 crc kubenswrapper[4811]: E0122 09:10:00.713496 4811 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.26.94:6443: connect: connection refused" Jan 22 09:10:00 crc kubenswrapper[4811]: I0122 09:10:00.713524 4811 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 22 09:10:00 crc kubenswrapper[4811]: 
E0122 09:10:00.713707 4811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.26.94:6443: connect: connection refused" interval="200ms" Jan 22 09:10:00 crc kubenswrapper[4811]: E0122 09:10:00.914962 4811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.26.94:6443: connect: connection refused" interval="400ms" Jan 22 09:10:01 crc kubenswrapper[4811]: E0122 09:10:01.081286 4811 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.26.94:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d028568aa526b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:10:00.636371563 +0000 UTC m=+244.958558686,LastTimestamp:2026-01-22 09:10:00.636371563 +0000 UTC m=+244.958558686,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:10:01 crc kubenswrapper[4811]: I0122 09:10:01.255923 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"f8f8d4ad68f63e1ccbe3294a3b7c9ac841c5e14d92f598bf7541c4a8663b9f4f"} Jan 22 09:10:01 crc kubenswrapper[4811]: I0122 09:10:01.256000 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"cf387af1875aaf6d9170c66319135c9838c0d3a5840a344f2bff93c68809594b"} Jan 22 09:10:01 crc kubenswrapper[4811]: I0122 09:10:01.256407 4811 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.26.94:6443: connect: connection refused" Jan 22 09:10:01 crc kubenswrapper[4811]: I0122 09:10:01.256663 4811 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.26.94:6443: connect: connection refused" Jan 22 09:10:01 crc kubenswrapper[4811]: I0122 09:10:01.258332 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 22 
09:10:01 crc kubenswrapper[4811]: I0122 09:10:01.259894 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 22 09:10:01 crc kubenswrapper[4811]: I0122 09:10:01.260560 4811 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d" exitCode=0 Jan 22 09:10:01 crc kubenswrapper[4811]: I0122 09:10:01.260578 4811 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36" exitCode=0 Jan 22 09:10:01 crc kubenswrapper[4811]: I0122 09:10:01.260586 4811 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff" exitCode=0 Jan 22 09:10:01 crc kubenswrapper[4811]: I0122 09:10:01.260594 4811 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304" exitCode=2 Jan 22 09:10:01 crc kubenswrapper[4811]: I0122 09:10:01.260697 4811 scope.go:117] "RemoveContainer" containerID="a86cb7b767900041a2946ed016b105369283ee61acdf725ade666acb1e926ed0" Jan 22 09:10:01 crc kubenswrapper[4811]: I0122 09:10:01.262732 4811 generic.go:334] "Generic (PLEG): container finished" podID="740119c0-67c2-4467-8095-b99b843e9d53" containerID="2ee44217b263139a95a41ab7712c75bc7c9a43b2c8e09b7036d6f5ddee6307b4" exitCode=0 Jan 22 09:10:01 crc kubenswrapper[4811]: I0122 09:10:01.262767 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"740119c0-67c2-4467-8095-b99b843e9d53","Type":"ContainerDied","Data":"2ee44217b263139a95a41ab7712c75bc7c9a43b2c8e09b7036d6f5ddee6307b4"} Jan 22 09:10:01 crc kubenswrapper[4811]: I0122 09:10:01.263154 4811 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.26.94:6443: connect: connection refused" Jan 22 09:10:01 crc kubenswrapper[4811]: I0122 09:10:01.263373 4811 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.26.94:6443: connect: connection refused" Jan 22 09:10:01 crc kubenswrapper[4811]: I0122 09:10:01.263551 4811 status_manager.go:851] "Failed to get status for pod" podUID="740119c0-67c2-4467-8095-b99b843e9d53" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.26.94:6443: connect: connection refused" Jan 22 09:10:01 crc kubenswrapper[4811]: E0122 09:10:01.315739 4811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.26.94:6443: connect: connection refused" interval="800ms" Jan 22 09:10:02 crc kubenswrapper[4811]: E0122 
09:10:02.117270 4811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.26.94:6443: connect: connection refused" interval="1.6s" Jan 22 09:10:02 crc kubenswrapper[4811]: I0122 09:10:02.271443 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 22 09:10:02 crc kubenswrapper[4811]: I0122 09:10:02.571588 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 22 09:10:02 crc kubenswrapper[4811]: I0122 09:10:02.572949 4811 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.26.94:6443: connect: connection refused" Jan 22 09:10:02 crc kubenswrapper[4811]: I0122 09:10:02.573481 4811 status_manager.go:851] "Failed to get status for pod" podUID="740119c0-67c2-4467-8095-b99b843e9d53" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.26.94:6443: connect: connection refused" Jan 22 09:10:02 crc kubenswrapper[4811]: I0122 09:10:02.720947 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 22 09:10:02 crc kubenswrapper[4811]: I0122 09:10:02.721841 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:10:02 crc kubenswrapper[4811]: I0122 09:10:02.722400 4811 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.26.94:6443: connect: connection refused" Jan 22 09:10:02 crc kubenswrapper[4811]: I0122 09:10:02.722695 4811 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.26.94:6443: connect: connection refused" Jan 22 09:10:02 crc kubenswrapper[4811]: I0122 09:10:02.722996 4811 status_manager.go:851] "Failed to get status for pod" podUID="740119c0-67c2-4467-8095-b99b843e9d53" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.26.94:6443: connect: connection refused" Jan 22 09:10:02 crc kubenswrapper[4811]: I0122 09:10:02.730439 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/740119c0-67c2-4467-8095-b99b843e9d53-kubelet-dir\") pod \"740119c0-67c2-4467-8095-b99b843e9d53\" (UID: \"740119c0-67c2-4467-8095-b99b843e9d53\") " Jan 22 09:10:02 crc kubenswrapper[4811]: I0122 09:10:02.730499 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/740119c0-67c2-4467-8095-b99b843e9d53-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "740119c0-67c2-4467-8095-b99b843e9d53" (UID: "740119c0-67c2-4467-8095-b99b843e9d53"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:10:02 crc kubenswrapper[4811]: I0122 09:10:02.730549 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/740119c0-67c2-4467-8095-b99b843e9d53-kube-api-access\") pod \"740119c0-67c2-4467-8095-b99b843e9d53\" (UID: \"740119c0-67c2-4467-8095-b99b843e9d53\") " Jan 22 09:10:02 crc kubenswrapper[4811]: I0122 09:10:02.730606 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/740119c0-67c2-4467-8095-b99b843e9d53-var-lock\") pod \"740119c0-67c2-4467-8095-b99b843e9d53\" (UID: \"740119c0-67c2-4467-8095-b99b843e9d53\") " Jan 22 09:10:02 crc kubenswrapper[4811]: I0122 09:10:02.730743 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/740119c0-67c2-4467-8095-b99b843e9d53-var-lock" (OuterVolumeSpecName: "var-lock") pod "740119c0-67c2-4467-8095-b99b843e9d53" (UID: "740119c0-67c2-4467-8095-b99b843e9d53"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:10:02 crc kubenswrapper[4811]: I0122 09:10:02.730777 4811 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/740119c0-67c2-4467-8095-b99b843e9d53-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:02 crc kubenswrapper[4811]: I0122 09:10:02.735586 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/740119c0-67c2-4467-8095-b99b843e9d53-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "740119c0-67c2-4467-8095-b99b843e9d53" (UID: "740119c0-67c2-4467-8095-b99b843e9d53"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:10:02 crc kubenswrapper[4811]: I0122 09:10:02.831012 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 22 09:10:02 crc kubenswrapper[4811]: I0122 09:10:02.831045 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 22 09:10:02 crc kubenswrapper[4811]: I0122 09:10:02.831066 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 22 09:10:02 crc kubenswrapper[4811]: I0122 09:10:02.831105 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:10:02 crc kubenswrapper[4811]: I0122 09:10:02.831127 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:10:02 crc kubenswrapper[4811]: I0122 09:10:02.831174 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:10:02 crc kubenswrapper[4811]: I0122 09:10:02.831396 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/740119c0-67c2-4467-8095-b99b843e9d53-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:02 crc kubenswrapper[4811]: I0122 09:10:02.831420 4811 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/740119c0-67c2-4467-8095-b99b843e9d53-var-lock\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:02 crc kubenswrapper[4811]: I0122 09:10:02.831430 4811 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:02 crc kubenswrapper[4811]: I0122 09:10:02.831438 4811 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:02 crc kubenswrapper[4811]: I0122 09:10:02.831447 4811 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:03 crc kubenswrapper[4811]: I0122 09:10:03.279129 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"740119c0-67c2-4467-8095-b99b843e9d53","Type":"ContainerDied","Data":"e89de37c7f64e0148dc6233eb1597c37a5c7ce81facf81ea06a7c8e100ac8799"} Jan 22 09:10:03 crc kubenswrapper[4811]: I0122 09:10:03.279169 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 22 09:10:03 crc kubenswrapper[4811]: I0122 09:10:03.279181 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e89de37c7f64e0148dc6233eb1597c37a5c7ce81facf81ea06a7c8e100ac8799" Jan 22 09:10:03 crc kubenswrapper[4811]: I0122 09:10:03.283045 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 22 09:10:03 crc kubenswrapper[4811]: I0122 09:10:03.283813 4811 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964" exitCode=0 Jan 22 09:10:03 crc kubenswrapper[4811]: I0122 09:10:03.283883 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:10:03 crc kubenswrapper[4811]: I0122 09:10:03.283888 4811 scope.go:117] "RemoveContainer" containerID="a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d" Jan 22 09:10:03 crc kubenswrapper[4811]: I0122 09:10:03.291065 4811 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.26.94:6443: connect: connection refused" Jan 22 09:10:03 crc kubenswrapper[4811]: I0122 09:10:03.291416 4811 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.26.94:6443: connect: connection refused" Jan 22 09:10:03 crc kubenswrapper[4811]: I0122 09:10:03.291650 4811 status_manager.go:851] "Failed to get status for pod" podUID="740119c0-67c2-4467-8095-b99b843e9d53" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.26.94:6443: connect: connection refused" Jan 22 09:10:03 crc kubenswrapper[4811]: I0122 09:10:03.296486 4811 status_manager.go:851] "Failed to get status for pod" podUID="740119c0-67c2-4467-8095-b99b843e9d53" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.26.94:6443: connect: connection refused" Jan 22 09:10:03 crc kubenswrapper[4811]: I0122 09:10:03.297777 4811 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.26.94:6443: connect: connection refused" Jan 22 09:10:03 crc kubenswrapper[4811]: I0122 09:10:03.298224 4811 scope.go:117] "RemoveContainer" containerID="a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36" Jan 22 09:10:03 crc kubenswrapper[4811]: I0122 09:10:03.298245 4811 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.26.94:6443: connect: connection refused" Jan 22 09:10:03 crc kubenswrapper[4811]: I0122 09:10:03.312692 4811 scope.go:117] "RemoveContainer" containerID="4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff" Jan 22 09:10:03 crc kubenswrapper[4811]: I0122 09:10:03.325578 4811 scope.go:117] "RemoveContainer" containerID="6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304" Jan 22 09:10:03 crc kubenswrapper[4811]: I0122 09:10:03.336814 4811 scope.go:117] "RemoveContainer" containerID="2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964" Jan 22 09:10:03 crc kubenswrapper[4811]: I0122 09:10:03.349788 4811 scope.go:117] "RemoveContainer" containerID="85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37" Jan 22 09:10:03 crc 
kubenswrapper[4811]: I0122 09:10:03.364683 4811 scope.go:117] "RemoveContainer" containerID="a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d" Jan 22 09:10:03 crc kubenswrapper[4811]: E0122 09:10:03.364981 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d\": container with ID starting with a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d not found: ID does not exist" containerID="a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d" Jan 22 09:10:03 crc kubenswrapper[4811]: I0122 09:10:03.365017 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d"} err="failed to get container status \"a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d\": rpc error: code = NotFound desc = could not find container \"a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d\": container with ID starting with a4daff941637e717d7dc33c5627db72d253f50f50288c23257e1eaa66cd38c5d not found: ID does not exist" Jan 22 09:10:03 crc kubenswrapper[4811]: I0122 09:10:03.365044 4811 scope.go:117] "RemoveContainer" containerID="a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36" Jan 22 09:10:03 crc kubenswrapper[4811]: E0122 09:10:03.365425 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36\": container with ID starting with a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36 not found: ID does not exist" containerID="a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36" Jan 22 09:10:03 crc kubenswrapper[4811]: I0122 09:10:03.365448 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36"} err="failed to get container status \"a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36\": rpc error: code = NotFound desc = could not find container \"a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36\": container with ID starting with a9efb1ddf2ce6601c9860410e3ccacc38359aef779a7c8524bf4ec5950743d36 not found: ID does not exist" Jan 22 09:10:03 crc kubenswrapper[4811]: I0122 09:10:03.365462 4811 scope.go:117] "RemoveContainer" containerID="4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff" Jan 22 09:10:03 crc kubenswrapper[4811]: E0122 09:10:03.365901 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff\": container with ID starting with 4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff not found: ID does not exist" containerID="4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff" Jan 22 09:10:03 crc kubenswrapper[4811]: I0122 09:10:03.365939 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff"} err="failed to get container status \"4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff\": rpc error: code = NotFound desc = could not find container 
\"4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff\": container with ID starting with 4aee61445030e02ac4fa9de4fefd94fe50e821ac8897c7403e173f2594a2e0ff not found: ID does not exist" Jan 22 09:10:03 crc kubenswrapper[4811]: I0122 09:10:03.365988 4811 scope.go:117] "RemoveContainer" containerID="6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304" Jan 22 09:10:03 crc kubenswrapper[4811]: E0122 09:10:03.366508 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304\": container with ID starting with 6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304 not found: ID does not exist" containerID="6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304" Jan 22 09:10:03 crc kubenswrapper[4811]: I0122 09:10:03.366536 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304"} err="failed to get container status \"6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304\": rpc error: code = NotFound desc = could not find container \"6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304\": container with ID starting with 6d66c2a7980e144d2ce01cb8ebac91297e012c09d4b9aec617e747542e1ca304 not found: ID does not exist" Jan 22 09:10:03 crc kubenswrapper[4811]: I0122 09:10:03.366551 4811 scope.go:117] "RemoveContainer" containerID="2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964" Jan 22 09:10:03 crc kubenswrapper[4811]: E0122 09:10:03.367042 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964\": container with ID starting with 2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964 not found: ID does not exist" containerID="2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964" Jan 22 09:10:03 crc kubenswrapper[4811]: I0122 09:10:03.367067 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964"} err="failed to get container status \"2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964\": rpc error: code = NotFound desc = could not find container \"2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964\": container with ID starting with 2c0b4f518da69d1ede47cb9d322e9d9ca120cfc5eb770f109f0e5fd2e3260964 not found: ID does not exist" Jan 22 09:10:03 crc kubenswrapper[4811]: I0122 09:10:03.367084 4811 scope.go:117] "RemoveContainer" containerID="85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37" Jan 22 09:10:03 crc kubenswrapper[4811]: E0122 09:10:03.367614 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\": container with ID starting with 85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37 not found: ID does not exist" containerID="85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37" Jan 22 09:10:03 crc kubenswrapper[4811]: I0122 09:10:03.367653 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37"} 
err="failed to get container status \"85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\": rpc error: code = NotFound desc = could not find container \"85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37\": container with ID starting with 85e811085c9b345899231a1fcf56619c1cc5c8b0e6d22fc897fb96f20ba72e37 not found: ID does not exist" Jan 22 09:10:03 crc kubenswrapper[4811]: E0122 09:10:03.718252 4811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.26.94:6443: connect: connection refused" interval="3.2s" Jan 22 09:10:03 crc kubenswrapper[4811]: I0122 09:10:03.997083 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 22 09:10:05 crc kubenswrapper[4811]: I0122 09:10:05.932782 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-65jbw" Jan 22 09:10:05 crc kubenswrapper[4811]: I0122 09:10:05.933303 4811 status_manager.go:851] "Failed to get status for pod" podUID="d1f41cc2-bb4a-415e-80a6-8ae31b4c354f" pod="openshift-marketplace/redhat-operators-65jbw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-65jbw\": dial tcp 192.168.26.94:6443: connect: connection refused" Jan 22 09:10:05 crc kubenswrapper[4811]: I0122 09:10:05.933607 4811 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.26.94:6443: connect: connection refused" Jan 22 09:10:05 crc kubenswrapper[4811]: I0122 09:10:05.933911 4811 status_manager.go:851] "Failed to get status for pod" podUID="740119c0-67c2-4467-8095-b99b843e9d53" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.26.94:6443: connect: connection refused" Jan 22 09:10:05 crc kubenswrapper[4811]: I0122 09:10:05.962089 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-65jbw" Jan 22 09:10:05 crc kubenswrapper[4811]: I0122 09:10:05.962507 4811 status_manager.go:851] "Failed to get status for pod" podUID="740119c0-67c2-4467-8095-b99b843e9d53" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.26.94:6443: connect: connection refused" Jan 22 09:10:05 crc kubenswrapper[4811]: I0122 09:10:05.962814 4811 status_manager.go:851] "Failed to get status for pod" podUID="d1f41cc2-bb4a-415e-80a6-8ae31b4c354f" pod="openshift-marketplace/redhat-operators-65jbw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-65jbw\": dial tcp 192.168.26.94:6443: connect: connection refused" Jan 22 09:10:05 crc kubenswrapper[4811]: I0122 09:10:05.963130 4811 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.26.94:6443: connect: connection refused" Jan 22 09:10:05 crc kubenswrapper[4811]: I0122 09:10:05.994072 4811 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.26.94:6443: connect: connection refused" Jan 22 09:10:05 crc kubenswrapper[4811]: I0122 09:10:05.995364 4811 status_manager.go:851] "Failed to get status for pod" podUID="740119c0-67c2-4467-8095-b99b843e9d53" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.26.94:6443: connect: connection refused" Jan 22 09:10:05 crc kubenswrapper[4811]: I0122 09:10:05.995948 4811 status_manager.go:851] "Failed to get status for pod" podUID="d1f41cc2-bb4a-415e-80a6-8ae31b4c354f" pod="openshift-marketplace/redhat-operators-65jbw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-65jbw\": dial tcp 192.168.26.94:6443: connect: connection refused" Jan 22 09:10:06 crc kubenswrapper[4811]: E0122 09:10:06.919472 4811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.26.94:6443: connect: connection refused" interval="6.4s" Jan 22 09:10:10 crc kubenswrapper[4811]: I0122 09:10:10.991545 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:10:10 crc kubenswrapper[4811]: I0122 09:10:10.992719 4811 status_manager.go:851] "Failed to get status for pod" podUID="d1f41cc2-bb4a-415e-80a6-8ae31b4c354f" pod="openshift-marketplace/redhat-operators-65jbw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-65jbw\": dial tcp 192.168.26.94:6443: connect: connection refused" Jan 22 09:10:10 crc kubenswrapper[4811]: I0122 09:10:10.993003 4811 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.26.94:6443: connect: connection refused" Jan 22 09:10:10 crc kubenswrapper[4811]: I0122 09:10:10.993237 4811 status_manager.go:851] "Failed to get status for pod" podUID="740119c0-67c2-4467-8095-b99b843e9d53" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.26.94:6443: connect: connection refused" Jan 22 09:10:11 crc kubenswrapper[4811]: I0122 09:10:11.005229 4811 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="30b44c40-e6d4-4902-98e9-a259269d8bf7" Jan 22 09:10:11 crc kubenswrapper[4811]: I0122 09:10:11.005258 4811 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="30b44c40-e6d4-4902-98e9-a259269d8bf7" Jan 22 09:10:11 crc kubenswrapper[4811]: E0122 09:10:11.005534 4811 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.26.94:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:10:11 crc kubenswrapper[4811]: I0122 09:10:11.006047 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:10:11 crc kubenswrapper[4811]: W0122 09:10:11.024216 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-5b3035d72e264c69b4a6a6eafcd53befadb7a1e67708bba7b411c6957b1851c5 WatchSource:0}: Error finding container 5b3035d72e264c69b4a6a6eafcd53befadb7a1e67708bba7b411c6957b1851c5: Status 404 returned error can't find the container with id 5b3035d72e264c69b4a6a6eafcd53befadb7a1e67708bba7b411c6957b1851c5 Jan 22 09:10:11 crc kubenswrapper[4811]: E0122 09:10:11.082926 4811 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.26.94:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d028568aa526b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 09:10:00.636371563 +0000 UTC m=+244.958558686,LastTimestamp:2026-01-22 09:10:00.636371563 +0000 UTC m=+244.958558686,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 09:10:11 crc kubenswrapper[4811]: I0122 09:10:11.333033 4811 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="cdcf85e35e77dbb2794275e841c622590ad7842d3232ea1a15f8918e8f1a4192" exitCode=0 Jan 22 09:10:11 crc kubenswrapper[4811]: I0122 09:10:11.333096 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"cdcf85e35e77dbb2794275e841c622590ad7842d3232ea1a15f8918e8f1a4192"} Jan 22 09:10:11 crc kubenswrapper[4811]: I0122 09:10:11.333126 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5b3035d72e264c69b4a6a6eafcd53befadb7a1e67708bba7b411c6957b1851c5"} Jan 22 09:10:11 crc kubenswrapper[4811]: I0122 09:10:11.333363 4811 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="30b44c40-e6d4-4902-98e9-a259269d8bf7" Jan 22 09:10:11 crc kubenswrapper[4811]: I0122 09:10:11.333380 4811 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="30b44c40-e6d4-4902-98e9-a259269d8bf7" Jan 22 09:10:11 crc kubenswrapper[4811]: I0122 09:10:11.333649 4811 status_manager.go:851] "Failed to get status for pod" podUID="d1f41cc2-bb4a-415e-80a6-8ae31b4c354f" pod="openshift-marketplace/redhat-operators-65jbw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-65jbw\": dial tcp 192.168.26.94:6443: connect: connection refused" Jan 22 09:10:11 crc kubenswrapper[4811]: E0122 
09:10:11.333686 4811 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.26.94:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:10:11 crc kubenswrapper[4811]: I0122 09:10:11.333838 4811 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.26.94:6443: connect: connection refused" Jan 22 09:10:11 crc kubenswrapper[4811]: I0122 09:10:11.334030 4811 status_manager.go:851] "Failed to get status for pod" podUID="740119c0-67c2-4467-8095-b99b843e9d53" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.26.94:6443: connect: connection refused" Jan 22 09:10:12 crc kubenswrapper[4811]: I0122 09:10:12.343050 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"29105de22ca9488f0116eaa32b864ef9cf4d4601e6dc1b46a2a6c348c8dd1802"} Jan 22 09:10:12 crc kubenswrapper[4811]: I0122 09:10:12.343472 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e1142ef5ffca4b8d2354f8dfec3ceb96a4e8fc47b3ca2761a804d203470896e0"} Jan 22 09:10:12 crc kubenswrapper[4811]: I0122 09:10:12.343489 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"bb9d0f44cc0cf80b4221f663d9db637aab9d94d0681e77c32d60f28c4f7af972"} Jan 22 09:10:12 crc kubenswrapper[4811]: I0122 09:10:12.343500 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"91a18261e62f41e4223552ba8c87322befaf52cf0db45057ba91a956d4601f66"} Jan 22 09:10:12 crc kubenswrapper[4811]: I0122 09:10:12.343510 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6bcc7d3a22bdfdba7807ac81d0504e5af8be685819adeebfa51ea52a85db22be"} Jan 22 09:10:12 crc kubenswrapper[4811]: I0122 09:10:12.343886 4811 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="30b44c40-e6d4-4902-98e9-a259269d8bf7" Jan 22 09:10:12 crc kubenswrapper[4811]: I0122 09:10:12.343905 4811 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="30b44c40-e6d4-4902-98e9-a259269d8bf7" Jan 22 09:10:12 crc kubenswrapper[4811]: I0122 09:10:12.344162 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:10:16 crc kubenswrapper[4811]: I0122 09:10:16.009158 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:10:16 crc kubenswrapper[4811]: I0122 09:10:16.010093 4811 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:10:16 crc kubenswrapper[4811]: I0122 09:10:16.014956 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:10:17 crc kubenswrapper[4811]: I0122 09:10:17.913807 4811 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:10:17 crc kubenswrapper[4811]: I0122 09:10:17.962532 4811 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="0602c49b-ad1d-4a33-935a-5bbe79894a6c" Jan 22 09:10:18 crc kubenswrapper[4811]: E0122 09:10:18.074990 4811 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-4419625f66ae957b2d9b146fc8b085a4a95c3f47860d6b4847f8fe71a6c4c86a.scope\": RecentStats: unable to find data in memory cache]" Jan 22 09:10:18 crc kubenswrapper[4811]: I0122 09:10:18.378682 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 22 09:10:18 crc kubenswrapper[4811]: I0122 09:10:18.378739 4811 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="4419625f66ae957b2d9b146fc8b085a4a95c3f47860d6b4847f8fe71a6c4c86a" exitCode=1 Jan 22 09:10:18 crc kubenswrapper[4811]: I0122 09:10:18.378842 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"4419625f66ae957b2d9b146fc8b085a4a95c3f47860d6b4847f8fe71a6c4c86a"} Jan 22 09:10:18 crc kubenswrapper[4811]: I0122 09:10:18.379088 4811 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="30b44c40-e6d4-4902-98e9-a259269d8bf7" Jan 22 09:10:18 crc kubenswrapper[4811]: I0122 09:10:18.379104 4811 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="30b44c40-e6d4-4902-98e9-a259269d8bf7" Jan 22 09:10:18 crc kubenswrapper[4811]: I0122 09:10:18.379320 4811 scope.go:117] "RemoveContainer" containerID="4419625f66ae957b2d9b146fc8b085a4a95c3f47860d6b4847f8fe71a6c4c86a" Jan 22 09:10:18 crc kubenswrapper[4811]: I0122 09:10:18.406310 4811 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="0602c49b-ad1d-4a33-935a-5bbe79894a6c" Jan 22 09:10:19 crc kubenswrapper[4811]: I0122 09:10:19.193877 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:10:19 crc kubenswrapper[4811]: I0122 09:10:19.386978 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 22 09:10:19 crc kubenswrapper[4811]: I0122 09:10:19.387052 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"31cffde1d5d36b33349a490d3ff81a1b102831650e97eca4dc373aa2c0b57dde"} Jan 22 09:10:24 crc kubenswrapper[4811]: I0122 09:10:24.297111 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 22 09:10:24 crc kubenswrapper[4811]: I0122 09:10:24.480179 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 22 09:10:24 crc kubenswrapper[4811]: I0122 09:10:24.509849 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 22 09:10:24 crc kubenswrapper[4811]: I0122 09:10:24.565858 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 22 09:10:24 crc kubenswrapper[4811]: I0122 09:10:24.568210 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 22 09:10:24 crc kubenswrapper[4811]: I0122 09:10:24.612415 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 22 09:10:25 crc kubenswrapper[4811]: I0122 09:10:25.193466 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 22 09:10:25 crc kubenswrapper[4811]: I0122 09:10:25.207382 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 22 09:10:25 crc kubenswrapper[4811]: I0122 09:10:25.412276 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 22 09:10:25 crc kubenswrapper[4811]: I0122 09:10:25.728568 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 22 09:10:26 crc kubenswrapper[4811]: I0122 09:10:26.104026 4811 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 22 09:10:26 crc kubenswrapper[4811]: I0122 09:10:26.126759 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 22 09:10:26 crc kubenswrapper[4811]: I0122 09:10:26.214920 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 22 09:10:26 crc kubenswrapper[4811]: I0122 09:10:26.313775 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 22 09:10:26 crc kubenswrapper[4811]: I0122 09:10:26.323868 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 22 09:10:26 crc kubenswrapper[4811]: I0122 09:10:26.418344 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 22 09:10:26 crc kubenswrapper[4811]: I0122 09:10:26.706134 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 22 09:10:26 crc kubenswrapper[4811]: I0122 09:10:26.975468 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 22 09:10:27 crc kubenswrapper[4811]: I0122 09:10:27.009037 4811 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 22 09:10:27 crc kubenswrapper[4811]: I0122 09:10:27.169125 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 22 09:10:27 crc kubenswrapper[4811]: I0122 09:10:27.337310 4811 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 22 09:10:27 crc kubenswrapper[4811]: I0122 09:10:27.342675 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=27.342652567000002 podStartE2EDuration="27.342652567s" podCreationTimestamp="2026-01-22 09:10:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:10:17.954953316 +0000 UTC m=+262.277140439" watchObservedRunningTime="2026-01-22 09:10:27.342652567 +0000 UTC m=+271.664839690" Jan 22 09:10:27 crc kubenswrapper[4811]: I0122 09:10:27.343180 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 22 09:10:27 crc kubenswrapper[4811]: I0122 09:10:27.344830 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 09:10:27 crc kubenswrapper[4811]: I0122 09:10:27.344881 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 09:10:27 crc kubenswrapper[4811]: I0122 09:10:27.348860 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:10:27 crc kubenswrapper[4811]: I0122 09:10:27.348916 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 09:10:27 crc kubenswrapper[4811]: I0122 09:10:27.363788 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=10.363774311 podStartE2EDuration="10.363774311s" podCreationTimestamp="2026-01-22 09:10:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:10:27.358951445 +0000 UTC m=+271.681138568" watchObservedRunningTime="2026-01-22 09:10:27.363774311 +0000 UTC m=+271.685961434" Jan 22 09:10:27 crc kubenswrapper[4811]: I0122 09:10:27.458503 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 22 09:10:27 crc kubenswrapper[4811]: I0122 09:10:27.514669 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 22 09:10:27 crc kubenswrapper[4811]: I0122 09:10:27.585925 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 22 09:10:27 crc kubenswrapper[4811]: I0122 09:10:27.677435 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 22 09:10:27 crc kubenswrapper[4811]: I0122 09:10:27.694010 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 22 09:10:27 crc kubenswrapper[4811]: I0122 09:10:27.774193 
4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 22 09:10:27 crc kubenswrapper[4811]: I0122 09:10:27.827591 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 22 09:10:27 crc kubenswrapper[4811]: I0122 09:10:27.955677 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 22 09:10:28 crc kubenswrapper[4811]: I0122 09:10:28.056686 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 22 09:10:28 crc kubenswrapper[4811]: I0122 09:10:28.083089 4811 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 22 09:10:28 crc kubenswrapper[4811]: I0122 09:10:28.083363 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://f8f8d4ad68f63e1ccbe3294a3b7c9ac841c5e14d92f598bf7541c4a8663b9f4f" gracePeriod=5 Jan 22 09:10:28 crc kubenswrapper[4811]: I0122 09:10:28.177763 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 22 09:10:28 crc kubenswrapper[4811]: I0122 09:10:28.262621 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 22 09:10:28 crc kubenswrapper[4811]: I0122 09:10:28.334889 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:10:28 crc kubenswrapper[4811]: I0122 09:10:28.338871 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:10:28 crc kubenswrapper[4811]: I0122 09:10:28.442392 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:10:28 crc kubenswrapper[4811]: I0122 09:10:28.445300 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 09:10:28 crc kubenswrapper[4811]: I0122 09:10:28.449006 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 22 09:10:28 crc kubenswrapper[4811]: I0122 09:10:28.486980 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 22 09:10:28 crc kubenswrapper[4811]: I0122 09:10:28.805503 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 22 09:10:28 crc kubenswrapper[4811]: I0122 09:10:28.929546 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 22 09:10:29 crc kubenswrapper[4811]: I0122 09:10:29.153201 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 22 09:10:29 crc kubenswrapper[4811]: I0122 09:10:29.214441 4811 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-image-registry"/"image-registry-certificates" Jan 22 09:10:29 crc kubenswrapper[4811]: I0122 09:10:29.222754 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 22 09:10:29 crc kubenswrapper[4811]: I0122 09:10:29.302378 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 22 09:10:29 crc kubenswrapper[4811]: I0122 09:10:29.333585 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 22 09:10:29 crc kubenswrapper[4811]: I0122 09:10:29.348932 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 22 09:10:29 crc kubenswrapper[4811]: I0122 09:10:29.407100 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 22 09:10:29 crc kubenswrapper[4811]: I0122 09:10:29.643291 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 22 09:10:29 crc kubenswrapper[4811]: I0122 09:10:29.709560 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 22 09:10:29 crc kubenswrapper[4811]: I0122 09:10:29.816088 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 22 09:10:29 crc kubenswrapper[4811]: I0122 09:10:29.825437 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 22 09:10:29 crc kubenswrapper[4811]: I0122 09:10:29.898243 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 22 09:10:29 crc kubenswrapper[4811]: I0122 09:10:29.918163 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 22 09:10:29 crc kubenswrapper[4811]: I0122 09:10:29.926890 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 22 09:10:29 crc kubenswrapper[4811]: I0122 09:10:29.964895 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 22 09:10:29 crc kubenswrapper[4811]: I0122 09:10:29.965999 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 22 09:10:29 crc kubenswrapper[4811]: I0122 09:10:29.986140 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 22 09:10:30 crc kubenswrapper[4811]: I0122 09:10:30.125137 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 22 09:10:30 crc kubenswrapper[4811]: I0122 09:10:30.271058 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 22 09:10:30 crc kubenswrapper[4811]: I0122 09:10:30.379246 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 22 09:10:30 crc kubenswrapper[4811]: I0122 09:10:30.451536 4811 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 22 09:10:30 crc kubenswrapper[4811]: I0122 09:10:30.516568 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 22 09:10:30 crc kubenswrapper[4811]: I0122 09:10:30.538648 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 22 09:10:30 crc kubenswrapper[4811]: I0122 09:10:30.728183 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 22 09:10:30 crc kubenswrapper[4811]: I0122 09:10:30.795687 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 22 09:10:31 crc kubenswrapper[4811]: I0122 09:10:31.447307 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 22 09:10:32 crc kubenswrapper[4811]: I0122 09:10:32.006200 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 22 09:10:32 crc kubenswrapper[4811]: I0122 09:10:32.213013 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 22 09:10:32 crc kubenswrapper[4811]: I0122 09:10:32.264841 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 22 09:10:32 crc kubenswrapper[4811]: I0122 09:10:32.588734 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 22 09:10:32 crc kubenswrapper[4811]: I0122 09:10:32.659277 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 22 09:10:32 crc kubenswrapper[4811]: I0122 09:10:32.838308 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 22 09:10:32 crc kubenswrapper[4811]: I0122 09:10:32.959553 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 22 09:10:33 crc kubenswrapper[4811]: I0122 09:10:33.096420 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 22 09:10:33 crc kubenswrapper[4811]: I0122 09:10:33.346365 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 22 09:10:33 crc kubenswrapper[4811]: I0122 09:10:33.385122 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 22 09:10:33 crc kubenswrapper[4811]: I0122 09:10:33.457608 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 22 09:10:33 crc kubenswrapper[4811]: I0122 09:10:33.458120 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 22 09:10:33 crc kubenswrapper[4811]: I0122 09:10:33.458466 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 22 09:10:33 crc kubenswrapper[4811]: I0122 09:10:33.469825 4811 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 22 09:10:33 crc kubenswrapper[4811]: I0122 09:10:33.469898 4811 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="f8f8d4ad68f63e1ccbe3294a3b7c9ac841c5e14d92f598bf7541c4a8663b9f4f" exitCode=137 Jan 22 09:10:33 crc kubenswrapper[4811]: I0122 09:10:33.604999 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 22 09:10:33 crc kubenswrapper[4811]: I0122 09:10:33.642918 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 22 09:10:33 crc kubenswrapper[4811]: I0122 09:10:33.642994 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:10:33 crc kubenswrapper[4811]: I0122 09:10:33.800779 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 22 09:10:33 crc kubenswrapper[4811]: I0122 09:10:33.800846 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 22 09:10:33 crc kubenswrapper[4811]: I0122 09:10:33.800872 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 22 09:10:33 crc kubenswrapper[4811]: I0122 09:10:33.800899 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 22 09:10:33 crc kubenswrapper[4811]: I0122 09:10:33.800902 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:10:33 crc kubenswrapper[4811]: I0122 09:10:33.800937 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:10:33 crc kubenswrapper[4811]: I0122 09:10:33.800959 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:10:33 crc kubenswrapper[4811]: I0122 09:10:33.800979 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 22 09:10:33 crc kubenswrapper[4811]: I0122 09:10:33.801016 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:10:33 crc kubenswrapper[4811]: I0122 09:10:33.801494 4811 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:33 crc kubenswrapper[4811]: I0122 09:10:33.801516 4811 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:33 crc kubenswrapper[4811]: I0122 09:10:33.801524 4811 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:33 crc kubenswrapper[4811]: I0122 09:10:33.801533 4811 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:33 crc kubenswrapper[4811]: I0122 09:10:33.807772 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:10:33 crc kubenswrapper[4811]: I0122 09:10:33.874371 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 22 09:10:33 crc kubenswrapper[4811]: I0122 09:10:33.892697 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 22 09:10:33 crc kubenswrapper[4811]: I0122 09:10:33.902316 4811 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 22 09:10:33 crc kubenswrapper[4811]: I0122 09:10:33.999146 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 22 09:10:33 crc kubenswrapper[4811]: I0122 09:10:33.999388 4811 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 22 09:10:34 crc kubenswrapper[4811]: I0122 09:10:34.008409 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 22 09:10:34 crc kubenswrapper[4811]: I0122 09:10:34.008441 4811 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="2a1c7fa4-aad4-4ad0-84d6-e970a5d79efb" Jan 22 09:10:34 crc kubenswrapper[4811]: I0122 09:10:34.010685 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 22 09:10:34 crc kubenswrapper[4811]: I0122 09:10:34.010715 4811 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="2a1c7fa4-aad4-4ad0-84d6-e970a5d79efb" Jan 22 09:10:34 crc kubenswrapper[4811]: I0122 09:10:34.043969 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 22 09:10:34 crc kubenswrapper[4811]: I0122 09:10:34.084788 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 22 09:10:34 crc kubenswrapper[4811]: I0122 09:10:34.112274 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 22 09:10:34 crc kubenswrapper[4811]: I0122 09:10:34.248698 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 22 09:10:34 crc kubenswrapper[4811]: I0122 09:10:34.400678 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 22 09:10:34 crc kubenswrapper[4811]: I0122 09:10:34.432971 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 22 09:10:34 crc kubenswrapper[4811]: I0122 09:10:34.459899 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 22 09:10:34 crc kubenswrapper[4811]: I0122 09:10:34.476216 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 22 09:10:34 crc kubenswrapper[4811]: I0122 09:10:34.476288 4811 scope.go:117] "RemoveContainer" containerID="f8f8d4ad68f63e1ccbe3294a3b7c9ac841c5e14d92f598bf7541c4a8663b9f4f" Jan 22 09:10:34 crc kubenswrapper[4811]: I0122 09:10:34.476348 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 09:10:34 crc kubenswrapper[4811]: I0122 09:10:34.508390 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 22 09:10:34 crc kubenswrapper[4811]: I0122 09:10:34.554725 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 22 09:10:34 crc kubenswrapper[4811]: I0122 09:10:34.835349 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 22 09:10:34 crc kubenswrapper[4811]: I0122 09:10:34.896494 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 22 09:10:34 crc kubenswrapper[4811]: I0122 09:10:34.956054 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 22 09:10:35 crc kubenswrapper[4811]: I0122 09:10:35.107104 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 22 09:10:35 crc kubenswrapper[4811]: I0122 09:10:35.114471 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 22 09:10:35 crc kubenswrapper[4811]: I0122 09:10:35.168759 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 22 09:10:35 crc kubenswrapper[4811]: I0122 09:10:35.279557 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 22 09:10:35 crc kubenswrapper[4811]: I0122 09:10:35.295806 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 22 09:10:35 crc kubenswrapper[4811]: I0122 09:10:35.314418 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 22 09:10:35 crc kubenswrapper[4811]: I0122 09:10:35.372616 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 22 09:10:35 crc kubenswrapper[4811]: I0122 09:10:35.378598 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 22 09:10:35 crc kubenswrapper[4811]: I0122 09:10:35.386213 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 22 09:10:35 crc kubenswrapper[4811]: I0122 09:10:35.464085 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 22 09:10:35 crc kubenswrapper[4811]: I0122 09:10:35.492730 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 22 09:10:35 crc kubenswrapper[4811]: I0122 
09:10:35.507913 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 22 09:10:35 crc kubenswrapper[4811]: I0122 09:10:35.519614 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 22 09:10:35 crc kubenswrapper[4811]: I0122 09:10:35.522816 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 22 09:10:35 crc kubenswrapper[4811]: I0122 09:10:35.595534 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 22 09:10:35 crc kubenswrapper[4811]: I0122 09:10:35.820153 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 22 09:10:35 crc kubenswrapper[4811]: I0122 09:10:35.849936 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 22 09:10:35 crc kubenswrapper[4811]: I0122 09:10:35.868965 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 22 09:10:35 crc kubenswrapper[4811]: I0122 09:10:35.940303 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 22 09:10:36 crc kubenswrapper[4811]: I0122 09:10:36.005957 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 22 09:10:36 crc kubenswrapper[4811]: I0122 09:10:36.033611 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 22 09:10:36 crc kubenswrapper[4811]: I0122 09:10:36.044957 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 22 09:10:36 crc kubenswrapper[4811]: I0122 09:10:36.058816 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 22 09:10:36 crc kubenswrapper[4811]: I0122 09:10:36.200778 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 22 09:10:36 crc kubenswrapper[4811]: I0122 09:10:36.353378 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 22 09:10:36 crc kubenswrapper[4811]: I0122 09:10:36.406340 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 22 09:10:36 crc kubenswrapper[4811]: I0122 09:10:36.515229 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 22 09:10:36 crc kubenswrapper[4811]: I0122 09:10:36.588886 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 22 09:10:36 crc kubenswrapper[4811]: I0122 09:10:36.718147 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 22 09:10:36 crc kubenswrapper[4811]: I0122 09:10:36.721154 4811 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca"/"kube-root-ca.crt" Jan 22 09:10:36 crc kubenswrapper[4811]: I0122 09:10:36.764008 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 22 09:10:36 crc kubenswrapper[4811]: I0122 09:10:36.793133 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 22 09:10:36 crc kubenswrapper[4811]: I0122 09:10:36.821575 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 22 09:10:36 crc kubenswrapper[4811]: I0122 09:10:36.957669 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 22 09:10:36 crc kubenswrapper[4811]: I0122 09:10:36.981179 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 22 09:10:37 crc kubenswrapper[4811]: I0122 09:10:37.028106 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 22 09:10:37 crc kubenswrapper[4811]: I0122 09:10:37.032243 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 22 09:10:37 crc kubenswrapper[4811]: I0122 09:10:37.041781 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 22 09:10:37 crc kubenswrapper[4811]: I0122 09:10:37.069987 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 22 09:10:37 crc kubenswrapper[4811]: I0122 09:10:37.112044 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 22 09:10:37 crc kubenswrapper[4811]: I0122 09:10:37.168708 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 22 09:10:37 crc kubenswrapper[4811]: I0122 09:10:37.181727 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 22 09:10:37 crc kubenswrapper[4811]: I0122 09:10:37.351189 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 22 09:10:37 crc kubenswrapper[4811]: I0122 09:10:37.455279 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 22 09:10:37 crc kubenswrapper[4811]: I0122 09:10:37.497177 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 22 09:10:37 crc kubenswrapper[4811]: I0122 09:10:37.532658 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 22 09:10:37 crc kubenswrapper[4811]: I0122 09:10:37.663997 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 22 09:10:37 crc kubenswrapper[4811]: I0122 09:10:37.761579 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 22 09:10:37 crc kubenswrapper[4811]: I0122 09:10:37.811262 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 22 09:10:37 crc 
kubenswrapper[4811]: I0122 09:10:37.815752 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 22 09:10:37 crc kubenswrapper[4811]: I0122 09:10:37.916205 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 22 09:10:37 crc kubenswrapper[4811]: I0122 09:10:37.919690 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 22 09:10:37 crc kubenswrapper[4811]: I0122 09:10:37.954601 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 22 09:10:38 crc kubenswrapper[4811]: I0122 09:10:38.036266 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 22 09:10:38 crc kubenswrapper[4811]: I0122 09:10:38.046394 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 22 09:10:38 crc kubenswrapper[4811]: I0122 09:10:38.144867 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 22 09:10:38 crc kubenswrapper[4811]: I0122 09:10:38.339465 4811 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 22 09:10:38 crc kubenswrapper[4811]: I0122 09:10:38.393586 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 22 09:10:38 crc kubenswrapper[4811]: I0122 09:10:38.620263 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 22 09:10:38 crc kubenswrapper[4811]: I0122 09:10:38.708033 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 22 09:10:38 crc kubenswrapper[4811]: I0122 09:10:38.769580 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 22 09:10:38 crc kubenswrapper[4811]: I0122 09:10:38.837379 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 22 09:10:38 crc kubenswrapper[4811]: I0122 09:10:38.841348 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 22 09:10:38 crc kubenswrapper[4811]: I0122 09:10:38.987171 4811 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 22 09:10:38 crc kubenswrapper[4811]: I0122 09:10:38.996760 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 22 09:10:39 crc kubenswrapper[4811]: I0122 09:10:39.031883 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 22 09:10:39 crc kubenswrapper[4811]: I0122 09:10:39.129548 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 22 09:10:39 crc kubenswrapper[4811]: I0122 09:10:39.162091 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 22 09:10:39 crc 
kubenswrapper[4811]: I0122 09:10:39.267421 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 22 09:10:39 crc kubenswrapper[4811]: I0122 09:10:39.293365 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 22 09:10:39 crc kubenswrapper[4811]: I0122 09:10:39.392425 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 22 09:10:39 crc kubenswrapper[4811]: I0122 09:10:39.444153 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 22 09:10:39 crc kubenswrapper[4811]: I0122 09:10:39.474492 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 22 09:10:39 crc kubenswrapper[4811]: I0122 09:10:39.570920 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 22 09:10:39 crc kubenswrapper[4811]: I0122 09:10:39.666247 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 22 09:10:39 crc kubenswrapper[4811]: I0122 09:10:39.721543 4811 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 22 09:10:39 crc kubenswrapper[4811]: I0122 09:10:39.898475 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 22 09:10:39 crc kubenswrapper[4811]: I0122 09:10:39.965893 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 22 09:10:39 crc kubenswrapper[4811]: I0122 09:10:39.997993 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 22 09:10:40 crc kubenswrapper[4811]: I0122 09:10:40.019143 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 22 09:10:40 crc kubenswrapper[4811]: I0122 09:10:40.134232 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 22 09:10:40 crc kubenswrapper[4811]: I0122 09:10:40.153712 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 22 09:10:40 crc kubenswrapper[4811]: I0122 09:10:40.163981 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 22 09:10:40 crc kubenswrapper[4811]: I0122 09:10:40.186574 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 22 09:10:40 crc kubenswrapper[4811]: I0122 09:10:40.205059 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 22 09:10:40 crc kubenswrapper[4811]: I0122 09:10:40.618585 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 22 09:10:40 crc kubenswrapper[4811]: I0122 09:10:40.679192 4811 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-session" Jan 22 09:10:40 crc kubenswrapper[4811]: I0122 09:10:40.840368 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 22 09:10:40 crc kubenswrapper[4811]: I0122 09:10:40.887599 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 22 09:10:40 crc kubenswrapper[4811]: I0122 09:10:40.945577 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 22 09:10:41 crc kubenswrapper[4811]: I0122 09:10:41.033590 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 22 09:10:41 crc kubenswrapper[4811]: I0122 09:10:41.045012 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 22 09:10:41 crc kubenswrapper[4811]: I0122 09:10:41.108566 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 22 09:10:41 crc kubenswrapper[4811]: I0122 09:10:41.189499 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 22 09:10:41 crc kubenswrapper[4811]: I0122 09:10:41.226956 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 22 09:10:41 crc kubenswrapper[4811]: I0122 09:10:41.328962 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 22 09:10:41 crc kubenswrapper[4811]: I0122 09:10:41.416241 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 22 09:10:41 crc kubenswrapper[4811]: I0122 09:10:41.420702 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 22 09:10:41 crc kubenswrapper[4811]: I0122 09:10:41.470197 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 22 09:10:41 crc kubenswrapper[4811]: I0122 09:10:41.483137 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 22 09:10:41 crc kubenswrapper[4811]: I0122 09:10:41.506392 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 22 09:10:41 crc kubenswrapper[4811]: I0122 09:10:41.549302 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 22 09:10:41 crc kubenswrapper[4811]: I0122 09:10:41.593403 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 22 09:10:41 crc kubenswrapper[4811]: I0122 09:10:41.857214 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 22 09:10:42 crc kubenswrapper[4811]: I0122 09:10:42.153608 4811 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 22 09:10:42 crc kubenswrapper[4811]: I0122 09:10:42.153824 4811 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 22 09:10:42 crc kubenswrapper[4811]: I0122 09:10:42.281840 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 22 09:10:42 crc kubenswrapper[4811]: I0122 09:10:42.284774 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 22 09:10:42 crc kubenswrapper[4811]: I0122 09:10:42.288738 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 22 09:10:42 crc kubenswrapper[4811]: I0122 09:10:42.362687 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 22 09:10:42 crc kubenswrapper[4811]: I0122 09:10:42.378147 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 22 09:10:42 crc kubenswrapper[4811]: I0122 09:10:42.388932 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 22 09:10:42 crc kubenswrapper[4811]: I0122 09:10:42.412696 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 22 09:10:42 crc kubenswrapper[4811]: I0122 09:10:42.442230 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 22 09:10:42 crc kubenswrapper[4811]: I0122 09:10:42.447283 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 22 09:10:42 crc kubenswrapper[4811]: I0122 09:10:42.500065 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 22 09:10:42 crc kubenswrapper[4811]: I0122 09:10:42.646584 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 22 09:10:42 crc kubenswrapper[4811]: I0122 09:10:42.646588 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 22 09:10:42 crc kubenswrapper[4811]: I0122 09:10:42.849666 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 22 09:10:42 crc kubenswrapper[4811]: I0122 09:10:42.876747 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 22 09:10:42 crc kubenswrapper[4811]: I0122 09:10:42.884214 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 22 09:10:42 crc kubenswrapper[4811]: I0122 09:10:42.958192 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 22 09:10:43 crc kubenswrapper[4811]: I0122 09:10:43.131292 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 22 09:10:43 crc kubenswrapper[4811]: I0122 09:10:43.134742 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 22 09:10:43 crc kubenswrapper[4811]: I0122 09:10:43.254109 4811 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 22 09:10:43 crc kubenswrapper[4811]: I0122 09:10:43.271792 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 22 09:10:43 crc kubenswrapper[4811]: I0122 09:10:43.355467 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 22 09:10:43 crc kubenswrapper[4811]: I0122 09:10:43.416702 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 22 09:10:43 crc kubenswrapper[4811]: I0122 09:10:43.567240 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 22 09:10:43 crc kubenswrapper[4811]: I0122 09:10:43.665595 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 22 09:10:43 crc kubenswrapper[4811]: I0122 09:10:43.684315 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 22 09:10:43 crc kubenswrapper[4811]: I0122 09:10:43.806890 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 22 09:10:43 crc kubenswrapper[4811]: I0122 09:10:43.820431 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 22 09:10:43 crc kubenswrapper[4811]: I0122 09:10:43.882395 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 22 09:10:43 crc kubenswrapper[4811]: I0122 09:10:43.938385 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 22 09:10:43 crc kubenswrapper[4811]: I0122 09:10:43.972788 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 22 09:10:44 crc kubenswrapper[4811]: I0122 09:10:44.313773 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 22 09:10:44 crc kubenswrapper[4811]: I0122 09:10:44.321348 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 22 09:10:44 crc kubenswrapper[4811]: I0122 09:10:44.336315 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 22 09:10:44 crc kubenswrapper[4811]: I0122 09:10:44.495875 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 22 09:10:44 crc kubenswrapper[4811]: I0122 09:10:44.570686 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 22 09:10:44 crc kubenswrapper[4811]: I0122 09:10:44.712899 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 22 09:10:45 crc kubenswrapper[4811]: I0122 09:10:45.324459 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 22 09:10:45 crc kubenswrapper[4811]: I0122 09:10:45.535859 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 22 
09:10:45 crc kubenswrapper[4811]: I0122 09:10:45.557482 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 22 09:10:45 crc kubenswrapper[4811]: I0122 09:10:45.904210 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 22 09:10:46 crc kubenswrapper[4811]: I0122 09:10:46.133034 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 22 09:10:46 crc kubenswrapper[4811]: I0122 09:10:46.152950 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 22 09:10:46 crc kubenswrapper[4811]: I0122 09:10:46.245127 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 22 09:10:46 crc kubenswrapper[4811]: I0122 09:10:46.332077 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 22 09:10:46 crc kubenswrapper[4811]: I0122 09:10:46.471597 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 22 09:10:46 crc kubenswrapper[4811]: I0122 09:10:46.484045 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 22 09:10:46 crc kubenswrapper[4811]: I0122 09:10:46.656807 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 22 09:10:47 crc kubenswrapper[4811]: I0122 09:10:47.103711 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 22 09:10:47 crc kubenswrapper[4811]: I0122 09:10:47.528981 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 22 09:10:47 crc kubenswrapper[4811]: I0122 09:10:47.551709 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 22 09:10:47 crc kubenswrapper[4811]: I0122 09:10:47.874612 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 22 09:10:47 crc kubenswrapper[4811]: I0122 09:10:47.970989 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 22 09:10:48 crc kubenswrapper[4811]: I0122 09:10:48.098688 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 22 09:10:48 crc kubenswrapper[4811]: I0122 09:10:48.326177 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 22 09:10:48 crc kubenswrapper[4811]: I0122 09:10:48.561562 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 22 09:10:55 crc kubenswrapper[4811]: I0122 09:10:55.875892 4811 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 22 09:11:15 crc kubenswrapper[4811]: I0122 09:11:15.459971 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8fflv"] Jan 22 09:11:15 crc kubenswrapper[4811]: I0122 09:11:15.460688 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8fflv" podUID="7285d1e4-79ce-4ada-b15f-b1df68271703" containerName="route-controller-manager" containerID="cri-o://fca687aca6a02eaa20a516a56fbddd1fb23bfc706e76773d5f14be3896f458dc" gracePeriod=30 Jan 22 09:11:15 crc kubenswrapper[4811]: I0122 09:11:15.463335 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vx8k5"] Jan 22 09:11:15 crc kubenswrapper[4811]: I0122 09:11:15.463591 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-vx8k5" podUID="799c1556-92e6-43b8-a620-c7211a2ce813" containerName="controller-manager" containerID="cri-o://4947fc0e89be1195278de1d0c7f0b09d917ac00a8bab7a0e811d5ac66f0e683f" gracePeriod=30 Jan 22 09:11:15 crc kubenswrapper[4811]: I0122 09:11:15.683730 4811 generic.go:334] "Generic (PLEG): container finished" podID="7285d1e4-79ce-4ada-b15f-b1df68271703" containerID="fca687aca6a02eaa20a516a56fbddd1fb23bfc706e76773d5f14be3896f458dc" exitCode=0 Jan 22 09:11:15 crc kubenswrapper[4811]: I0122 09:11:15.684046 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8fflv" event={"ID":"7285d1e4-79ce-4ada-b15f-b1df68271703","Type":"ContainerDied","Data":"fca687aca6a02eaa20a516a56fbddd1fb23bfc706e76773d5f14be3896f458dc"} Jan 22 09:11:15 crc kubenswrapper[4811]: I0122 09:11:15.686539 4811 generic.go:334] "Generic (PLEG): container finished" podID="799c1556-92e6-43b8-a620-c7211a2ce813" containerID="4947fc0e89be1195278de1d0c7f0b09d917ac00a8bab7a0e811d5ac66f0e683f" exitCode=0 Jan 22 09:11:15 crc kubenswrapper[4811]: I0122 09:11:15.686617 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vx8k5" event={"ID":"799c1556-92e6-43b8-a620-c7211a2ce813","Type":"ContainerDied","Data":"4947fc0e89be1195278de1d0c7f0b09d917ac00a8bab7a0e811d5ac66f0e683f"} Jan 22 09:11:15 crc kubenswrapper[4811]: I0122 09:11:15.863368 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vx8k5" Jan 22 09:11:15 crc kubenswrapper[4811]: I0122 09:11:15.916756 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8fflv" Jan 22 09:11:15 crc kubenswrapper[4811]: I0122 09:11:15.991915 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/799c1556-92e6-43b8-a620-c7211a2ce813-proxy-ca-bundles\") pod \"799c1556-92e6-43b8-a620-c7211a2ce813\" (UID: \"799c1556-92e6-43b8-a620-c7211a2ce813\") " Jan 22 09:11:15 crc kubenswrapper[4811]: I0122 09:11:15.992047 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kr89x\" (UniqueName: \"kubernetes.io/projected/799c1556-92e6-43b8-a620-c7211a2ce813-kube-api-access-kr89x\") pod \"799c1556-92e6-43b8-a620-c7211a2ce813\" (UID: \"799c1556-92e6-43b8-a620-c7211a2ce813\") " Jan 22 09:11:15 crc kubenswrapper[4811]: I0122 09:11:15.992087 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/799c1556-92e6-43b8-a620-c7211a2ce813-serving-cert\") pod \"799c1556-92e6-43b8-a620-c7211a2ce813\" (UID: \"799c1556-92e6-43b8-a620-c7211a2ce813\") " Jan 22 09:11:15 crc kubenswrapper[4811]: I0122 09:11:15.992154 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/799c1556-92e6-43b8-a620-c7211a2ce813-config\") pod \"799c1556-92e6-43b8-a620-c7211a2ce813\" (UID: \"799c1556-92e6-43b8-a620-c7211a2ce813\") " Jan 22 09:11:15 crc kubenswrapper[4811]: I0122 09:11:15.992337 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/799c1556-92e6-43b8-a620-c7211a2ce813-client-ca\") pod \"799c1556-92e6-43b8-a620-c7211a2ce813\" (UID: \"799c1556-92e6-43b8-a620-c7211a2ce813\") " Jan 22 09:11:15 crc kubenswrapper[4811]: I0122 09:11:15.992975 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/799c1556-92e6-43b8-a620-c7211a2ce813-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "799c1556-92e6-43b8-a620-c7211a2ce813" (UID: "799c1556-92e6-43b8-a620-c7211a2ce813"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:11:15 crc kubenswrapper[4811]: I0122 09:11:15.993209 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/799c1556-92e6-43b8-a620-c7211a2ce813-client-ca" (OuterVolumeSpecName: "client-ca") pod "799c1556-92e6-43b8-a620-c7211a2ce813" (UID: "799c1556-92e6-43b8-a620-c7211a2ce813"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:11:15 crc kubenswrapper[4811]: I0122 09:11:15.993592 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/799c1556-92e6-43b8-a620-c7211a2ce813-config" (OuterVolumeSpecName: "config") pod "799c1556-92e6-43b8-a620-c7211a2ce813" (UID: "799c1556-92e6-43b8-a620-c7211a2ce813"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:11:15 crc kubenswrapper[4811]: I0122 09:11:15.998705 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/799c1556-92e6-43b8-a620-c7211a2ce813-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "799c1556-92e6-43b8-a620-c7211a2ce813" (UID: "799c1556-92e6-43b8-a620-c7211a2ce813"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:11:15 crc kubenswrapper[4811]: I0122 09:11:15.998776 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/799c1556-92e6-43b8-a620-c7211a2ce813-kube-api-access-kr89x" (OuterVolumeSpecName: "kube-api-access-kr89x") pod "799c1556-92e6-43b8-a620-c7211a2ce813" (UID: "799c1556-92e6-43b8-a620-c7211a2ce813"). InnerVolumeSpecName "kube-api-access-kr89x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:11:16 crc kubenswrapper[4811]: I0122 09:11:16.093850 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7285d1e4-79ce-4ada-b15f-b1df68271703-client-ca\") pod \"7285d1e4-79ce-4ada-b15f-b1df68271703\" (UID: \"7285d1e4-79ce-4ada-b15f-b1df68271703\") " Jan 22 09:11:16 crc kubenswrapper[4811]: I0122 09:11:16.093966 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnz52\" (UniqueName: \"kubernetes.io/projected/7285d1e4-79ce-4ada-b15f-b1df68271703-kube-api-access-lnz52\") pod \"7285d1e4-79ce-4ada-b15f-b1df68271703\" (UID: \"7285d1e4-79ce-4ada-b15f-b1df68271703\") " Jan 22 09:11:16 crc kubenswrapper[4811]: I0122 09:11:16.094601 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7285d1e4-79ce-4ada-b15f-b1df68271703-client-ca" (OuterVolumeSpecName: "client-ca") pod "7285d1e4-79ce-4ada-b15f-b1df68271703" (UID: "7285d1e4-79ce-4ada-b15f-b1df68271703"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:11:16 crc kubenswrapper[4811]: I0122 09:11:16.094710 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7285d1e4-79ce-4ada-b15f-b1df68271703-config\") pod \"7285d1e4-79ce-4ada-b15f-b1df68271703\" (UID: \"7285d1e4-79ce-4ada-b15f-b1df68271703\") " Jan 22 09:11:16 crc kubenswrapper[4811]: I0122 09:11:16.094770 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7285d1e4-79ce-4ada-b15f-b1df68271703-serving-cert\") pod \"7285d1e4-79ce-4ada-b15f-b1df68271703\" (UID: \"7285d1e4-79ce-4ada-b15f-b1df68271703\") " Jan 22 09:11:16 crc kubenswrapper[4811]: I0122 09:11:16.095126 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7285d1e4-79ce-4ada-b15f-b1df68271703-config" (OuterVolumeSpecName: "config") pod "7285d1e4-79ce-4ada-b15f-b1df68271703" (UID: "7285d1e4-79ce-4ada-b15f-b1df68271703"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:11:16 crc kubenswrapper[4811]: I0122 09:11:16.095292 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/799c1556-92e6-43b8-a620-c7211a2ce813-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:11:16 crc kubenswrapper[4811]: I0122 09:11:16.095315 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7285d1e4-79ce-4ada-b15f-b1df68271703-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:11:16 crc kubenswrapper[4811]: I0122 09:11:16.095326 4811 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/799c1556-92e6-43b8-a620-c7211a2ce813-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:11:16 crc kubenswrapper[4811]: I0122 09:11:16.095337 4811 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/799c1556-92e6-43b8-a620-c7211a2ce813-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 09:11:16 crc kubenswrapper[4811]: I0122 09:11:16.095347 4811 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7285d1e4-79ce-4ada-b15f-b1df68271703-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:11:16 crc kubenswrapper[4811]: I0122 09:11:16.095359 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kr89x\" (UniqueName: \"kubernetes.io/projected/799c1556-92e6-43b8-a620-c7211a2ce813-kube-api-access-kr89x\") on node \"crc\" DevicePath \"\"" Jan 22 09:11:16 crc kubenswrapper[4811]: I0122 09:11:16.095370 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/799c1556-92e6-43b8-a620-c7211a2ce813-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:11:16 crc kubenswrapper[4811]: I0122 09:11:16.098329 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7285d1e4-79ce-4ada-b15f-b1df68271703-kube-api-access-lnz52" (OuterVolumeSpecName: "kube-api-access-lnz52") pod "7285d1e4-79ce-4ada-b15f-b1df68271703" (UID: "7285d1e4-79ce-4ada-b15f-b1df68271703"). InnerVolumeSpecName "kube-api-access-lnz52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:11:16 crc kubenswrapper[4811]: I0122 09:11:16.098848 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7285d1e4-79ce-4ada-b15f-b1df68271703-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7285d1e4-79ce-4ada-b15f-b1df68271703" (UID: "7285d1e4-79ce-4ada-b15f-b1df68271703"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:11:16 crc kubenswrapper[4811]: I0122 09:11:16.197540 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnz52\" (UniqueName: \"kubernetes.io/projected/7285d1e4-79ce-4ada-b15f-b1df68271703-kube-api-access-lnz52\") on node \"crc\" DevicePath \"\"" Jan 22 09:11:16 crc kubenswrapper[4811]: I0122 09:11:16.197714 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7285d1e4-79ce-4ada-b15f-b1df68271703-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:11:16 crc kubenswrapper[4811]: I0122 09:11:16.694139 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8fflv" event={"ID":"7285d1e4-79ce-4ada-b15f-b1df68271703","Type":"ContainerDied","Data":"b327810d260818d30ec0ee97db39adfb3a5dc570837740aa00dc841fb3d6ab58"} Jan 22 09:11:16 crc kubenswrapper[4811]: I0122 09:11:16.694207 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8fflv" Jan 22 09:11:16 crc kubenswrapper[4811]: I0122 09:11:16.694220 4811 scope.go:117] "RemoveContainer" containerID="fca687aca6a02eaa20a516a56fbddd1fb23bfc706e76773d5f14be3896f458dc" Jan 22 09:11:16 crc kubenswrapper[4811]: I0122 09:11:16.697244 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vx8k5" event={"ID":"799c1556-92e6-43b8-a620-c7211a2ce813","Type":"ContainerDied","Data":"56e1eda8491da29653d78995ad484dbd526369bccf6f6d2c7947e9726b928d2b"} Jan 22 09:11:16 crc kubenswrapper[4811]: I0122 09:11:16.697354 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vx8k5" Jan 22 09:11:16 crc kubenswrapper[4811]: I0122 09:11:16.714773 4811 scope.go:117] "RemoveContainer" containerID="4947fc0e89be1195278de1d0c7f0b09d917ac00a8bab7a0e811d5ac66f0e683f" Jan 22 09:11:16 crc kubenswrapper[4811]: I0122 09:11:16.720172 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vx8k5"] Jan 22 09:11:16 crc kubenswrapper[4811]: I0122 09:11:16.728917 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vx8k5"] Jan 22 09:11:16 crc kubenswrapper[4811]: I0122 09:11:16.732675 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8fflv"] Jan 22 09:11:16 crc kubenswrapper[4811]: I0122 09:11:16.734683 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8fflv"] Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.437099 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-668dc6f567-2qfm8"] Jan 22 09:11:17 crc kubenswrapper[4811]: E0122 09:11:17.437804 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.437821 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 22 09:11:17 crc kubenswrapper[4811]: E0122 09:11:17.437845 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="799c1556-92e6-43b8-a620-c7211a2ce813" containerName="controller-manager" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.437852 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="799c1556-92e6-43b8-a620-c7211a2ce813" containerName="controller-manager" Jan 22 09:11:17 crc kubenswrapper[4811]: E0122 09:11:17.437863 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7285d1e4-79ce-4ada-b15f-b1df68271703" containerName="route-controller-manager" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.437872 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="7285d1e4-79ce-4ada-b15f-b1df68271703" containerName="route-controller-manager" Jan 22 09:11:17 crc kubenswrapper[4811]: E0122 09:11:17.437884 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="740119c0-67c2-4467-8095-b99b843e9d53" containerName="installer" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.437890 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="740119c0-67c2-4467-8095-b99b843e9d53" containerName="installer" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.438014 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="7285d1e4-79ce-4ada-b15f-b1df68271703" containerName="route-controller-manager" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.438033 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="740119c0-67c2-4467-8095-b99b843e9d53" containerName="installer" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.438047 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.438058 4811 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="799c1556-92e6-43b8-a620-c7211a2ce813" containerName="controller-manager" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.438593 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-668dc6f567-2qfm8" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.440915 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.441406 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.441574 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.441785 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.441863 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.442047 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-f66b4c6db-lz8r5"] Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.442176 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.443709 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f66b4c6db-lz8r5" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.447502 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.447613 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.447666 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.448724 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.448985 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.449049 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.453582 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.457448 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-668dc6f567-2qfm8"] Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.473723 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f66b4c6db-lz8r5"] Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.616439 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b34e03f3-eb2a-4d34-97f6-d68bbea8e581-client-ca\") pod \"route-controller-manager-668dc6f567-2qfm8\" (UID: \"b34e03f3-eb2a-4d34-97f6-d68bbea8e581\") " pod="openshift-route-controller-manager/route-controller-manager-668dc6f567-2qfm8" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.616499 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9dxl\" (UniqueName: \"kubernetes.io/projected/b34e03f3-eb2a-4d34-97f6-d68bbea8e581-kube-api-access-k9dxl\") pod \"route-controller-manager-668dc6f567-2qfm8\" (UID: \"b34e03f3-eb2a-4d34-97f6-d68bbea8e581\") " pod="openshift-route-controller-manager/route-controller-manager-668dc6f567-2qfm8" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.616542 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b34e03f3-eb2a-4d34-97f6-d68bbea8e581-config\") pod \"route-controller-manager-668dc6f567-2qfm8\" (UID: \"b34e03f3-eb2a-4d34-97f6-d68bbea8e581\") " pod="openshift-route-controller-manager/route-controller-manager-668dc6f567-2qfm8" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.616596 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9-config\") pod \"controller-manager-f66b4c6db-lz8r5\" (UID: \"7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9\") " 
pod="openshift-controller-manager/controller-manager-f66b4c6db-lz8r5" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.616666 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9-proxy-ca-bundles\") pod \"controller-manager-f66b4c6db-lz8r5\" (UID: \"7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9\") " pod="openshift-controller-manager/controller-manager-f66b4c6db-lz8r5" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.616688 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9-client-ca\") pod \"controller-manager-f66b4c6db-lz8r5\" (UID: \"7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9\") " pod="openshift-controller-manager/controller-manager-f66b4c6db-lz8r5" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.616706 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dznh\" (UniqueName: \"kubernetes.io/projected/7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9-kube-api-access-4dznh\") pod \"controller-manager-f66b4c6db-lz8r5\" (UID: \"7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9\") " pod="openshift-controller-manager/controller-manager-f66b4c6db-lz8r5" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.616844 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b34e03f3-eb2a-4d34-97f6-d68bbea8e581-serving-cert\") pod \"route-controller-manager-668dc6f567-2qfm8\" (UID: \"b34e03f3-eb2a-4d34-97f6-d68bbea8e581\") " pod="openshift-route-controller-manager/route-controller-manager-668dc6f567-2qfm8" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.616978 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9-serving-cert\") pod \"controller-manager-f66b4c6db-lz8r5\" (UID: \"7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9\") " pod="openshift-controller-manager/controller-manager-f66b4c6db-lz8r5" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.717715 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9-proxy-ca-bundles\") pod \"controller-manager-f66b4c6db-lz8r5\" (UID: \"7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9\") " pod="openshift-controller-manager/controller-manager-f66b4c6db-lz8r5" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.717770 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9-client-ca\") pod \"controller-manager-f66b4c6db-lz8r5\" (UID: \"7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9\") " pod="openshift-controller-manager/controller-manager-f66b4c6db-lz8r5" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.717790 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dznh\" (UniqueName: \"kubernetes.io/projected/7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9-kube-api-access-4dznh\") pod \"controller-manager-f66b4c6db-lz8r5\" (UID: \"7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9\") " pod="openshift-controller-manager/controller-manager-f66b4c6db-lz8r5" Jan 22 
09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.717824 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b34e03f3-eb2a-4d34-97f6-d68bbea8e581-serving-cert\") pod \"route-controller-manager-668dc6f567-2qfm8\" (UID: \"b34e03f3-eb2a-4d34-97f6-d68bbea8e581\") " pod="openshift-route-controller-manager/route-controller-manager-668dc6f567-2qfm8" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.717863 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9-serving-cert\") pod \"controller-manager-f66b4c6db-lz8r5\" (UID: \"7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9\") " pod="openshift-controller-manager/controller-manager-f66b4c6db-lz8r5" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.717917 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b34e03f3-eb2a-4d34-97f6-d68bbea8e581-client-ca\") pod \"route-controller-manager-668dc6f567-2qfm8\" (UID: \"b34e03f3-eb2a-4d34-97f6-d68bbea8e581\") " pod="openshift-route-controller-manager/route-controller-manager-668dc6f567-2qfm8" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.717943 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9dxl\" (UniqueName: \"kubernetes.io/projected/b34e03f3-eb2a-4d34-97f6-d68bbea8e581-kube-api-access-k9dxl\") pod \"route-controller-manager-668dc6f567-2qfm8\" (UID: \"b34e03f3-eb2a-4d34-97f6-d68bbea8e581\") " pod="openshift-route-controller-manager/route-controller-manager-668dc6f567-2qfm8" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.717971 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b34e03f3-eb2a-4d34-97f6-d68bbea8e581-config\") pod \"route-controller-manager-668dc6f567-2qfm8\" (UID: \"b34e03f3-eb2a-4d34-97f6-d68bbea8e581\") " pod="openshift-route-controller-manager/route-controller-manager-668dc6f567-2qfm8" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.717999 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9-config\") pod \"controller-manager-f66b4c6db-lz8r5\" (UID: \"7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9\") " pod="openshift-controller-manager/controller-manager-f66b4c6db-lz8r5" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.718863 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9-client-ca\") pod \"controller-manager-f66b4c6db-lz8r5\" (UID: \"7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9\") " pod="openshift-controller-manager/controller-manager-f66b4c6db-lz8r5" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.719151 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9-proxy-ca-bundles\") pod \"controller-manager-f66b4c6db-lz8r5\" (UID: \"7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9\") " pod="openshift-controller-manager/controller-manager-f66b4c6db-lz8r5" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.719324 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9-config\") pod \"controller-manager-f66b4c6db-lz8r5\" (UID: \"7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9\") " pod="openshift-controller-manager/controller-manager-f66b4c6db-lz8r5" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.719463 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b34e03f3-eb2a-4d34-97f6-d68bbea8e581-client-ca\") pod \"route-controller-manager-668dc6f567-2qfm8\" (UID: \"b34e03f3-eb2a-4d34-97f6-d68bbea8e581\") " pod="openshift-route-controller-manager/route-controller-manager-668dc6f567-2qfm8" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.720118 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b34e03f3-eb2a-4d34-97f6-d68bbea8e581-config\") pod \"route-controller-manager-668dc6f567-2qfm8\" (UID: \"b34e03f3-eb2a-4d34-97f6-d68bbea8e581\") " pod="openshift-route-controller-manager/route-controller-manager-668dc6f567-2qfm8" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.726621 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b34e03f3-eb2a-4d34-97f6-d68bbea8e581-serving-cert\") pod \"route-controller-manager-668dc6f567-2qfm8\" (UID: \"b34e03f3-eb2a-4d34-97f6-d68bbea8e581\") " pod="openshift-route-controller-manager/route-controller-manager-668dc6f567-2qfm8" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.728144 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9-serving-cert\") pod \"controller-manager-f66b4c6db-lz8r5\" (UID: \"7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9\") " pod="openshift-controller-manager/controller-manager-f66b4c6db-lz8r5" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.733120 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dznh\" (UniqueName: \"kubernetes.io/projected/7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9-kube-api-access-4dznh\") pod \"controller-manager-f66b4c6db-lz8r5\" (UID: \"7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9\") " pod="openshift-controller-manager/controller-manager-f66b4c6db-lz8r5" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.735728 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9dxl\" (UniqueName: \"kubernetes.io/projected/b34e03f3-eb2a-4d34-97f6-d68bbea8e581-kube-api-access-k9dxl\") pod \"route-controller-manager-668dc6f567-2qfm8\" (UID: \"b34e03f3-eb2a-4d34-97f6-d68bbea8e581\") " pod="openshift-route-controller-manager/route-controller-manager-668dc6f567-2qfm8" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.758696 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-668dc6f567-2qfm8" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.766523 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f66b4c6db-lz8r5" Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.915272 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-668dc6f567-2qfm8"] Jan 22 09:11:17 crc kubenswrapper[4811]: I0122 09:11:17.955770 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f66b4c6db-lz8r5"] Jan 22 09:11:17 crc kubenswrapper[4811]: W0122 09:11:17.965142 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a83fa59_1db2_4185_bd9c_bffdb9ccb6e9.slice/crio-9e8912f0d36c38082437c852f294205b74b0b4aae92be771ffb51b4081bca4af WatchSource:0}: Error finding container 9e8912f0d36c38082437c852f294205b74b0b4aae92be771ffb51b4081bca4af: Status 404 returned error can't find the container with id 9e8912f0d36c38082437c852f294205b74b0b4aae92be771ffb51b4081bca4af Jan 22 09:11:18 crc kubenswrapper[4811]: I0122 09:11:18.002603 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7285d1e4-79ce-4ada-b15f-b1df68271703" path="/var/lib/kubelet/pods/7285d1e4-79ce-4ada-b15f-b1df68271703/volumes" Jan 22 09:11:18 crc kubenswrapper[4811]: I0122 09:11:18.003760 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="799c1556-92e6-43b8-a620-c7211a2ce813" path="/var/lib/kubelet/pods/799c1556-92e6-43b8-a620-c7211a2ce813/volumes" Jan 22 09:11:18 crc kubenswrapper[4811]: I0122 09:11:18.712522 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-668dc6f567-2qfm8" event={"ID":"b34e03f3-eb2a-4d34-97f6-d68bbea8e581","Type":"ContainerStarted","Data":"3287bcfd604ff530a3cced363fc82eb8f7b79a06335d209d8bcdea4d8c7c0fb4"} Jan 22 09:11:18 crc kubenswrapper[4811]: I0122 09:11:18.712981 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-668dc6f567-2qfm8" event={"ID":"b34e03f3-eb2a-4d34-97f6-d68bbea8e581","Type":"ContainerStarted","Data":"79824c9379fdf2117d80ec93ee725898ccf434dbe8917ebc50e853838bde7210"} Jan 22 09:11:18 crc kubenswrapper[4811]: I0122 09:11:18.713007 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-668dc6f567-2qfm8" Jan 22 09:11:18 crc kubenswrapper[4811]: I0122 09:11:18.717265 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f66b4c6db-lz8r5" event={"ID":"7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9","Type":"ContainerStarted","Data":"bcd709951ba899cbb36fd1c4e52837dac206409ec23c81c94ed6fb9a51c4da6b"} Jan 22 09:11:18 crc kubenswrapper[4811]: I0122 09:11:18.717296 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f66b4c6db-lz8r5" event={"ID":"7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9","Type":"ContainerStarted","Data":"9e8912f0d36c38082437c852f294205b74b0b4aae92be771ffb51b4081bca4af"} Jan 22 09:11:18 crc kubenswrapper[4811]: I0122 09:11:18.717869 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-f66b4c6db-lz8r5" Jan 22 09:11:18 crc kubenswrapper[4811]: I0122 09:11:18.719068 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-668dc6f567-2qfm8" Jan 22 
09:11:18 crc kubenswrapper[4811]: I0122 09:11:18.722068 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-f66b4c6db-lz8r5" Jan 22 09:11:18 crc kubenswrapper[4811]: I0122 09:11:18.744076 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-668dc6f567-2qfm8" podStartSLOduration=3.744048351 podStartE2EDuration="3.744048351s" podCreationTimestamp="2026-01-22 09:11:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:11:18.735788253 +0000 UTC m=+323.057975377" watchObservedRunningTime="2026-01-22 09:11:18.744048351 +0000 UTC m=+323.066235474" Jan 22 09:11:18 crc kubenswrapper[4811]: I0122 09:11:18.758253 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-f66b4c6db-lz8r5" podStartSLOduration=3.758227883 podStartE2EDuration="3.758227883s" podCreationTimestamp="2026-01-22 09:11:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:11:18.750963162 +0000 UTC m=+323.073150285" watchObservedRunningTime="2026-01-22 09:11:18.758227883 +0000 UTC m=+323.080415005" Jan 22 09:11:35 crc kubenswrapper[4811]: I0122 09:11:35.502301 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:11:35 crc kubenswrapper[4811]: I0122 09:11:35.502992 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:11:44 crc kubenswrapper[4811]: I0122 09:11:44.241475 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-4767d"] Jan 22 09:11:44 crc kubenswrapper[4811]: I0122 09:11:44.243083 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-4767d" Jan 22 09:11:44 crc kubenswrapper[4811]: I0122 09:11:44.257773 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-4767d"] Jan 22 09:11:44 crc kubenswrapper[4811]: I0122 09:11:44.358329 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/91b0b3ab-d741-4df2-b902-e4c43d5d6e29-ca-trust-extracted\") pod \"image-registry-66df7c8f76-4767d\" (UID: \"91b0b3ab-d741-4df2-b902-e4c43d5d6e29\") " pod="openshift-image-registry/image-registry-66df7c8f76-4767d" Jan 22 09:11:44 crc kubenswrapper[4811]: I0122 09:11:44.358391 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-4767d\" (UID: \"91b0b3ab-d741-4df2-b902-e4c43d5d6e29\") " pod="openshift-image-registry/image-registry-66df7c8f76-4767d" Jan 22 09:11:44 crc kubenswrapper[4811]: I0122 09:11:44.358472 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/91b0b3ab-d741-4df2-b902-e4c43d5d6e29-bound-sa-token\") pod \"image-registry-66df7c8f76-4767d\" (UID: \"91b0b3ab-d741-4df2-b902-e4c43d5d6e29\") " pod="openshift-image-registry/image-registry-66df7c8f76-4767d" Jan 22 09:11:44 crc kubenswrapper[4811]: I0122 09:11:44.358518 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/91b0b3ab-d741-4df2-b902-e4c43d5d6e29-trusted-ca\") pod \"image-registry-66df7c8f76-4767d\" (UID: \"91b0b3ab-d741-4df2-b902-e4c43d5d6e29\") " pod="openshift-image-registry/image-registry-66df7c8f76-4767d" Jan 22 09:11:44 crc kubenswrapper[4811]: I0122 09:11:44.358541 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd2f5\" (UniqueName: \"kubernetes.io/projected/91b0b3ab-d741-4df2-b902-e4c43d5d6e29-kube-api-access-kd2f5\") pod \"image-registry-66df7c8f76-4767d\" (UID: \"91b0b3ab-d741-4df2-b902-e4c43d5d6e29\") " pod="openshift-image-registry/image-registry-66df7c8f76-4767d" Jan 22 09:11:44 crc kubenswrapper[4811]: I0122 09:11:44.358614 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/91b0b3ab-d741-4df2-b902-e4c43d5d6e29-registry-certificates\") pod \"image-registry-66df7c8f76-4767d\" (UID: \"91b0b3ab-d741-4df2-b902-e4c43d5d6e29\") " pod="openshift-image-registry/image-registry-66df7c8f76-4767d" Jan 22 09:11:44 crc kubenswrapper[4811]: I0122 09:11:44.358718 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/91b0b3ab-d741-4df2-b902-e4c43d5d6e29-registry-tls\") pod \"image-registry-66df7c8f76-4767d\" (UID: \"91b0b3ab-d741-4df2-b902-e4c43d5d6e29\") " pod="openshift-image-registry/image-registry-66df7c8f76-4767d" Jan 22 09:11:44 crc kubenswrapper[4811]: I0122 09:11:44.358734 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/91b0b3ab-d741-4df2-b902-e4c43d5d6e29-installation-pull-secrets\") pod \"image-registry-66df7c8f76-4767d\" (UID: \"91b0b3ab-d741-4df2-b902-e4c43d5d6e29\") " pod="openshift-image-registry/image-registry-66df7c8f76-4767d" Jan 22 09:11:44 crc kubenswrapper[4811]: I0122 09:11:44.398331 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-4767d\" (UID: \"91b0b3ab-d741-4df2-b902-e4c43d5d6e29\") " pod="openshift-image-registry/image-registry-66df7c8f76-4767d" Jan 22 09:11:44 crc kubenswrapper[4811]: I0122 09:11:44.460558 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/91b0b3ab-d741-4df2-b902-e4c43d5d6e29-registry-tls\") pod \"image-registry-66df7c8f76-4767d\" (UID: \"91b0b3ab-d741-4df2-b902-e4c43d5d6e29\") " pod="openshift-image-registry/image-registry-66df7c8f76-4767d" Jan 22 09:11:44 crc kubenswrapper[4811]: I0122 09:11:44.460607 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/91b0b3ab-d741-4df2-b902-e4c43d5d6e29-installation-pull-secrets\") pod \"image-registry-66df7c8f76-4767d\" (UID: \"91b0b3ab-d741-4df2-b902-e4c43d5d6e29\") " pod="openshift-image-registry/image-registry-66df7c8f76-4767d" Jan 22 09:11:44 crc kubenswrapper[4811]: I0122 09:11:44.460701 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/91b0b3ab-d741-4df2-b902-e4c43d5d6e29-ca-trust-extracted\") pod \"image-registry-66df7c8f76-4767d\" (UID: \"91b0b3ab-d741-4df2-b902-e4c43d5d6e29\") " pod="openshift-image-registry/image-registry-66df7c8f76-4767d" Jan 22 09:11:44 crc kubenswrapper[4811]: I0122 09:11:44.460743 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/91b0b3ab-d741-4df2-b902-e4c43d5d6e29-bound-sa-token\") pod \"image-registry-66df7c8f76-4767d\" (UID: \"91b0b3ab-d741-4df2-b902-e4c43d5d6e29\") " pod="openshift-image-registry/image-registry-66df7c8f76-4767d" Jan 22 09:11:44 crc kubenswrapper[4811]: I0122 09:11:44.460764 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/91b0b3ab-d741-4df2-b902-e4c43d5d6e29-trusted-ca\") pod \"image-registry-66df7c8f76-4767d\" (UID: \"91b0b3ab-d741-4df2-b902-e4c43d5d6e29\") " pod="openshift-image-registry/image-registry-66df7c8f76-4767d" Jan 22 09:11:44 crc kubenswrapper[4811]: I0122 09:11:44.460786 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd2f5\" (UniqueName: \"kubernetes.io/projected/91b0b3ab-d741-4df2-b902-e4c43d5d6e29-kube-api-access-kd2f5\") pod \"image-registry-66df7c8f76-4767d\" (UID: \"91b0b3ab-d741-4df2-b902-e4c43d5d6e29\") " pod="openshift-image-registry/image-registry-66df7c8f76-4767d" Jan 22 09:11:44 crc kubenswrapper[4811]: I0122 09:11:44.460823 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/91b0b3ab-d741-4df2-b902-e4c43d5d6e29-registry-certificates\") pod \"image-registry-66df7c8f76-4767d\" (UID: \"91b0b3ab-d741-4df2-b902-e4c43d5d6e29\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-4767d" Jan 22 09:11:44 crc kubenswrapper[4811]: I0122 09:11:44.461982 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/91b0b3ab-d741-4df2-b902-e4c43d5d6e29-registry-certificates\") pod \"image-registry-66df7c8f76-4767d\" (UID: \"91b0b3ab-d741-4df2-b902-e4c43d5d6e29\") " pod="openshift-image-registry/image-registry-66df7c8f76-4767d" Jan 22 09:11:44 crc kubenswrapper[4811]: I0122 09:11:44.463275 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/91b0b3ab-d741-4df2-b902-e4c43d5d6e29-ca-trust-extracted\") pod \"image-registry-66df7c8f76-4767d\" (UID: \"91b0b3ab-d741-4df2-b902-e4c43d5d6e29\") " pod="openshift-image-registry/image-registry-66df7c8f76-4767d" Jan 22 09:11:44 crc kubenswrapper[4811]: I0122 09:11:44.463603 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/91b0b3ab-d741-4df2-b902-e4c43d5d6e29-trusted-ca\") pod \"image-registry-66df7c8f76-4767d\" (UID: \"91b0b3ab-d741-4df2-b902-e4c43d5d6e29\") " pod="openshift-image-registry/image-registry-66df7c8f76-4767d" Jan 22 09:11:44 crc kubenswrapper[4811]: I0122 09:11:44.468807 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/91b0b3ab-d741-4df2-b902-e4c43d5d6e29-installation-pull-secrets\") pod \"image-registry-66df7c8f76-4767d\" (UID: \"91b0b3ab-d741-4df2-b902-e4c43d5d6e29\") " pod="openshift-image-registry/image-registry-66df7c8f76-4767d" Jan 22 09:11:44 crc kubenswrapper[4811]: I0122 09:11:44.474713 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/91b0b3ab-d741-4df2-b902-e4c43d5d6e29-registry-tls\") pod \"image-registry-66df7c8f76-4767d\" (UID: \"91b0b3ab-d741-4df2-b902-e4c43d5d6e29\") " pod="openshift-image-registry/image-registry-66df7c8f76-4767d" Jan 22 09:11:44 crc kubenswrapper[4811]: I0122 09:11:44.487619 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/91b0b3ab-d741-4df2-b902-e4c43d5d6e29-bound-sa-token\") pod \"image-registry-66df7c8f76-4767d\" (UID: \"91b0b3ab-d741-4df2-b902-e4c43d5d6e29\") " pod="openshift-image-registry/image-registry-66df7c8f76-4767d" Jan 22 09:11:44 crc kubenswrapper[4811]: I0122 09:11:44.487921 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd2f5\" (UniqueName: \"kubernetes.io/projected/91b0b3ab-d741-4df2-b902-e4c43d5d6e29-kube-api-access-kd2f5\") pod \"image-registry-66df7c8f76-4767d\" (UID: \"91b0b3ab-d741-4df2-b902-e4c43d5d6e29\") " pod="openshift-image-registry/image-registry-66df7c8f76-4767d" Jan 22 09:11:44 crc kubenswrapper[4811]: I0122 09:11:44.568523 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-4767d" Jan 22 09:11:44 crc kubenswrapper[4811]: I0122 09:11:44.950151 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-4767d"] Jan 22 09:11:45 crc kubenswrapper[4811]: I0122 09:11:45.876888 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-4767d" event={"ID":"91b0b3ab-d741-4df2-b902-e4c43d5d6e29","Type":"ContainerStarted","Data":"4870b076673c74f39bd937383a57757ca6fc28bdd0ef30afa5df3d1cc9e278f9"} Jan 22 09:11:45 crc kubenswrapper[4811]: I0122 09:11:45.877247 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-4767d" event={"ID":"91b0b3ab-d741-4df2-b902-e4c43d5d6e29","Type":"ContainerStarted","Data":"239f5a4db79840c769ad2eba4b1622ee8cc7edd75dc258ddc5688e23457e5e73"} Jan 22 09:11:45 crc kubenswrapper[4811]: I0122 09:11:45.877356 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-4767d" Jan 22 09:11:45 crc kubenswrapper[4811]: I0122 09:11:45.893859 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-4767d" podStartSLOduration=1.893844606 podStartE2EDuration="1.893844606s" podCreationTimestamp="2026-01-22 09:11:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:11:45.891373767 +0000 UTC m=+350.213560891" watchObservedRunningTime="2026-01-22 09:11:45.893844606 +0000 UTC m=+350.216031729" Jan 22 09:12:04 crc kubenswrapper[4811]: I0122 09:12:04.573739 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-4767d" Jan 22 09:12:04 crc kubenswrapper[4811]: I0122 09:12:04.637199 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nczwv"] Jan 22 09:12:05 crc kubenswrapper[4811]: I0122 09:12:05.501365 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:12:05 crc kubenswrapper[4811]: I0122 09:12:05.501896 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:12:15 crc kubenswrapper[4811]: I0122 09:12:15.446593 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f66b4c6db-lz8r5"] Jan 22 09:12:15 crc kubenswrapper[4811]: I0122 09:12:15.447279 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-f66b4c6db-lz8r5" podUID="7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9" containerName="controller-manager" containerID="cri-o://bcd709951ba899cbb36fd1c4e52837dac206409ec23c81c94ed6fb9a51c4da6b" gracePeriod=30 Jan 22 09:12:15 crc kubenswrapper[4811]: I0122 09:12:15.805497 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f66b4c6db-lz8r5" Jan 22 09:12:15 crc kubenswrapper[4811]: I0122 09:12:15.872153 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dznh\" (UniqueName: \"kubernetes.io/projected/7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9-kube-api-access-4dznh\") pod \"7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9\" (UID: \"7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9\") " Jan 22 09:12:15 crc kubenswrapper[4811]: I0122 09:12:15.872233 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9-proxy-ca-bundles\") pod \"7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9\" (UID: \"7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9\") " Jan 22 09:12:15 crc kubenswrapper[4811]: I0122 09:12:15.872314 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9-serving-cert\") pod \"7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9\" (UID: \"7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9\") " Jan 22 09:12:15 crc kubenswrapper[4811]: I0122 09:12:15.872348 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9-config\") pod \"7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9\" (UID: \"7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9\") " Jan 22 09:12:15 crc kubenswrapper[4811]: I0122 09:12:15.872438 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9-client-ca\") pod \"7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9\" (UID: \"7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9\") " Jan 22 09:12:15 crc kubenswrapper[4811]: I0122 09:12:15.873436 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9-client-ca" (OuterVolumeSpecName: "client-ca") pod "7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9" (UID: "7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:12:15 crc kubenswrapper[4811]: I0122 09:12:15.873489 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9" (UID: "7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:12:15 crc kubenswrapper[4811]: I0122 09:12:15.873929 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9-config" (OuterVolumeSpecName: "config") pod "7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9" (UID: "7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:12:15 crc kubenswrapper[4811]: I0122 09:12:15.878833 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9-kube-api-access-4dznh" (OuterVolumeSpecName: "kube-api-access-4dznh") pod "7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9" (UID: "7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9"). InnerVolumeSpecName "kube-api-access-4dznh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:12:15 crc kubenswrapper[4811]: I0122 09:12:15.878836 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9" (UID: "7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:12:15 crc kubenswrapper[4811]: I0122 09:12:15.974820 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dznh\" (UniqueName: \"kubernetes.io/projected/7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9-kube-api-access-4dznh\") on node \"crc\" DevicePath \"\"" Jan 22 09:12:15 crc kubenswrapper[4811]: I0122 09:12:15.974857 4811 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 09:12:15 crc kubenswrapper[4811]: I0122 09:12:15.974869 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:12:15 crc kubenswrapper[4811]: I0122 09:12:15.974880 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:12:15 crc kubenswrapper[4811]: I0122 09:12:15.974891 4811 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:12:16 crc kubenswrapper[4811]: I0122 09:12:16.051358 4811 generic.go:334] "Generic (PLEG): container finished" podID="7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9" containerID="bcd709951ba899cbb36fd1c4e52837dac206409ec23c81c94ed6fb9a51c4da6b" exitCode=0 Jan 22 09:12:16 crc kubenswrapper[4811]: I0122 09:12:16.051413 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f66b4c6db-lz8r5" event={"ID":"7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9","Type":"ContainerDied","Data":"bcd709951ba899cbb36fd1c4e52837dac206409ec23c81c94ed6fb9a51c4da6b"} Jan 22 09:12:16 crc kubenswrapper[4811]: I0122 09:12:16.051423 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f66b4c6db-lz8r5" Jan 22 09:12:16 crc kubenswrapper[4811]: I0122 09:12:16.051455 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f66b4c6db-lz8r5" event={"ID":"7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9","Type":"ContainerDied","Data":"9e8912f0d36c38082437c852f294205b74b0b4aae92be771ffb51b4081bca4af"} Jan 22 09:12:16 crc kubenswrapper[4811]: I0122 09:12:16.051475 4811 scope.go:117] "RemoveContainer" containerID="bcd709951ba899cbb36fd1c4e52837dac206409ec23c81c94ed6fb9a51c4da6b" Jan 22 09:12:16 crc kubenswrapper[4811]: I0122 09:12:16.069033 4811 scope.go:117] "RemoveContainer" containerID="bcd709951ba899cbb36fd1c4e52837dac206409ec23c81c94ed6fb9a51c4da6b" Jan 22 09:12:16 crc kubenswrapper[4811]: E0122 09:12:16.072501 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcd709951ba899cbb36fd1c4e52837dac206409ec23c81c94ed6fb9a51c4da6b\": container with ID starting with bcd709951ba899cbb36fd1c4e52837dac206409ec23c81c94ed6fb9a51c4da6b not found: ID does not exist" containerID="bcd709951ba899cbb36fd1c4e52837dac206409ec23c81c94ed6fb9a51c4da6b" Jan 22 09:12:16 crc kubenswrapper[4811]: I0122 09:12:16.072534 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcd709951ba899cbb36fd1c4e52837dac206409ec23c81c94ed6fb9a51c4da6b"} err="failed to get container status \"bcd709951ba899cbb36fd1c4e52837dac206409ec23c81c94ed6fb9a51c4da6b\": rpc error: code = NotFound desc = could not find container \"bcd709951ba899cbb36fd1c4e52837dac206409ec23c81c94ed6fb9a51c4da6b\": container with ID starting with bcd709951ba899cbb36fd1c4e52837dac206409ec23c81c94ed6fb9a51c4da6b not found: ID does not exist" Jan 22 09:12:16 crc kubenswrapper[4811]: I0122 09:12:16.076815 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f66b4c6db-lz8r5"] Jan 22 09:12:16 crc kubenswrapper[4811]: I0122 09:12:16.079829 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-f66b4c6db-lz8r5"] Jan 22 09:12:16 crc kubenswrapper[4811]: I0122 09:12:16.473560 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5c495bff4f-bbbmx"] Jan 22 09:12:16 crc kubenswrapper[4811]: E0122 09:12:16.473905 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9" containerName="controller-manager" Jan 22 09:12:16 crc kubenswrapper[4811]: I0122 09:12:16.473925 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9" containerName="controller-manager" Jan 22 09:12:16 crc kubenswrapper[4811]: I0122 09:12:16.474067 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9" containerName="controller-manager" Jan 22 09:12:16 crc kubenswrapper[4811]: I0122 09:12:16.474713 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5c495bff4f-bbbmx" Jan 22 09:12:16 crc kubenswrapper[4811]: I0122 09:12:16.478931 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 22 09:12:16 crc kubenswrapper[4811]: I0122 09:12:16.479406 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 22 09:12:16 crc kubenswrapper[4811]: I0122 09:12:16.479752 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 22 09:12:16 crc kubenswrapper[4811]: I0122 09:12:16.479931 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 22 09:12:16 crc kubenswrapper[4811]: I0122 09:12:16.479949 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 22 09:12:16 crc kubenswrapper[4811]: I0122 09:12:16.480070 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 22 09:12:16 crc kubenswrapper[4811]: I0122 09:12:16.483801 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c495bff4f-bbbmx"] Jan 22 09:12:16 crc kubenswrapper[4811]: I0122 09:12:16.485173 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 22 09:12:16 crc kubenswrapper[4811]: I0122 09:12:16.581993 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/265f8f19-d7f9-407d-80d2-1da71a38572f-client-ca\") pod \"controller-manager-5c495bff4f-bbbmx\" (UID: \"265f8f19-d7f9-407d-80d2-1da71a38572f\") " pod="openshift-controller-manager/controller-manager-5c495bff4f-bbbmx" Jan 22 09:12:16 crc kubenswrapper[4811]: I0122 09:12:16.582116 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/265f8f19-d7f9-407d-80d2-1da71a38572f-config\") pod \"controller-manager-5c495bff4f-bbbmx\" (UID: \"265f8f19-d7f9-407d-80d2-1da71a38572f\") " pod="openshift-controller-manager/controller-manager-5c495bff4f-bbbmx" Jan 22 09:12:16 crc kubenswrapper[4811]: I0122 09:12:16.582161 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tdrf\" (UniqueName: \"kubernetes.io/projected/265f8f19-d7f9-407d-80d2-1da71a38572f-kube-api-access-5tdrf\") pod \"controller-manager-5c495bff4f-bbbmx\" (UID: \"265f8f19-d7f9-407d-80d2-1da71a38572f\") " pod="openshift-controller-manager/controller-manager-5c495bff4f-bbbmx" Jan 22 09:12:16 crc kubenswrapper[4811]: I0122 09:12:16.582313 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/265f8f19-d7f9-407d-80d2-1da71a38572f-serving-cert\") pod \"controller-manager-5c495bff4f-bbbmx\" (UID: \"265f8f19-d7f9-407d-80d2-1da71a38572f\") " pod="openshift-controller-manager/controller-manager-5c495bff4f-bbbmx" Jan 22 09:12:16 crc kubenswrapper[4811]: I0122 09:12:16.582374 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/265f8f19-d7f9-407d-80d2-1da71a38572f-proxy-ca-bundles\") pod \"controller-manager-5c495bff4f-bbbmx\" (UID: \"265f8f19-d7f9-407d-80d2-1da71a38572f\") " pod="openshift-controller-manager/controller-manager-5c495bff4f-bbbmx" Jan 22 09:12:16 crc kubenswrapper[4811]: I0122 09:12:16.683072 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/265f8f19-d7f9-407d-80d2-1da71a38572f-config\") pod \"controller-manager-5c495bff4f-bbbmx\" (UID: \"265f8f19-d7f9-407d-80d2-1da71a38572f\") " pod="openshift-controller-manager/controller-manager-5c495bff4f-bbbmx" Jan 22 09:12:16 crc kubenswrapper[4811]: I0122 09:12:16.683131 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tdrf\" (UniqueName: \"kubernetes.io/projected/265f8f19-d7f9-407d-80d2-1da71a38572f-kube-api-access-5tdrf\") pod \"controller-manager-5c495bff4f-bbbmx\" (UID: \"265f8f19-d7f9-407d-80d2-1da71a38572f\") " pod="openshift-controller-manager/controller-manager-5c495bff4f-bbbmx" Jan 22 09:12:16 crc kubenswrapper[4811]: I0122 09:12:16.683171 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/265f8f19-d7f9-407d-80d2-1da71a38572f-serving-cert\") pod \"controller-manager-5c495bff4f-bbbmx\" (UID: \"265f8f19-d7f9-407d-80d2-1da71a38572f\") " pod="openshift-controller-manager/controller-manager-5c495bff4f-bbbmx" Jan 22 09:12:16 crc kubenswrapper[4811]: I0122 09:12:16.683207 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/265f8f19-d7f9-407d-80d2-1da71a38572f-proxy-ca-bundles\") pod \"controller-manager-5c495bff4f-bbbmx\" (UID: \"265f8f19-d7f9-407d-80d2-1da71a38572f\") " pod="openshift-controller-manager/controller-manager-5c495bff4f-bbbmx" Jan 22 09:12:16 crc kubenswrapper[4811]: I0122 09:12:16.683251 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/265f8f19-d7f9-407d-80d2-1da71a38572f-client-ca\") pod \"controller-manager-5c495bff4f-bbbmx\" (UID: \"265f8f19-d7f9-407d-80d2-1da71a38572f\") " pod="openshift-controller-manager/controller-manager-5c495bff4f-bbbmx" Jan 22 09:12:16 crc kubenswrapper[4811]: I0122 09:12:16.684477 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/265f8f19-d7f9-407d-80d2-1da71a38572f-client-ca\") pod \"controller-manager-5c495bff4f-bbbmx\" (UID: \"265f8f19-d7f9-407d-80d2-1da71a38572f\") " pod="openshift-controller-manager/controller-manager-5c495bff4f-bbbmx" Jan 22 09:12:16 crc kubenswrapper[4811]: I0122 09:12:16.684766 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/265f8f19-d7f9-407d-80d2-1da71a38572f-config\") pod \"controller-manager-5c495bff4f-bbbmx\" (UID: \"265f8f19-d7f9-407d-80d2-1da71a38572f\") " pod="openshift-controller-manager/controller-manager-5c495bff4f-bbbmx" Jan 22 09:12:16 crc kubenswrapper[4811]: I0122 09:12:16.684899 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/265f8f19-d7f9-407d-80d2-1da71a38572f-proxy-ca-bundles\") pod \"controller-manager-5c495bff4f-bbbmx\" (UID: \"265f8f19-d7f9-407d-80d2-1da71a38572f\") " 
pod="openshift-controller-manager/controller-manager-5c495bff4f-bbbmx" Jan 22 09:12:16 crc kubenswrapper[4811]: I0122 09:12:16.688293 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/265f8f19-d7f9-407d-80d2-1da71a38572f-serving-cert\") pod \"controller-manager-5c495bff4f-bbbmx\" (UID: \"265f8f19-d7f9-407d-80d2-1da71a38572f\") " pod="openshift-controller-manager/controller-manager-5c495bff4f-bbbmx" Jan 22 09:12:16 crc kubenswrapper[4811]: I0122 09:12:16.699060 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tdrf\" (UniqueName: \"kubernetes.io/projected/265f8f19-d7f9-407d-80d2-1da71a38572f-kube-api-access-5tdrf\") pod \"controller-manager-5c495bff4f-bbbmx\" (UID: \"265f8f19-d7f9-407d-80d2-1da71a38572f\") " pod="openshift-controller-manager/controller-manager-5c495bff4f-bbbmx" Jan 22 09:12:16 crc kubenswrapper[4811]: I0122 09:12:16.791357 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c495bff4f-bbbmx" Jan 22 09:12:17 crc kubenswrapper[4811]: I0122 09:12:17.151428 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c495bff4f-bbbmx"] Jan 22 09:12:17 crc kubenswrapper[4811]: I0122 09:12:17.999416 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9" path="/var/lib/kubelet/pods/7a83fa59-1db2-4185-bd9c-bffdb9ccb6e9/volumes" Jan 22 09:12:18 crc kubenswrapper[4811]: I0122 09:12:18.067026 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c495bff4f-bbbmx" event={"ID":"265f8f19-d7f9-407d-80d2-1da71a38572f","Type":"ContainerStarted","Data":"0da03514dd7b7250a996689b7a0cc94c93b08bc41049706bcc3033d327d260f1"} Jan 22 09:12:18 crc kubenswrapper[4811]: I0122 09:12:18.067109 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c495bff4f-bbbmx" event={"ID":"265f8f19-d7f9-407d-80d2-1da71a38572f","Type":"ContainerStarted","Data":"57a7ec7475db4d8232a6a07b4a505ee018715d150adc049721d1ecbae3bd508a"} Jan 22 09:12:18 crc kubenswrapper[4811]: I0122 09:12:18.067469 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5c495bff4f-bbbmx" Jan 22 09:12:18 crc kubenswrapper[4811]: I0122 09:12:18.072737 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5c495bff4f-bbbmx" Jan 22 09:12:18 crc kubenswrapper[4811]: I0122 09:12:18.086999 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5c495bff4f-bbbmx" podStartSLOduration=3.086979899 podStartE2EDuration="3.086979899s" podCreationTimestamp="2026-01-22 09:12:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:12:18.083353911 +0000 UTC m=+382.405541034" watchObservedRunningTime="2026-01-22 09:12:18.086979899 +0000 UTC m=+382.409167022" Jan 22 09:12:29 crc kubenswrapper[4811]: I0122 09:12:29.675331 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" podUID="73533561-14fb-4481-872e-1b47096f9d30" containerName="registry" 
containerID="cri-o://1ddd3eb570b4a5eb2e92fcdddf21609cdab4e03742dcecfaef66d5702ce2f3db" gracePeriod=30 Jan 22 09:12:30 crc kubenswrapper[4811]: I0122 09:12:30.078724 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:12:30 crc kubenswrapper[4811]: I0122 09:12:30.124720 4811 generic.go:334] "Generic (PLEG): container finished" podID="73533561-14fb-4481-872e-1b47096f9d30" containerID="1ddd3eb570b4a5eb2e92fcdddf21609cdab4e03742dcecfaef66d5702ce2f3db" exitCode=0 Jan 22 09:12:30 crc kubenswrapper[4811]: I0122 09:12:30.124775 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" event={"ID":"73533561-14fb-4481-872e-1b47096f9d30","Type":"ContainerDied","Data":"1ddd3eb570b4a5eb2e92fcdddf21609cdab4e03742dcecfaef66d5702ce2f3db"} Jan 22 09:12:30 crc kubenswrapper[4811]: I0122 09:12:30.124809 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" event={"ID":"73533561-14fb-4481-872e-1b47096f9d30","Type":"ContainerDied","Data":"a74b21ff1721c8e4857c6574b62956e4c1b0fcd0dc4d85bfeb59292760b815f4"} Jan 22 09:12:30 crc kubenswrapper[4811]: I0122 09:12:30.124827 4811 scope.go:117] "RemoveContainer" containerID="1ddd3eb570b4a5eb2e92fcdddf21609cdab4e03742dcecfaef66d5702ce2f3db" Jan 22 09:12:30 crc kubenswrapper[4811]: I0122 09:12:30.124778 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-nczwv" Jan 22 09:12:30 crc kubenswrapper[4811]: I0122 09:12:30.139000 4811 scope.go:117] "RemoveContainer" containerID="1ddd3eb570b4a5eb2e92fcdddf21609cdab4e03742dcecfaef66d5702ce2f3db" Jan 22 09:12:30 crc kubenswrapper[4811]: E0122 09:12:30.139489 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ddd3eb570b4a5eb2e92fcdddf21609cdab4e03742dcecfaef66d5702ce2f3db\": container with ID starting with 1ddd3eb570b4a5eb2e92fcdddf21609cdab4e03742dcecfaef66d5702ce2f3db not found: ID does not exist" containerID="1ddd3eb570b4a5eb2e92fcdddf21609cdab4e03742dcecfaef66d5702ce2f3db" Jan 22 09:12:30 crc kubenswrapper[4811]: I0122 09:12:30.139597 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ddd3eb570b4a5eb2e92fcdddf21609cdab4e03742dcecfaef66d5702ce2f3db"} err="failed to get container status \"1ddd3eb570b4a5eb2e92fcdddf21609cdab4e03742dcecfaef66d5702ce2f3db\": rpc error: code = NotFound desc = could not find container \"1ddd3eb570b4a5eb2e92fcdddf21609cdab4e03742dcecfaef66d5702ce2f3db\": container with ID starting with 1ddd3eb570b4a5eb2e92fcdddf21609cdab4e03742dcecfaef66d5702ce2f3db not found: ID does not exist" Jan 22 09:12:30 crc kubenswrapper[4811]: I0122 09:12:30.177976 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73533561-14fb-4481-872e-1b47096f9d30-trusted-ca\") pod \"73533561-14fb-4481-872e-1b47096f9d30\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " Jan 22 09:12:30 crc kubenswrapper[4811]: I0122 09:12:30.178204 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"73533561-14fb-4481-872e-1b47096f9d30\" (UID: 
\"73533561-14fb-4481-872e-1b47096f9d30\") " Jan 22 09:12:30 crc kubenswrapper[4811]: I0122 09:12:30.178761 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlj6w\" (UniqueName: \"kubernetes.io/projected/73533561-14fb-4481-872e-1b47096f9d30-kube-api-access-vlj6w\") pod \"73533561-14fb-4481-872e-1b47096f9d30\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " Jan 22 09:12:30 crc kubenswrapper[4811]: I0122 09:12:30.178859 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73533561-14fb-4481-872e-1b47096f9d30-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "73533561-14fb-4481-872e-1b47096f9d30" (UID: "73533561-14fb-4481-872e-1b47096f9d30"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:12:30 crc kubenswrapper[4811]: I0122 09:12:30.178984 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/73533561-14fb-4481-872e-1b47096f9d30-bound-sa-token\") pod \"73533561-14fb-4481-872e-1b47096f9d30\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " Jan 22 09:12:30 crc kubenswrapper[4811]: I0122 09:12:30.179093 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/73533561-14fb-4481-872e-1b47096f9d30-registry-tls\") pod \"73533561-14fb-4481-872e-1b47096f9d30\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " Jan 22 09:12:30 crc kubenswrapper[4811]: I0122 09:12:30.179649 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/73533561-14fb-4481-872e-1b47096f9d30-ca-trust-extracted\") pod \"73533561-14fb-4481-872e-1b47096f9d30\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " Jan 22 09:12:30 crc kubenswrapper[4811]: I0122 09:12:30.179792 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/73533561-14fb-4481-872e-1b47096f9d30-installation-pull-secrets\") pod \"73533561-14fb-4481-872e-1b47096f9d30\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " Jan 22 09:12:30 crc kubenswrapper[4811]: I0122 09:12:30.179939 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/73533561-14fb-4481-872e-1b47096f9d30-registry-certificates\") pod \"73533561-14fb-4481-872e-1b47096f9d30\" (UID: \"73533561-14fb-4481-872e-1b47096f9d30\") " Jan 22 09:12:30 crc kubenswrapper[4811]: I0122 09:12:30.180354 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73533561-14fb-4481-872e-1b47096f9d30-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "73533561-14fb-4481-872e-1b47096f9d30" (UID: "73533561-14fb-4481-872e-1b47096f9d30"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:12:30 crc kubenswrapper[4811]: I0122 09:12:30.180672 4811 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/73533561-14fb-4481-872e-1b47096f9d30-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 22 09:12:30 crc kubenswrapper[4811]: I0122 09:12:30.180787 4811 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73533561-14fb-4481-872e-1b47096f9d30-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:12:30 crc kubenswrapper[4811]: I0122 09:12:30.186114 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73533561-14fb-4481-872e-1b47096f9d30-kube-api-access-vlj6w" (OuterVolumeSpecName: "kube-api-access-vlj6w") pod "73533561-14fb-4481-872e-1b47096f9d30" (UID: "73533561-14fb-4481-872e-1b47096f9d30"). InnerVolumeSpecName "kube-api-access-vlj6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:12:30 crc kubenswrapper[4811]: I0122 09:12:30.187029 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73533561-14fb-4481-872e-1b47096f9d30-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "73533561-14fb-4481-872e-1b47096f9d30" (UID: "73533561-14fb-4481-872e-1b47096f9d30"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:12:30 crc kubenswrapper[4811]: I0122 09:12:30.187101 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73533561-14fb-4481-872e-1b47096f9d30-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "73533561-14fb-4481-872e-1b47096f9d30" (UID: "73533561-14fb-4481-872e-1b47096f9d30"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:12:30 crc kubenswrapper[4811]: I0122 09:12:30.187180 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "73533561-14fb-4481-872e-1b47096f9d30" (UID: "73533561-14fb-4481-872e-1b47096f9d30"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 22 09:12:30 crc kubenswrapper[4811]: I0122 09:12:30.187440 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73533561-14fb-4481-872e-1b47096f9d30-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "73533561-14fb-4481-872e-1b47096f9d30" (UID: "73533561-14fb-4481-872e-1b47096f9d30"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:12:30 crc kubenswrapper[4811]: I0122 09:12:30.194667 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73533561-14fb-4481-872e-1b47096f9d30-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "73533561-14fb-4481-872e-1b47096f9d30" (UID: "73533561-14fb-4481-872e-1b47096f9d30"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:12:30 crc kubenswrapper[4811]: I0122 09:12:30.281759 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlj6w\" (UniqueName: \"kubernetes.io/projected/73533561-14fb-4481-872e-1b47096f9d30-kube-api-access-vlj6w\") on node \"crc\" DevicePath \"\"" Jan 22 09:12:30 crc kubenswrapper[4811]: I0122 09:12:30.281795 4811 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/73533561-14fb-4481-872e-1b47096f9d30-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 09:12:30 crc kubenswrapper[4811]: I0122 09:12:30.281806 4811 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/73533561-14fb-4481-872e-1b47096f9d30-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 22 09:12:30 crc kubenswrapper[4811]: I0122 09:12:30.281816 4811 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/73533561-14fb-4481-872e-1b47096f9d30-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 22 09:12:30 crc kubenswrapper[4811]: I0122 09:12:30.281828 4811 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/73533561-14fb-4481-872e-1b47096f9d30-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 22 09:12:30 crc kubenswrapper[4811]: I0122 09:12:30.450473 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nczwv"] Jan 22 09:12:30 crc kubenswrapper[4811]: I0122 09:12:30.455863 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nczwv"] Jan 22 09:12:32 crc kubenswrapper[4811]: I0122 09:12:32.005758 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73533561-14fb-4481-872e-1b47096f9d30" path="/var/lib/kubelet/pods/73533561-14fb-4481-872e-1b47096f9d30/volumes" Jan 22 09:12:35 crc kubenswrapper[4811]: I0122 09:12:35.501477 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:12:35 crc kubenswrapper[4811]: I0122 09:12:35.501886 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:12:35 crc kubenswrapper[4811]: I0122 09:12:35.501957 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" Jan 22 09:12:35 crc kubenswrapper[4811]: I0122 09:12:35.502565 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1b2f0e7c21faa08c5ffc1625c27cd1cb01040f89d6aab01c53b541a45ff7e759"} pod="openshift-machine-config-operator/machine-config-daemon-txvcq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 09:12:35 crc kubenswrapper[4811]: I0122 09:12:35.502645 4811 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" containerID="cri-o://1b2f0e7c21faa08c5ffc1625c27cd1cb01040f89d6aab01c53b541a45ff7e759" gracePeriod=600 Jan 22 09:12:36 crc kubenswrapper[4811]: I0122 09:12:36.164144 4811 generic.go:334] "Generic (PLEG): container finished" podID="84068a6b-e189-419b-87f5-f31428f6eafe" containerID="1b2f0e7c21faa08c5ffc1625c27cd1cb01040f89d6aab01c53b541a45ff7e759" exitCode=0 Jan 22 09:12:36 crc kubenswrapper[4811]: I0122 09:12:36.164244 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" event={"ID":"84068a6b-e189-419b-87f5-f31428f6eafe","Type":"ContainerDied","Data":"1b2f0e7c21faa08c5ffc1625c27cd1cb01040f89d6aab01c53b541a45ff7e759"} Jan 22 09:12:36 crc kubenswrapper[4811]: I0122 09:12:36.164597 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" event={"ID":"84068a6b-e189-419b-87f5-f31428f6eafe","Type":"ContainerStarted","Data":"7a6f566969cf05ffa4068902405d136144c4f92f8d1b0d3256e6fd01cf51ac2e"} Jan 22 09:12:36 crc kubenswrapper[4811]: I0122 09:12:36.164652 4811 scope.go:117] "RemoveContainer" containerID="bbb067d04c1683ffd06d01aab41bb5988fdca44b8bd16012d35c42795d0d8011" Jan 22 09:14:04 crc kubenswrapper[4811]: I0122 09:14:04.453247 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-jbj4c"] Jan 22 09:14:04 crc kubenswrapper[4811]: E0122 09:14:04.453998 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73533561-14fb-4481-872e-1b47096f9d30" containerName="registry" Jan 22 09:14:04 crc kubenswrapper[4811]: I0122 09:14:04.454041 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="73533561-14fb-4481-872e-1b47096f9d30" containerName="registry" Jan 22 09:14:04 crc kubenswrapper[4811]: I0122 09:14:04.454140 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="73533561-14fb-4481-872e-1b47096f9d30" containerName="registry" Jan 22 09:14:04 crc kubenswrapper[4811]: I0122 09:14:04.454525 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-jbj4c" Jan 22 09:14:04 crc kubenswrapper[4811]: I0122 09:14:04.457501 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 22 09:14:04 crc kubenswrapper[4811]: I0122 09:14:04.458920 4811 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-kxwd6" Jan 22 09:14:04 crc kubenswrapper[4811]: I0122 09:14:04.461659 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 22 09:14:04 crc kubenswrapper[4811]: I0122 09:14:04.462776 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-xvdbh"] Jan 22 09:14:04 crc kubenswrapper[4811]: I0122 09:14:04.463462 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-xvdbh" Jan 22 09:14:04 crc kubenswrapper[4811]: I0122 09:14:04.465175 4811 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-qwk78" Jan 22 09:14:04 crc kubenswrapper[4811]: I0122 09:14:04.478221 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-sgnhm"] Jan 22 09:14:04 crc kubenswrapper[4811]: I0122 09:14:04.478923 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-sgnhm" Jan 22 09:14:04 crc kubenswrapper[4811]: I0122 09:14:04.484426 4811 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-bn6nz" Jan 22 09:14:04 crc kubenswrapper[4811]: I0122 09:14:04.492949 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-xvdbh"] Jan 22 09:14:04 crc kubenswrapper[4811]: I0122 09:14:04.495635 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m524d\" (UniqueName: \"kubernetes.io/projected/4dbc71dd-a371-4735-bc7e-6c29eb855fbd-kube-api-access-m524d\") pod \"cert-manager-cainjector-cf98fcc89-jbj4c\" (UID: \"4dbc71dd-a371-4735-bc7e-6c29eb855fbd\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-jbj4c" Jan 22 09:14:04 crc kubenswrapper[4811]: I0122 09:14:04.495797 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shk4m\" (UniqueName: \"kubernetes.io/projected/34e7fc62-f1c1-41cb-b44c-2ef705fa2a15-kube-api-access-shk4m\") pod \"cert-manager-webhook-687f57d79b-sgnhm\" (UID: \"34e7fc62-f1c1-41cb-b44c-2ef705fa2a15\") " pod="cert-manager/cert-manager-webhook-687f57d79b-sgnhm" Jan 22 09:14:04 crc kubenswrapper[4811]: I0122 09:14:04.495930 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn9xb\" (UniqueName: \"kubernetes.io/projected/97d60b95-f52c-4946-919a-e8fd73251ed5-kube-api-access-dn9xb\") pod \"cert-manager-858654f9db-xvdbh\" (UID: \"97d60b95-f52c-4946-919a-e8fd73251ed5\") " pod="cert-manager/cert-manager-858654f9db-xvdbh" Jan 22 09:14:04 crc kubenswrapper[4811]: I0122 09:14:04.511376 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-jbj4c"] Jan 22 09:14:04 crc kubenswrapper[4811]: I0122 09:14:04.542756 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-sgnhm"] Jan 22 09:14:04 crc kubenswrapper[4811]: I0122 09:14:04.596830 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn9xb\" (UniqueName: \"kubernetes.io/projected/97d60b95-f52c-4946-919a-e8fd73251ed5-kube-api-access-dn9xb\") pod \"cert-manager-858654f9db-xvdbh\" (UID: \"97d60b95-f52c-4946-919a-e8fd73251ed5\") " pod="cert-manager/cert-manager-858654f9db-xvdbh" Jan 22 09:14:04 crc kubenswrapper[4811]: I0122 09:14:04.596945 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m524d\" (UniqueName: \"kubernetes.io/projected/4dbc71dd-a371-4735-bc7e-6c29eb855fbd-kube-api-access-m524d\") pod \"cert-manager-cainjector-cf98fcc89-jbj4c\" (UID: \"4dbc71dd-a371-4735-bc7e-6c29eb855fbd\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-jbj4c" Jan 22 09:14:04 crc kubenswrapper[4811]: I0122 09:14:04.596996 4811 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shk4m\" (UniqueName: \"kubernetes.io/projected/34e7fc62-f1c1-41cb-b44c-2ef705fa2a15-kube-api-access-shk4m\") pod \"cert-manager-webhook-687f57d79b-sgnhm\" (UID: \"34e7fc62-f1c1-41cb-b44c-2ef705fa2a15\") " pod="cert-manager/cert-manager-webhook-687f57d79b-sgnhm" Jan 22 09:14:04 crc kubenswrapper[4811]: I0122 09:14:04.615241 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn9xb\" (UniqueName: \"kubernetes.io/projected/97d60b95-f52c-4946-919a-e8fd73251ed5-kube-api-access-dn9xb\") pod \"cert-manager-858654f9db-xvdbh\" (UID: \"97d60b95-f52c-4946-919a-e8fd73251ed5\") " pod="cert-manager/cert-manager-858654f9db-xvdbh" Jan 22 09:14:04 crc kubenswrapper[4811]: I0122 09:14:04.615876 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shk4m\" (UniqueName: \"kubernetes.io/projected/34e7fc62-f1c1-41cb-b44c-2ef705fa2a15-kube-api-access-shk4m\") pod \"cert-manager-webhook-687f57d79b-sgnhm\" (UID: \"34e7fc62-f1c1-41cb-b44c-2ef705fa2a15\") " pod="cert-manager/cert-manager-webhook-687f57d79b-sgnhm" Jan 22 09:14:04 crc kubenswrapper[4811]: I0122 09:14:04.616163 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m524d\" (UniqueName: \"kubernetes.io/projected/4dbc71dd-a371-4735-bc7e-6c29eb855fbd-kube-api-access-m524d\") pod \"cert-manager-cainjector-cf98fcc89-jbj4c\" (UID: \"4dbc71dd-a371-4735-bc7e-6c29eb855fbd\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-jbj4c" Jan 22 09:14:04 crc kubenswrapper[4811]: I0122 09:14:04.768952 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-jbj4c" Jan 22 09:14:04 crc kubenswrapper[4811]: I0122 09:14:04.777503 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-xvdbh" Jan 22 09:14:04 crc kubenswrapper[4811]: I0122 09:14:04.788819 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-sgnhm" Jan 22 09:14:05 crc kubenswrapper[4811]: I0122 09:14:05.167834 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-sgnhm"] Jan 22 09:14:05 crc kubenswrapper[4811]: I0122 09:14:05.174932 4811 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 09:14:05 crc kubenswrapper[4811]: I0122 09:14:05.200879 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-xvdbh"] Jan 22 09:14:05 crc kubenswrapper[4811]: I0122 09:14:05.205806 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-jbj4c"] Jan 22 09:14:05 crc kubenswrapper[4811]: W0122 09:14:05.207948 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97d60b95_f52c_4946_919a_e8fd73251ed5.slice/crio-ad0cc7f4c29483fe8f87a5c8adf838ae53c7d40573647a5945a56dbaeff0f232 WatchSource:0}: Error finding container ad0cc7f4c29483fe8f87a5c8adf838ae53c7d40573647a5945a56dbaeff0f232: Status 404 returned error can't find the container with id ad0cc7f4c29483fe8f87a5c8adf838ae53c7d40573647a5945a56dbaeff0f232 Jan 22 09:14:05 crc kubenswrapper[4811]: I0122 09:14:05.598824 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-jbj4c" event={"ID":"4dbc71dd-a371-4735-bc7e-6c29eb855fbd","Type":"ContainerStarted","Data":"cb9debe017f0055947c3623c61716a38f4741e5c26f02b8e0d1a1b65eb1e8f2f"} Jan 22 09:14:05 crc kubenswrapper[4811]: I0122 09:14:05.599877 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-sgnhm" event={"ID":"34e7fc62-f1c1-41cb-b44c-2ef705fa2a15","Type":"ContainerStarted","Data":"77640446ad9c4468ce487ebaf3ec9e67817087e210de967b157c7a1ece31a4d5"} Jan 22 09:14:05 crc kubenswrapper[4811]: I0122 09:14:05.600712 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-xvdbh" event={"ID":"97d60b95-f52c-4946-919a-e8fd73251ed5","Type":"ContainerStarted","Data":"ad0cc7f4c29483fe8f87a5c8adf838ae53c7d40573647a5945a56dbaeff0f232"} Jan 22 09:14:08 crc kubenswrapper[4811]: I0122 09:14:08.618403 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-sgnhm" event={"ID":"34e7fc62-f1c1-41cb-b44c-2ef705fa2a15","Type":"ContainerStarted","Data":"484fd532c15e49d3fa8fcb29f78509c22a9031c976c8b51441cc08e9777e40d7"} Jan 22 09:14:08 crc kubenswrapper[4811]: I0122 09:14:08.618586 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-sgnhm" Jan 22 09:14:08 crc kubenswrapper[4811]: I0122 09:14:08.619905 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-xvdbh" event={"ID":"97d60b95-f52c-4946-919a-e8fd73251ed5","Type":"ContainerStarted","Data":"d02766197ad2361c5353194d4a4df310682dca9424dc4cb946eabe45b4c446eb"} Jan 22 09:14:08 crc kubenswrapper[4811]: I0122 09:14:08.620893 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-jbj4c" event={"ID":"4dbc71dd-a371-4735-bc7e-6c29eb855fbd","Type":"ContainerStarted","Data":"a28c12f7e69a7b169b885878f13fffd0734feec22d6fa4ae6fd66719e8a56417"} Jan 22 09:14:08 crc kubenswrapper[4811]: I0122 09:14:08.634970 4811 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-sgnhm" podStartSLOduration=1.5233060090000001 podStartE2EDuration="4.634948035s" podCreationTimestamp="2026-01-22 09:14:04 +0000 UTC" firstStartedPulling="2026-01-22 09:14:05.174681097 +0000 UTC m=+489.496868220" lastFinishedPulling="2026-01-22 09:14:08.286323123 +0000 UTC m=+492.608510246" observedRunningTime="2026-01-22 09:14:08.634447011 +0000 UTC m=+492.956634134" watchObservedRunningTime="2026-01-22 09:14:08.634948035 +0000 UTC m=+492.957135159" Jan 22 09:14:08 crc kubenswrapper[4811]: I0122 09:14:08.654379 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-jbj4c" podStartSLOduration=1.542923056 podStartE2EDuration="4.65435708s" podCreationTimestamp="2026-01-22 09:14:04 +0000 UTC" firstStartedPulling="2026-01-22 09:14:05.215596042 +0000 UTC m=+489.537783164" lastFinishedPulling="2026-01-22 09:14:08.327030074 +0000 UTC m=+492.649217188" observedRunningTime="2026-01-22 09:14:08.648859071 +0000 UTC m=+492.971046195" watchObservedRunningTime="2026-01-22 09:14:08.65435708 +0000 UTC m=+492.976544203" Jan 22 09:14:08 crc kubenswrapper[4811]: I0122 09:14:08.663396 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-xvdbh" podStartSLOduration=1.588711092 podStartE2EDuration="4.663365971s" podCreationTimestamp="2026-01-22 09:14:04 +0000 UTC" firstStartedPulling="2026-01-22 09:14:05.211700504 +0000 UTC m=+489.533887628" lastFinishedPulling="2026-01-22 09:14:08.286355384 +0000 UTC m=+492.608542507" observedRunningTime="2026-01-22 09:14:08.662218728 +0000 UTC m=+492.984405851" watchObservedRunningTime="2026-01-22 09:14:08.663365971 +0000 UTC m=+492.985553095" Jan 22 09:14:14 crc kubenswrapper[4811]: I0122 09:14:14.793078 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-sgnhm" Jan 22 09:14:28 crc kubenswrapper[4811]: I0122 09:14:28.653857 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-274vf"] Jan 22 09:14:28 crc kubenswrapper[4811]: I0122 09:14:28.654578 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerName="ovn-controller" containerID="cri-o://55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2" gracePeriod=30 Jan 22 09:14:28 crc kubenswrapper[4811]: I0122 09:14:28.654714 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerName="kube-rbac-proxy-node" containerID="cri-o://b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad" gracePeriod=30 Jan 22 09:14:28 crc kubenswrapper[4811]: I0122 09:14:28.654755 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerName="ovn-acl-logging" containerID="cri-o://9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597" gracePeriod=30 Jan 22 09:14:28 crc kubenswrapper[4811]: I0122 09:14:28.654752 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerName="northd" 
containerID="cri-o://8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799" gracePeriod=30 Jan 22 09:14:28 crc kubenswrapper[4811]: I0122 09:14:28.654681 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84" gracePeriod=30 Jan 22 09:14:28 crc kubenswrapper[4811]: I0122 09:14:28.654932 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerName="sbdb" containerID="cri-o://4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5" gracePeriod=30 Jan 22 09:14:28 crc kubenswrapper[4811]: I0122 09:14:28.654680 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerName="nbdb" containerID="cri-o://c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451" gracePeriod=30 Jan 22 09:14:28 crc kubenswrapper[4811]: I0122 09:14:28.681022 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerName="ovnkube-controller" containerID="cri-o://5a45c9cc109c4284fd86db1bd72102f9fc2778b85d49eb651d9debff48557f02" gracePeriod=30 Jan 22 09:14:28 crc kubenswrapper[4811]: I0122 09:14:28.716195 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kfqgt_f2555861-d1bb-4f21-be4a-165ed9212932/kube-multus/2.log" Jan 22 09:14:28 crc kubenswrapper[4811]: I0122 09:14:28.716619 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kfqgt_f2555861-d1bb-4f21-be4a-165ed9212932/kube-multus/1.log" Jan 22 09:14:28 crc kubenswrapper[4811]: I0122 09:14:28.716754 4811 generic.go:334] "Generic (PLEG): container finished" podID="f2555861-d1bb-4f21-be4a-165ed9212932" containerID="387bd2033a6e44d48e86120e4dee106a7b97b0d54769314cb0c0424e36c0d88e" exitCode=2 Jan 22 09:14:28 crc kubenswrapper[4811]: I0122 09:14:28.716787 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kfqgt" event={"ID":"f2555861-d1bb-4f21-be4a-165ed9212932","Type":"ContainerDied","Data":"387bd2033a6e44d48e86120e4dee106a7b97b0d54769314cb0c0424e36c0d88e"} Jan 22 09:14:28 crc kubenswrapper[4811]: I0122 09:14:28.716819 4811 scope.go:117] "RemoveContainer" containerID="c2e4b2026355f189cbe6a2d70999613aa1b5868f0b38c25e834e42eda1b41088" Jan 22 09:14:28 crc kubenswrapper[4811]: I0122 09:14:28.717231 4811 scope.go:117] "RemoveContainer" containerID="387bd2033a6e44d48e86120e4dee106a7b97b0d54769314cb0c0424e36c0d88e" Jan 22 09:14:28 crc kubenswrapper[4811]: E0122 09:14:28.717476 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-kfqgt_openshift-multus(f2555861-d1bb-4f21-be4a-165ed9212932)\"" pod="openshift-multus/multus-kfqgt" podUID="f2555861-d1bb-4f21-be4a-165ed9212932" Jan 22 09:14:28 crc kubenswrapper[4811]: I0122 09:14:28.942109 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-274vf_1cd0f0db-de53-47c0-9b45-2ce8b37392a3/ovnkube-controller/3.log" 
Jan 22 09:14:28 crc kubenswrapper[4811]: I0122 09:14:28.944310 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-274vf_1cd0f0db-de53-47c0-9b45-2ce8b37392a3/ovn-acl-logging/0.log" Jan 22 09:14:28 crc kubenswrapper[4811]: I0122 09:14:28.944737 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-274vf_1cd0f0db-de53-47c0-9b45-2ce8b37392a3/ovn-controller/0.log" Jan 22 09:14:28 crc kubenswrapper[4811]: I0122 09:14:28.945088 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:14:28 crc kubenswrapper[4811]: I0122 09:14:28.986191 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gc7b5"] Jan 22 09:14:28 crc kubenswrapper[4811]: E0122 09:14:28.986416 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerName="nbdb" Jan 22 09:14:28 crc kubenswrapper[4811]: I0122 09:14:28.986436 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerName="nbdb" Jan 22 09:14:28 crc kubenswrapper[4811]: E0122 09:14:28.986443 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerName="ovnkube-controller" Jan 22 09:14:28 crc kubenswrapper[4811]: I0122 09:14:28.986450 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerName="ovnkube-controller" Jan 22 09:14:28 crc kubenswrapper[4811]: E0122 09:14:28.986457 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerName="ovn-acl-logging" Jan 22 09:14:28 crc kubenswrapper[4811]: I0122 09:14:28.986463 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerName="ovn-acl-logging" Jan 22 09:14:28 crc kubenswrapper[4811]: E0122 09:14:28.986473 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerName="kube-rbac-proxy-node" Jan 22 09:14:28 crc kubenswrapper[4811]: I0122 09:14:28.986479 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerName="kube-rbac-proxy-node" Jan 22 09:14:28 crc kubenswrapper[4811]: E0122 09:14:28.986485 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerName="kube-rbac-proxy-ovn-metrics" Jan 22 09:14:28 crc kubenswrapper[4811]: I0122 09:14:28.986490 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerName="kube-rbac-proxy-ovn-metrics" Jan 22 09:14:28 crc kubenswrapper[4811]: E0122 09:14:28.986499 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerName="kubecfg-setup" Jan 22 09:14:28 crc kubenswrapper[4811]: I0122 09:14:28.986505 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerName="kubecfg-setup" Jan 22 09:14:28 crc kubenswrapper[4811]: E0122 09:14:28.986511 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerName="northd" Jan 22 09:14:28 crc kubenswrapper[4811]: I0122 09:14:28.986516 4811 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerName="northd" Jan 22 09:14:28 crc kubenswrapper[4811]: E0122 09:14:28.986523 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerName="sbdb" Jan 22 09:14:28 crc kubenswrapper[4811]: I0122 09:14:28.986528 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerName="sbdb" Jan 22 09:14:28 crc kubenswrapper[4811]: E0122 09:14:28.986538 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerName="ovnkube-controller" Jan 22 09:14:28 crc kubenswrapper[4811]: I0122 09:14:28.986544 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerName="ovnkube-controller" Jan 22 09:14:28 crc kubenswrapper[4811]: E0122 09:14:28.986553 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerName="ovn-controller" Jan 22 09:14:28 crc kubenswrapper[4811]: I0122 09:14:28.986559 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerName="ovn-controller" Jan 22 09:14:28 crc kubenswrapper[4811]: E0122 09:14:28.986565 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerName="ovnkube-controller" Jan 22 09:14:28 crc kubenswrapper[4811]: I0122 09:14:28.986571 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerName="ovnkube-controller" Jan 22 09:14:28 crc kubenswrapper[4811]: E0122 09:14:28.986577 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerName="ovnkube-controller" Jan 22 09:14:28 crc kubenswrapper[4811]: I0122 09:14:28.986581 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerName="ovnkube-controller" Jan 22 09:14:28 crc kubenswrapper[4811]: I0122 09:14:28.986711 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerName="ovnkube-controller" Jan 22 09:14:28 crc kubenswrapper[4811]: I0122 09:14:28.986720 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerName="ovn-controller" Jan 22 09:14:28 crc kubenswrapper[4811]: I0122 09:14:28.986726 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerName="ovn-acl-logging" Jan 22 09:14:28 crc kubenswrapper[4811]: I0122 09:14:28.986733 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerName="kube-rbac-proxy-ovn-metrics" Jan 22 09:14:28 crc kubenswrapper[4811]: I0122 09:14:28.986741 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerName="ovnkube-controller" Jan 22 09:14:28 crc kubenswrapper[4811]: I0122 09:14:28.986752 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerName="nbdb" Jan 22 09:14:28 crc kubenswrapper[4811]: I0122 09:14:28.986760 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerName="ovnkube-controller" Jan 22 09:14:28 crc kubenswrapper[4811]: I0122 09:14:28.986768 4811 
memory_manager.go:354] "RemoveStaleState removing state" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerName="kube-rbac-proxy-node" Jan 22 09:14:28 crc kubenswrapper[4811]: I0122 09:14:28.986776 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerName="northd" Jan 22 09:14:28 crc kubenswrapper[4811]: I0122 09:14:28.986783 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerName="sbdb" Jan 22 09:14:28 crc kubenswrapper[4811]: E0122 09:14:28.986878 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerName="ovnkube-controller" Jan 22 09:14:28 crc kubenswrapper[4811]: I0122 09:14:28.986886 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerName="ovnkube-controller" Jan 22 09:14:28 crc kubenswrapper[4811]: I0122 09:14:28.986963 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerName="ovnkube-controller" Jan 22 09:14:28 crc kubenswrapper[4811]: I0122 09:14:28.986972 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerName="ovnkube-controller" Jan 22 09:14:28 crc kubenswrapper[4811]: I0122 09:14:28.988354 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.071271 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-var-lib-openvswitch\") pod \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.071330 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-log-socket\") pod \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.071363 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-ovnkube-config\") pod \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.071384 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "1cd0f0db-de53-47c0-9b45-2ce8b37392a3" (UID: "1cd0f0db-de53-47c0-9b45-2ce8b37392a3"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.071470 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-log-socket" (OuterVolumeSpecName: "log-socket") pod "1cd0f0db-de53-47c0-9b45-2ce8b37392a3" (UID: "1cd0f0db-de53-47c0-9b45-2ce8b37392a3"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.071541 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-host-kubelet\") pod \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.071570 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "1cd0f0db-de53-47c0-9b45-2ce8b37392a3" (UID: "1cd0f0db-de53-47c0-9b45-2ce8b37392a3"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.071748 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-host-cni-netd\") pod \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.071769 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "1cd0f0db-de53-47c0-9b45-2ce8b37392a3" (UID: "1cd0f0db-de53-47c0-9b45-2ce8b37392a3"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.071777 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "1cd0f0db-de53-47c0-9b45-2ce8b37392a3" (UID: "1cd0f0db-de53-47c0-9b45-2ce8b37392a3"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.071817 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-ovnkube-script-lib\") pod \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.071840 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-host-run-netns\") pod \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.072067 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-etc-openvswitch\") pod \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.072087 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-host-cni-bin\") pod \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.072087 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "1cd0f0db-de53-47c0-9b45-2ce8b37392a3" (UID: "1cd0f0db-de53-47c0-9b45-2ce8b37392a3"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.072105 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-node-log\") pod \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.072120 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "1cd0f0db-de53-47c0-9b45-2ce8b37392a3" (UID: "1cd0f0db-de53-47c0-9b45-2ce8b37392a3"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.072121 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-host-run-ovn-kubernetes\") pod \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.072138 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "1cd0f0db-de53-47c0-9b45-2ce8b37392a3" (UID: "1cd0f0db-de53-47c0-9b45-2ce8b37392a3"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.072153 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-env-overrides\") pod \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.072177 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "1cd0f0db-de53-47c0-9b45-2ce8b37392a3" (UID: "1cd0f0db-de53-47c0-9b45-2ce8b37392a3"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.072183 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-systemd-units\") pod \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.072200 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-node-log" (OuterVolumeSpecName: "node-log") pod "1cd0f0db-de53-47c0-9b45-2ce8b37392a3" (UID: "1cd0f0db-de53-47c0-9b45-2ce8b37392a3"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.072204 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.072240 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwxdh\" (UniqueName: \"kubernetes.io/projected/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-kube-api-access-vwxdh\") pod \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.072291 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-run-ovn\") pod \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.072311 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-host-slash\") pod \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.072326 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-run-openvswitch\") pod \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.072351 4811 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-run-systemd\") pod \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.072369 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-ovn-node-metrics-cert\") pod \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\" (UID: \"1cd0f0db-de53-47c0-9b45-2ce8b37392a3\") " Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.072447 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "1cd0f0db-de53-47c0-9b45-2ce8b37392a3" (UID: "1cd0f0db-de53-47c0-9b45-2ce8b37392a3"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.072474 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "1cd0f0db-de53-47c0-9b45-2ce8b37392a3" (UID: "1cd0f0db-de53-47c0-9b45-2ce8b37392a3"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.072493 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "1cd0f0db-de53-47c0-9b45-2ce8b37392a3" (UID: "1cd0f0db-de53-47c0-9b45-2ce8b37392a3"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.072512 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "1cd0f0db-de53-47c0-9b45-2ce8b37392a3" (UID: "1cd0f0db-de53-47c0-9b45-2ce8b37392a3"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.072531 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "1cd0f0db-de53-47c0-9b45-2ce8b37392a3" (UID: "1cd0f0db-de53-47c0-9b45-2ce8b37392a3"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.072530 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0188bfdb-1df0-4a96-a2bc-152f80c7f326-run-openvswitch\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.072562 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0188bfdb-1df0-4a96-a2bc-152f80c7f326-host-run-ovn-kubernetes\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.072581 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0188bfdb-1df0-4a96-a2bc-152f80c7f326-ovnkube-config\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.072612 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0188bfdb-1df0-4a96-a2bc-152f80c7f326-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.072655 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0188bfdb-1df0-4a96-a2bc-152f80c7f326-log-socket\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.072709 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "1cd0f0db-de53-47c0-9b45-2ce8b37392a3" (UID: "1cd0f0db-de53-47c0-9b45-2ce8b37392a3"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.072736 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-host-slash" (OuterVolumeSpecName: "host-slash") pod "1cd0f0db-de53-47c0-9b45-2ce8b37392a3" (UID: "1cd0f0db-de53-47c0-9b45-2ce8b37392a3"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.072781 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0188bfdb-1df0-4a96-a2bc-152f80c7f326-host-cni-bin\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.072804 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0188bfdb-1df0-4a96-a2bc-152f80c7f326-host-slash\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.072864 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0188bfdb-1df0-4a96-a2bc-152f80c7f326-host-kubelet\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.072883 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0188bfdb-1df0-4a96-a2bc-152f80c7f326-systemd-units\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.072931 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0188bfdb-1df0-4a96-a2bc-152f80c7f326-run-ovn\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.072950 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0188bfdb-1df0-4a96-a2bc-152f80c7f326-host-cni-netd\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.073012 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0188bfdb-1df0-4a96-a2bc-152f80c7f326-ovn-node-metrics-cert\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.073029 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0188bfdb-1df0-4a96-a2bc-152f80c7f326-ovnkube-script-lib\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.073045 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0188bfdb-1df0-4a96-a2bc-152f80c7f326-env-overrides\") pod 
\"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.073070 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0188bfdb-1df0-4a96-a2bc-152f80c7f326-node-log\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.073102 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0188bfdb-1df0-4a96-a2bc-152f80c7f326-var-lib-openvswitch\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.073125 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9v9f\" (UniqueName: \"kubernetes.io/projected/0188bfdb-1df0-4a96-a2bc-152f80c7f326-kube-api-access-p9v9f\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.073145 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0188bfdb-1df0-4a96-a2bc-152f80c7f326-run-systemd\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.073163 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0188bfdb-1df0-4a96-a2bc-152f80c7f326-host-run-netns\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.073179 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0188bfdb-1df0-4a96-a2bc-152f80c7f326-etc-openvswitch\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.073219 4811 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.073229 4811 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-host-slash\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.073238 4811 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.073246 4811 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.073256 4811 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-log-socket\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.073262 4811 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.073270 4811 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.073278 4811 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.073285 4811 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.073295 4811 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.073303 4811 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.073310 4811 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.073318 4811 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-node-log\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.073328 4811 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.073336 4811 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.073342 4811 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.073350 4811 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.077856 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "1cd0f0db-de53-47c0-9b45-2ce8b37392a3" (UID: "1cd0f0db-de53-47c0-9b45-2ce8b37392a3"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.078139 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-kube-api-access-vwxdh" (OuterVolumeSpecName: "kube-api-access-vwxdh") pod "1cd0f0db-de53-47c0-9b45-2ce8b37392a3" (UID: "1cd0f0db-de53-47c0-9b45-2ce8b37392a3"). InnerVolumeSpecName "kube-api-access-vwxdh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.084718 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "1cd0f0db-de53-47c0-9b45-2ce8b37392a3" (UID: "1cd0f0db-de53-47c0-9b45-2ce8b37392a3"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.174416 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0188bfdb-1df0-4a96-a2bc-152f80c7f326-node-log\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.174475 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0188bfdb-1df0-4a96-a2bc-152f80c7f326-var-lib-openvswitch\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.174502 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9v9f\" (UniqueName: \"kubernetes.io/projected/0188bfdb-1df0-4a96-a2bc-152f80c7f326-kube-api-access-p9v9f\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.174528 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0188bfdb-1df0-4a96-a2bc-152f80c7f326-run-systemd\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.174549 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0188bfdb-1df0-4a96-a2bc-152f80c7f326-host-run-netns\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.174568 4811 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0188bfdb-1df0-4a96-a2bc-152f80c7f326-etc-openvswitch\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.174606 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0188bfdb-1df0-4a96-a2bc-152f80c7f326-run-openvswitch\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.174645 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0188bfdb-1df0-4a96-a2bc-152f80c7f326-host-run-ovn-kubernetes\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.174640 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0188bfdb-1df0-4a96-a2bc-152f80c7f326-run-systemd\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.174660 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0188bfdb-1df0-4a96-a2bc-152f80c7f326-host-run-netns\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.174666 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0188bfdb-1df0-4a96-a2bc-152f80c7f326-ovnkube-config\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.174585 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0188bfdb-1df0-4a96-a2bc-152f80c7f326-node-log\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.174704 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0188bfdb-1df0-4a96-a2bc-152f80c7f326-var-lib-openvswitch\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.174767 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0188bfdb-1df0-4a96-a2bc-152f80c7f326-host-run-ovn-kubernetes\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.174798 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/0188bfdb-1df0-4a96-a2bc-152f80c7f326-etc-openvswitch\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.174836 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0188bfdb-1df0-4a96-a2bc-152f80c7f326-run-openvswitch\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.174862 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0188bfdb-1df0-4a96-a2bc-152f80c7f326-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.174892 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0188bfdb-1df0-4a96-a2bc-152f80c7f326-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.174938 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0188bfdb-1df0-4a96-a2bc-152f80c7f326-log-socket\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.174956 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0188bfdb-1df0-4a96-a2bc-152f80c7f326-host-cni-bin\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.174976 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0188bfdb-1df0-4a96-a2bc-152f80c7f326-host-slash\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.175023 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0188bfdb-1df0-4a96-a2bc-152f80c7f326-host-kubelet\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.175043 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0188bfdb-1df0-4a96-a2bc-152f80c7f326-systemd-units\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.175083 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/0188bfdb-1df0-4a96-a2bc-152f80c7f326-run-ovn\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.175107 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0188bfdb-1df0-4a96-a2bc-152f80c7f326-host-cni-netd\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.175150 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0188bfdb-1df0-4a96-a2bc-152f80c7f326-ovn-node-metrics-cert\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.175172 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0188bfdb-1df0-4a96-a2bc-152f80c7f326-ovnkube-script-lib\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.175187 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0188bfdb-1df0-4a96-a2bc-152f80c7f326-env-overrides\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.175245 4811 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.175255 4811 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.175267 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwxdh\" (UniqueName: \"kubernetes.io/projected/1cd0f0db-de53-47c0-9b45-2ce8b37392a3-kube-api-access-vwxdh\") on node \"crc\" DevicePath \"\"" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.175398 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0188bfdb-1df0-4a96-a2bc-152f80c7f326-ovnkube-config\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.175459 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0188bfdb-1df0-4a96-a2bc-152f80c7f326-host-cni-netd\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.175498 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/0188bfdb-1df0-4a96-a2bc-152f80c7f326-run-ovn\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.175501 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0188bfdb-1df0-4a96-a2bc-152f80c7f326-host-cni-bin\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.175499 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0188bfdb-1df0-4a96-a2bc-152f80c7f326-systemd-units\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.175543 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0188bfdb-1df0-4a96-a2bc-152f80c7f326-log-socket\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.175523 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0188bfdb-1df0-4a96-a2bc-152f80c7f326-host-slash\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.175563 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0188bfdb-1df0-4a96-a2bc-152f80c7f326-host-kubelet\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.175929 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0188bfdb-1df0-4a96-a2bc-152f80c7f326-env-overrides\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.176015 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0188bfdb-1df0-4a96-a2bc-152f80c7f326-ovnkube-script-lib\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.178410 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0188bfdb-1df0-4a96-a2bc-152f80c7f326-ovn-node-metrics-cert\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.190561 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9v9f\" (UniqueName: \"kubernetes.io/projected/0188bfdb-1df0-4a96-a2bc-152f80c7f326-kube-api-access-p9v9f\") pod \"ovnkube-node-gc7b5\" (UID: \"0188bfdb-1df0-4a96-a2bc-152f80c7f326\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.298905 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:29 crc kubenswrapper[4811]: W0122 09:14:29.313698 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0188bfdb_1df0_4a96_a2bc_152f80c7f326.slice/crio-5d4b343f2f766add9b126994b7c09765727a0e4d58437d67b8b5d45c783fa67e WatchSource:0}: Error finding container 5d4b343f2f766add9b126994b7c09765727a0e4d58437d67b8b5d45c783fa67e: Status 404 returned error can't find the container with id 5d4b343f2f766add9b126994b7c09765727a0e4d58437d67b8b5d45c783fa67e Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.723126 4811 generic.go:334] "Generic (PLEG): container finished" podID="0188bfdb-1df0-4a96-a2bc-152f80c7f326" containerID="530bbde359bf58b6bf6121630725c64cca0a6adc3c52f6f9cd99f9f9934562b2" exitCode=0 Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.723206 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" event={"ID":"0188bfdb-1df0-4a96-a2bc-152f80c7f326","Type":"ContainerDied","Data":"530bbde359bf58b6bf6121630725c64cca0a6adc3c52f6f9cd99f9f9934562b2"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.723453 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" event={"ID":"0188bfdb-1df0-4a96-a2bc-152f80c7f326","Type":"ContainerStarted","Data":"5d4b343f2f766add9b126994b7c09765727a0e4d58437d67b8b5d45c783fa67e"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.726255 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-274vf_1cd0f0db-de53-47c0-9b45-2ce8b37392a3/ovnkube-controller/3.log" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.730466 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-274vf_1cd0f0db-de53-47c0-9b45-2ce8b37392a3/ovn-acl-logging/0.log" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.730957 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-274vf_1cd0f0db-de53-47c0-9b45-2ce8b37392a3/ovn-controller/0.log" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731269 4811 generic.go:334] "Generic (PLEG): container finished" podID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerID="5a45c9cc109c4284fd86db1bd72102f9fc2778b85d49eb651d9debff48557f02" exitCode=0 Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731297 4811 generic.go:334] "Generic (PLEG): container finished" podID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerID="4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5" exitCode=0 Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731305 4811 generic.go:334] "Generic (PLEG): container finished" podID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerID="c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451" exitCode=0 Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731305 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" event={"ID":"1cd0f0db-de53-47c0-9b45-2ce8b37392a3","Type":"ContainerDied","Data":"5a45c9cc109c4284fd86db1bd72102f9fc2778b85d49eb651d9debff48557f02"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731336 4811 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731350 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" event={"ID":"1cd0f0db-de53-47c0-9b45-2ce8b37392a3","Type":"ContainerDied","Data":"4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731367 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" event={"ID":"1cd0f0db-de53-47c0-9b45-2ce8b37392a3","Type":"ContainerDied","Data":"c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731378 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" event={"ID":"1cd0f0db-de53-47c0-9b45-2ce8b37392a3","Type":"ContainerDied","Data":"8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731397 4811 scope.go:117] "RemoveContainer" containerID="5a45c9cc109c4284fd86db1bd72102f9fc2778b85d49eb651d9debff48557f02" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731312 4811 generic.go:334] "Generic (PLEG): container finished" podID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerID="8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799" exitCode=0 Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731460 4811 generic.go:334] "Generic (PLEG): container finished" podID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerID="e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84" exitCode=0 Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731475 4811 generic.go:334] "Generic (PLEG): container finished" podID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerID="b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad" exitCode=0 Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731482 4811 generic.go:334] "Generic (PLEG): container finished" podID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerID="9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597" exitCode=143 Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731491 4811 generic.go:334] "Generic (PLEG): container finished" podID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" containerID="55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2" exitCode=143 Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731500 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" event={"ID":"1cd0f0db-de53-47c0-9b45-2ce8b37392a3","Type":"ContainerDied","Data":"e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731542 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" event={"ID":"1cd0f0db-de53-47c0-9b45-2ce8b37392a3","Type":"ContainerDied","Data":"b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731556 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c4ae4bac915c177448079f710c8f3f5cb8ee9368318699d1500a8e2ec2075fb5"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731570 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731576 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731581 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731589 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731605 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731611 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731616 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731635 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731643 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" event={"ID":"1cd0f0db-de53-47c0-9b45-2ce8b37392a3","Type":"ContainerDied","Data":"9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731654 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5a45c9cc109c4284fd86db1bd72102f9fc2778b85d49eb651d9debff48557f02"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731661 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c4ae4bac915c177448079f710c8f3f5cb8ee9368318699d1500a8e2ec2075fb5"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731665 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731671 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731677 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731682 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731686 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731691 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731695 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731700 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731706 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" event={"ID":"1cd0f0db-de53-47c0-9b45-2ce8b37392a3","Type":"ContainerDied","Data":"55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731713 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5a45c9cc109c4284fd86db1bd72102f9fc2778b85d49eb651d9debff48557f02"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731720 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c4ae4bac915c177448079f710c8f3f5cb8ee9368318699d1500a8e2ec2075fb5"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731724 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731730 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731735 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731739 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731744 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731749 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731753 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731757 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731764 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-274vf" event={"ID":"1cd0f0db-de53-47c0-9b45-2ce8b37392a3","Type":"ContainerDied","Data":"9a8029475e24f2282a3c68e7ce56df83c50df1ab181a0cd417f5c4db3884084f"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731771 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5a45c9cc109c4284fd86db1bd72102f9fc2778b85d49eb651d9debff48557f02"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731777 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c4ae4bac915c177448079f710c8f3f5cb8ee9368318699d1500a8e2ec2075fb5"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731782 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731786 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731790 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731795 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731799 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731803 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731809 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.731813 4811 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a"} Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.733836 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kfqgt_f2555861-d1bb-4f21-be4a-165ed9212932/kube-multus/2.log" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.769667 4811 scope.go:117] "RemoveContainer" containerID="c4ae4bac915c177448079f710c8f3f5cb8ee9368318699d1500a8e2ec2075fb5" Jan 
22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.785749 4811 scope.go:117] "RemoveContainer" containerID="4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5"
Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.799749 4811 scope.go:117] "RemoveContainer" containerID="c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451"
Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.812967 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-274vf"]
Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.816100 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-274vf"]
Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.821909 4811 scope.go:117] "RemoveContainer" containerID="8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799"
Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.831840 4811 scope.go:117] "RemoveContainer" containerID="e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84"
Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.841526 4811 scope.go:117] "RemoveContainer" containerID="b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad"
Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.858746 4811 scope.go:117] "RemoveContainer" containerID="9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597"
Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.869403 4811 scope.go:117] "RemoveContainer" containerID="55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2"
Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.891220 4811 scope.go:117] "RemoveContainer" containerID="523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a"
Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.909814 4811 scope.go:117] "RemoveContainer" containerID="5a45c9cc109c4284fd86db1bd72102f9fc2778b85d49eb651d9debff48557f02"
Jan 22 09:14:29 crc kubenswrapper[4811]: E0122 09:14:29.910125 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a45c9cc109c4284fd86db1bd72102f9fc2778b85d49eb651d9debff48557f02\": container with ID starting with 5a45c9cc109c4284fd86db1bd72102f9fc2778b85d49eb651d9debff48557f02 not found: ID does not exist" containerID="5a45c9cc109c4284fd86db1bd72102f9fc2778b85d49eb651d9debff48557f02"
Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.910166 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a45c9cc109c4284fd86db1bd72102f9fc2778b85d49eb651d9debff48557f02"} err="failed to get container status \"5a45c9cc109c4284fd86db1bd72102f9fc2778b85d49eb651d9debff48557f02\": rpc error: code = NotFound desc = could not find container \"5a45c9cc109c4284fd86db1bd72102f9fc2778b85d49eb651d9debff48557f02\": container with ID starting with 5a45c9cc109c4284fd86db1bd72102f9fc2778b85d49eb651d9debff48557f02 not found: ID does not exist"
Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.910193 4811 scope.go:117] "RemoveContainer" containerID="c4ae4bac915c177448079f710c8f3f5cb8ee9368318699d1500a8e2ec2075fb5"
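Every removal in this burst fails the same way: by the time the kubelet asks CRI-O for the container's status, the container is already gone on the runtime side, so the runtime answers NotFound and the kubelet logs the error and moves on. The same ten container IDs from the torn-down ovnkube-node-274vf pod then cycle through several more identical rounds below. A quick way to confirm this is one set of IDs being retried rather than many distinct failures is to tally the entries per ID; the sketch below does that with the Python standard library, assuming the journal excerpt has been saved locally as kubelet.log (an assumed name, not one from the log).

    # count_removals.py - a minimal sketch (stdlib only); "kubelet.log" is an
    # assumed local copy of this journal excerpt, not a file named in the log.
    import re
    from collections import Counter

    PATTERN = re.compile(
        r'"(?:Failed to issue the request to remove container'
        r'|DeleteContainer returned error)" '
        r'containerID=\{"Type":"cri-o","ID":"([0-9a-f]{64})"\}'
    )

    counts = Counter()
    with open("kubelet.log", encoding="utf-8") as fh:
        for line in fh:
            for cid in PATTERN.findall(line):
                counts[cid[:12]] += 1  # a short ID prefix is enough to group retries

    for cid, n in counts.most_common():
        print(f"{cid}  removal retried {n} times")

Run against this excerpt, it should print the same ten short IDs, each repeated several times, matching what reading the rounds by eye suggests.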
does not exist" containerID="c4ae4bac915c177448079f710c8f3f5cb8ee9368318699d1500a8e2ec2075fb5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.910453 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4ae4bac915c177448079f710c8f3f5cb8ee9368318699d1500a8e2ec2075fb5"} err="failed to get container status \"c4ae4bac915c177448079f710c8f3f5cb8ee9368318699d1500a8e2ec2075fb5\": rpc error: code = NotFound desc = could not find container \"c4ae4bac915c177448079f710c8f3f5cb8ee9368318699d1500a8e2ec2075fb5\": container with ID starting with c4ae4bac915c177448079f710c8f3f5cb8ee9368318699d1500a8e2ec2075fb5 not found: ID does not exist" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.910467 4811 scope.go:117] "RemoveContainer" containerID="4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5" Jan 22 09:14:29 crc kubenswrapper[4811]: E0122 09:14:29.910690 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5\": container with ID starting with 4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5 not found: ID does not exist" containerID="4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.910719 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5"} err="failed to get container status \"4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5\": rpc error: code = NotFound desc = could not find container \"4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5\": container with ID starting with 4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5 not found: ID does not exist" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.910742 4811 scope.go:117] "RemoveContainer" containerID="c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451" Jan 22 09:14:29 crc kubenswrapper[4811]: E0122 09:14:29.911100 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451\": container with ID starting with c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451 not found: ID does not exist" containerID="c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.911120 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451"} err="failed to get container status \"c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451\": rpc error: code = NotFound desc = could not find container \"c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451\": container with ID starting with c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451 not found: ID does not exist" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.911137 4811 scope.go:117] "RemoveContainer" containerID="8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799" Jan 22 09:14:29 crc kubenswrapper[4811]: E0122 09:14:29.911313 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799\": container with ID starting with 8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799 not found: ID does not exist" containerID="8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.911332 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799"} err="failed to get container status \"8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799\": rpc error: code = NotFound desc = could not find container \"8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799\": container with ID starting with 8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799 not found: ID does not exist" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.911345 4811 scope.go:117] "RemoveContainer" containerID="e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84" Jan 22 09:14:29 crc kubenswrapper[4811]: E0122 09:14:29.911588 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84\": container with ID starting with e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84 not found: ID does not exist" containerID="e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.911652 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84"} err="failed to get container status \"e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84\": rpc error: code = NotFound desc = could not find container \"e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84\": container with ID starting with e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84 not found: ID does not exist" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.911682 4811 scope.go:117] "RemoveContainer" containerID="b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad" Jan 22 09:14:29 crc kubenswrapper[4811]: E0122 09:14:29.911880 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad\": container with ID starting with b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad not found: ID does not exist" containerID="b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.911905 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad"} err="failed to get container status \"b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad\": rpc error: code = NotFound desc = could not find container \"b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad\": container with ID starting with b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad not found: ID does not exist" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.911918 4811 scope.go:117] "RemoveContainer" containerID="9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597" Jan 22 09:14:29 crc 
kubenswrapper[4811]: E0122 09:14:29.912130 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597\": container with ID starting with 9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597 not found: ID does not exist" containerID="9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.912152 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597"} err="failed to get container status \"9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597\": rpc error: code = NotFound desc = could not find container \"9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597\": container with ID starting with 9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597 not found: ID does not exist" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.912168 4811 scope.go:117] "RemoveContainer" containerID="55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2" Jan 22 09:14:29 crc kubenswrapper[4811]: E0122 09:14:29.912414 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2\": container with ID starting with 55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2 not found: ID does not exist" containerID="55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.912439 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2"} err="failed to get container status \"55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2\": rpc error: code = NotFound desc = could not find container \"55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2\": container with ID starting with 55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2 not found: ID does not exist" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.912455 4811 scope.go:117] "RemoveContainer" containerID="523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a" Jan 22 09:14:29 crc kubenswrapper[4811]: E0122 09:14:29.912679 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\": container with ID starting with 523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a not found: ID does not exist" containerID="523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.912695 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a"} err="failed to get container status \"523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\": rpc error: code = NotFound desc = could not find container \"523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\": container with ID starting with 523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a not found: ID does not exist" Jan 22 09:14:29 crc kubenswrapper[4811]: 
I0122 09:14:29.912708 4811 scope.go:117] "RemoveContainer" containerID="5a45c9cc109c4284fd86db1bd72102f9fc2778b85d49eb651d9debff48557f02" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.912992 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a45c9cc109c4284fd86db1bd72102f9fc2778b85d49eb651d9debff48557f02"} err="failed to get container status \"5a45c9cc109c4284fd86db1bd72102f9fc2778b85d49eb651d9debff48557f02\": rpc error: code = NotFound desc = could not find container \"5a45c9cc109c4284fd86db1bd72102f9fc2778b85d49eb651d9debff48557f02\": container with ID starting with 5a45c9cc109c4284fd86db1bd72102f9fc2778b85d49eb651d9debff48557f02 not found: ID does not exist" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.913011 4811 scope.go:117] "RemoveContainer" containerID="c4ae4bac915c177448079f710c8f3f5cb8ee9368318699d1500a8e2ec2075fb5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.913273 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4ae4bac915c177448079f710c8f3f5cb8ee9368318699d1500a8e2ec2075fb5"} err="failed to get container status \"c4ae4bac915c177448079f710c8f3f5cb8ee9368318699d1500a8e2ec2075fb5\": rpc error: code = NotFound desc = could not find container \"c4ae4bac915c177448079f710c8f3f5cb8ee9368318699d1500a8e2ec2075fb5\": container with ID starting with c4ae4bac915c177448079f710c8f3f5cb8ee9368318699d1500a8e2ec2075fb5 not found: ID does not exist" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.913295 4811 scope.go:117] "RemoveContainer" containerID="4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.913676 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5"} err="failed to get container status \"4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5\": rpc error: code = NotFound desc = could not find container \"4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5\": container with ID starting with 4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5 not found: ID does not exist" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.913697 4811 scope.go:117] "RemoveContainer" containerID="c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.913906 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451"} err="failed to get container status \"c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451\": rpc error: code = NotFound desc = could not find container \"c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451\": container with ID starting with c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451 not found: ID does not exist" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.913927 4811 scope.go:117] "RemoveContainer" containerID="8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.914218 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799"} err="failed to get container status 
\"8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799\": rpc error: code = NotFound desc = could not find container \"8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799\": container with ID starting with 8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799 not found: ID does not exist" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.914237 4811 scope.go:117] "RemoveContainer" containerID="e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.914516 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84"} err="failed to get container status \"e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84\": rpc error: code = NotFound desc = could not find container \"e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84\": container with ID starting with e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84 not found: ID does not exist" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.914538 4811 scope.go:117] "RemoveContainer" containerID="b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.915492 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad"} err="failed to get container status \"b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad\": rpc error: code = NotFound desc = could not find container \"b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad\": container with ID starting with b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad not found: ID does not exist" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.915516 4811 scope.go:117] "RemoveContainer" containerID="9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.915729 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597"} err="failed to get container status \"9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597\": rpc error: code = NotFound desc = could not find container \"9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597\": container with ID starting with 9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597 not found: ID does not exist" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.915747 4811 scope.go:117] "RemoveContainer" containerID="55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.915967 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2"} err="failed to get container status \"55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2\": rpc error: code = NotFound desc = could not find container \"55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2\": container with ID starting with 55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2 not found: ID does not exist" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.915985 4811 scope.go:117] "RemoveContainer" 
containerID="523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.916403 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a"} err="failed to get container status \"523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\": rpc error: code = NotFound desc = could not find container \"523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\": container with ID starting with 523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a not found: ID does not exist" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.916422 4811 scope.go:117] "RemoveContainer" containerID="5a45c9cc109c4284fd86db1bd72102f9fc2778b85d49eb651d9debff48557f02" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.917271 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a45c9cc109c4284fd86db1bd72102f9fc2778b85d49eb651d9debff48557f02"} err="failed to get container status \"5a45c9cc109c4284fd86db1bd72102f9fc2778b85d49eb651d9debff48557f02\": rpc error: code = NotFound desc = could not find container \"5a45c9cc109c4284fd86db1bd72102f9fc2778b85d49eb651d9debff48557f02\": container with ID starting with 5a45c9cc109c4284fd86db1bd72102f9fc2778b85d49eb651d9debff48557f02 not found: ID does not exist" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.917290 4811 scope.go:117] "RemoveContainer" containerID="c4ae4bac915c177448079f710c8f3f5cb8ee9368318699d1500a8e2ec2075fb5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.917742 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4ae4bac915c177448079f710c8f3f5cb8ee9368318699d1500a8e2ec2075fb5"} err="failed to get container status \"c4ae4bac915c177448079f710c8f3f5cb8ee9368318699d1500a8e2ec2075fb5\": rpc error: code = NotFound desc = could not find container \"c4ae4bac915c177448079f710c8f3f5cb8ee9368318699d1500a8e2ec2075fb5\": container with ID starting with c4ae4bac915c177448079f710c8f3f5cb8ee9368318699d1500a8e2ec2075fb5 not found: ID does not exist" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.917763 4811 scope.go:117] "RemoveContainer" containerID="4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.918004 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5"} err="failed to get container status \"4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5\": rpc error: code = NotFound desc = could not find container \"4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5\": container with ID starting with 4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5 not found: ID does not exist" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.918028 4811 scope.go:117] "RemoveContainer" containerID="c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.918236 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451"} err="failed to get container status \"c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451\": rpc error: code = NotFound desc = could not find 
container \"c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451\": container with ID starting with c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451 not found: ID does not exist" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.918254 4811 scope.go:117] "RemoveContainer" containerID="8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.918431 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799"} err="failed to get container status \"8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799\": rpc error: code = NotFound desc = could not find container \"8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799\": container with ID starting with 8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799 not found: ID does not exist" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.918452 4811 scope.go:117] "RemoveContainer" containerID="e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.918766 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84"} err="failed to get container status \"e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84\": rpc error: code = NotFound desc = could not find container \"e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84\": container with ID starting with e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84 not found: ID does not exist" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.918788 4811 scope.go:117] "RemoveContainer" containerID="b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.919037 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad"} err="failed to get container status \"b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad\": rpc error: code = NotFound desc = could not find container \"b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad\": container with ID starting with b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad not found: ID does not exist" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.919067 4811 scope.go:117] "RemoveContainer" containerID="9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.919258 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597"} err="failed to get container status \"9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597\": rpc error: code = NotFound desc = could not find container \"9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597\": container with ID starting with 9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597 not found: ID does not exist" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.919274 4811 scope.go:117] "RemoveContainer" containerID="55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.919548 4811 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2"} err="failed to get container status \"55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2\": rpc error: code = NotFound desc = could not find container \"55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2\": container with ID starting with 55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2 not found: ID does not exist" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.919585 4811 scope.go:117] "RemoveContainer" containerID="523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.919898 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a"} err="failed to get container status \"523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\": rpc error: code = NotFound desc = could not find container \"523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\": container with ID starting with 523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a not found: ID does not exist" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.919918 4811 scope.go:117] "RemoveContainer" containerID="5a45c9cc109c4284fd86db1bd72102f9fc2778b85d49eb651d9debff48557f02" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.920160 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a45c9cc109c4284fd86db1bd72102f9fc2778b85d49eb651d9debff48557f02"} err="failed to get container status \"5a45c9cc109c4284fd86db1bd72102f9fc2778b85d49eb651d9debff48557f02\": rpc error: code = NotFound desc = could not find container \"5a45c9cc109c4284fd86db1bd72102f9fc2778b85d49eb651d9debff48557f02\": container with ID starting with 5a45c9cc109c4284fd86db1bd72102f9fc2778b85d49eb651d9debff48557f02 not found: ID does not exist" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.920181 4811 scope.go:117] "RemoveContainer" containerID="c4ae4bac915c177448079f710c8f3f5cb8ee9368318699d1500a8e2ec2075fb5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.920377 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4ae4bac915c177448079f710c8f3f5cb8ee9368318699d1500a8e2ec2075fb5"} err="failed to get container status \"c4ae4bac915c177448079f710c8f3f5cb8ee9368318699d1500a8e2ec2075fb5\": rpc error: code = NotFound desc = could not find container \"c4ae4bac915c177448079f710c8f3f5cb8ee9368318699d1500a8e2ec2075fb5\": container with ID starting with c4ae4bac915c177448079f710c8f3f5cb8ee9368318699d1500a8e2ec2075fb5 not found: ID does not exist" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.920395 4811 scope.go:117] "RemoveContainer" containerID="4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.920658 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5"} err="failed to get container status \"4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5\": rpc error: code = NotFound desc = could not find container \"4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5\": container with ID starting with 
4c741f2990f908528ccab348e1eb02e217a9b659becc7a41c4eb8da28bfd42e5 not found: ID does not exist" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.920686 4811 scope.go:117] "RemoveContainer" containerID="c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.920931 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451"} err="failed to get container status \"c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451\": rpc error: code = NotFound desc = could not find container \"c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451\": container with ID starting with c261cdea10a90cb9b06369f3b725b7778d070fb329e81ab44552446e12662451 not found: ID does not exist" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.920949 4811 scope.go:117] "RemoveContainer" containerID="8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.921163 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799"} err="failed to get container status \"8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799\": rpc error: code = NotFound desc = could not find container \"8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799\": container with ID starting with 8f0736426a25dadac460e5ed467f3082cd8c4806190c053022d6f360c4290799 not found: ID does not exist" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.921180 4811 scope.go:117] "RemoveContainer" containerID="e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.921404 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84"} err="failed to get container status \"e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84\": rpc error: code = NotFound desc = could not find container \"e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84\": container with ID starting with e071df53e8cd1652f507eacae3e2c54ac75e76a6b9a426620c970762e81ecd84 not found: ID does not exist" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.921471 4811 scope.go:117] "RemoveContainer" containerID="b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.921816 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad"} err="failed to get container status \"b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad\": rpc error: code = NotFound desc = could not find container \"b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad\": container with ID starting with b5f0ec565b046b22d600103b58946f6f520873f1686cc3042bf7ddc7f24bd4ad not found: ID does not exist" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.921840 4811 scope.go:117] "RemoveContainer" containerID="9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.922055 4811 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597"} err="failed to get container status \"9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597\": rpc error: code = NotFound desc = could not find container \"9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597\": container with ID starting with 9197a8c82d49fb9b4b1c97827ee95582ca8148656abb88904e06abb25c135597 not found: ID does not exist" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.922068 4811 scope.go:117] "RemoveContainer" containerID="55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.922342 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2"} err="failed to get container status \"55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2\": rpc error: code = NotFound desc = could not find container \"55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2\": container with ID starting with 55f26cdf9d61842467eb1367c6ec156b4de9c40a5b05fe60d3d100df5136a8a2 not found: ID does not exist" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.922361 4811 scope.go:117] "RemoveContainer" containerID="523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.922674 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a"} err="failed to get container status \"523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\": rpc error: code = NotFound desc = could not find container \"523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a\": container with ID starting with 523ab909aa2cbac6043b3c572a17c40733e9b2155d4017253d68998cb85d3e7a not found: ID does not exist" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.922700 4811 scope.go:117] "RemoveContainer" containerID="5a45c9cc109c4284fd86db1bd72102f9fc2778b85d49eb651d9debff48557f02" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.922914 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a45c9cc109c4284fd86db1bd72102f9fc2778b85d49eb651d9debff48557f02"} err="failed to get container status \"5a45c9cc109c4284fd86db1bd72102f9fc2778b85d49eb651d9debff48557f02\": rpc error: code = NotFound desc = could not find container \"5a45c9cc109c4284fd86db1bd72102f9fc2778b85d49eb651d9debff48557f02\": container with ID starting with 5a45c9cc109c4284fd86db1bd72102f9fc2778b85d49eb651d9debff48557f02 not found: ID does not exist" Jan 22 09:14:29 crc kubenswrapper[4811]: I0122 09:14:29.997860 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cd0f0db-de53-47c0-9b45-2ce8b37392a3" path="/var/lib/kubelet/pods/1cd0f0db-de53-47c0-9b45-2ce8b37392a3/volumes" Jan 22 09:14:30 crc kubenswrapper[4811]: I0122 09:14:30.742004 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" event={"ID":"0188bfdb-1df0-4a96-a2bc-152f80c7f326","Type":"ContainerStarted","Data":"ff04cc322212252ba8f9ff599d45561397925606643569df4d2e0e352198c072"} Jan 22 09:14:30 crc kubenswrapper[4811]: I0122 09:14:30.742065 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" 
event={"ID":"0188bfdb-1df0-4a96-a2bc-152f80c7f326","Type":"ContainerStarted","Data":"f76800aa3074b0c11703225acc0f88950b664d166099759aad60be2433626a36"} Jan 22 09:14:30 crc kubenswrapper[4811]: I0122 09:14:30.742081 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" event={"ID":"0188bfdb-1df0-4a96-a2bc-152f80c7f326","Type":"ContainerStarted","Data":"2422e363822a97a6a0a5ef3a8f3ea6c5e0b0b81bf9ca7fccbd35ee743dd82f60"} Jan 22 09:14:30 crc kubenswrapper[4811]: I0122 09:14:30.742090 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" event={"ID":"0188bfdb-1df0-4a96-a2bc-152f80c7f326","Type":"ContainerStarted","Data":"6272fbac1db9fefff879ae2b0fd9f5f737d07841960e30cf7d4c71f65bc591a2"} Jan 22 09:14:30 crc kubenswrapper[4811]: I0122 09:14:30.742097 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" event={"ID":"0188bfdb-1df0-4a96-a2bc-152f80c7f326","Type":"ContainerStarted","Data":"7d25f2a8c312ac54e76189cde2b2dc80b02e9eef95de9e637d6e605f7cd0a612"} Jan 22 09:14:30 crc kubenswrapper[4811]: I0122 09:14:30.742105 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" event={"ID":"0188bfdb-1df0-4a96-a2bc-152f80c7f326","Type":"ContainerStarted","Data":"aebffa6d7f802dd174c7c6af4a9753f3a03a05212daea8bee17ef42385c0a774"} Jan 22 09:14:32 crc kubenswrapper[4811]: I0122 09:14:32.756860 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" event={"ID":"0188bfdb-1df0-4a96-a2bc-152f80c7f326","Type":"ContainerStarted","Data":"5f01de799205aadfccd50a30c0ff1aba6ea089d00047740820b8a7ca06c498db"} Jan 22 09:14:34 crc kubenswrapper[4811]: I0122 09:14:34.771414 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" event={"ID":"0188bfdb-1df0-4a96-a2bc-152f80c7f326","Type":"ContainerStarted","Data":"f9264cb78f32b7aa1837abb676c2941419530a64214da22ca4ba423c820b084c"} Jan 22 09:14:34 crc kubenswrapper[4811]: I0122 09:14:34.771997 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:34 crc kubenswrapper[4811]: I0122 09:14:34.772010 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:34 crc kubenswrapper[4811]: I0122 09:14:34.772018 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:34 crc kubenswrapper[4811]: I0122 09:14:34.795793 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" podStartSLOduration=6.795775883 podStartE2EDuration="6.795775883s" podCreationTimestamp="2026-01-22 09:14:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:14:34.793158907 +0000 UTC m=+519.115346030" watchObservedRunningTime="2026-01-22 09:14:34.795775883 +0000 UTC m=+519.117963006" Jan 22 09:14:34 crc kubenswrapper[4811]: I0122 09:14:34.798299 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:34 crc kubenswrapper[4811]: I0122 09:14:34.799180 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:35 crc kubenswrapper[4811]: I0122 09:14:35.502284 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:14:35 crc kubenswrapper[4811]: I0122 09:14:35.502349 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:14:42 crc kubenswrapper[4811]: I0122 09:14:42.991784 4811 scope.go:117] "RemoveContainer" containerID="387bd2033a6e44d48e86120e4dee106a7b97b0d54769314cb0c0424e36c0d88e" Jan 22 09:14:42 crc kubenswrapper[4811]: E0122 09:14:42.992518 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-kfqgt_openshift-multus(f2555861-d1bb-4f21-be4a-165ed9212932)\"" pod="openshift-multus/multus-kfqgt" podUID="f2555861-d1bb-4f21-be4a-165ed9212932" Jan 22 09:14:46 crc kubenswrapper[4811]: I0122 09:14:46.452822 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc"] Jan 22 09:14:46 crc kubenswrapper[4811]: I0122 09:14:46.454884 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc" Jan 22 09:14:46 crc kubenswrapper[4811]: I0122 09:14:46.456604 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 22 09:14:46 crc kubenswrapper[4811]: I0122 09:14:46.464385 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc"] Jan 22 09:14:46 crc kubenswrapper[4811]: I0122 09:14:46.564352 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/df76885d-11e8-4fce-a69a-dee26f62c562-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc\" (UID: \"df76885d-11e8-4fce-a69a-dee26f62c562\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc" Jan 22 09:14:46 crc kubenswrapper[4811]: I0122 09:14:46.564405 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smj8s\" (UniqueName: \"kubernetes.io/projected/df76885d-11e8-4fce-a69a-dee26f62c562-kube-api-access-smj8s\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc\" (UID: \"df76885d-11e8-4fce-a69a-dee26f62c562\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc" Jan 22 09:14:46 crc kubenswrapper[4811]: I0122 09:14:46.564448 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/df76885d-11e8-4fce-a69a-dee26f62c562-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc\" (UID: 
\"df76885d-11e8-4fce-a69a-dee26f62c562\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc" Jan 22 09:14:46 crc kubenswrapper[4811]: I0122 09:14:46.666195 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/df76885d-11e8-4fce-a69a-dee26f62c562-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc\" (UID: \"df76885d-11e8-4fce-a69a-dee26f62c562\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc" Jan 22 09:14:46 crc kubenswrapper[4811]: I0122 09:14:46.666254 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smj8s\" (UniqueName: \"kubernetes.io/projected/df76885d-11e8-4fce-a69a-dee26f62c562-kube-api-access-smj8s\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc\" (UID: \"df76885d-11e8-4fce-a69a-dee26f62c562\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc" Jan 22 09:14:46 crc kubenswrapper[4811]: I0122 09:14:46.666327 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/df76885d-11e8-4fce-a69a-dee26f62c562-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc\" (UID: \"df76885d-11e8-4fce-a69a-dee26f62c562\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc" Jan 22 09:14:46 crc kubenswrapper[4811]: I0122 09:14:46.666707 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/df76885d-11e8-4fce-a69a-dee26f62c562-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc\" (UID: \"df76885d-11e8-4fce-a69a-dee26f62c562\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc" Jan 22 09:14:46 crc kubenswrapper[4811]: I0122 09:14:46.666754 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/df76885d-11e8-4fce-a69a-dee26f62c562-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc\" (UID: \"df76885d-11e8-4fce-a69a-dee26f62c562\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc" Jan 22 09:14:46 crc kubenswrapper[4811]: I0122 09:14:46.693183 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smj8s\" (UniqueName: \"kubernetes.io/projected/df76885d-11e8-4fce-a69a-dee26f62c562-kube-api-access-smj8s\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc\" (UID: \"df76885d-11e8-4fce-a69a-dee26f62c562\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc" Jan 22 09:14:46 crc kubenswrapper[4811]: I0122 09:14:46.769429 4811 util.go:30] "No sandbox for pod can be found. 
Jan 22 09:14:46 crc kubenswrapper[4811]: I0122 09:14:46.769429 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc"
Jan 22 09:14:46 crc kubenswrapper[4811]: E0122 09:14:46.790195 4811 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc_openshift-marketplace_df76885d-11e8-4fce-a69a-dee26f62c562_0(ced91a046a1d778337352820db009fe0030686f130fbe472ef8e4b1c74d77809): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 22 09:14:46 crc kubenswrapper[4811]: E0122 09:14:46.790265 4811 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc_openshift-marketplace_df76885d-11e8-4fce-a69a-dee26f62c562_0(ced91a046a1d778337352820db009fe0030686f130fbe472ef8e4b1c74d77809): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc"
Jan 22 09:14:46 crc kubenswrapper[4811]: E0122 09:14:46.790288 4811 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc_openshift-marketplace_df76885d-11e8-4fce-a69a-dee26f62c562_0(ced91a046a1d778337352820db009fe0030686f130fbe472ef8e4b1c74d77809): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc"
Jan 22 09:14:46 crc kubenswrapper[4811]: E0122 09:14:46.790334 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc_openshift-marketplace(df76885d-11e8-4fce-a69a-dee26f62c562)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc_openshift-marketplace(df76885d-11e8-4fce-a69a-dee26f62c562)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc_openshift-marketplace_df76885d-11e8-4fce-a69a-dee26f62c562_0(ced91a046a1d778337352820db009fe0030686f130fbe472ef8e4b1c74d77809): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc" podUID="df76885d-11e8-4fce-a69a-dee26f62c562"
Jan 22 09:14:46 crc kubenswrapper[4811]: I0122 09:14:46.830531 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc"
Jan 22 09:14:46 crc kubenswrapper[4811]: I0122 09:14:46.830940 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc" Jan 22 09:14:46 crc kubenswrapper[4811]: E0122 09:14:46.850381 4811 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc_openshift-marketplace_df76885d-11e8-4fce-a69a-dee26f62c562_0(3fbe3e74c7996d6b4377ae09434e93066f21ebd9175aa36781670d9cc84afeff): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 22 09:14:46 crc kubenswrapper[4811]: E0122 09:14:46.850446 4811 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc_openshift-marketplace_df76885d-11e8-4fce-a69a-dee26f62c562_0(3fbe3e74c7996d6b4377ae09434e93066f21ebd9175aa36781670d9cc84afeff): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc" Jan 22 09:14:46 crc kubenswrapper[4811]: E0122 09:14:46.850475 4811 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc_openshift-marketplace_df76885d-11e8-4fce-a69a-dee26f62c562_0(3fbe3e74c7996d6b4377ae09434e93066f21ebd9175aa36781670d9cc84afeff): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc" Jan 22 09:14:46 crc kubenswrapper[4811]: E0122 09:14:46.850525 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc_openshift-marketplace(df76885d-11e8-4fce-a69a-dee26f62c562)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc_openshift-marketplace(df76885d-11e8-4fce-a69a-dee26f62c562)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc_openshift-marketplace_df76885d-11e8-4fce-a69a-dee26f62c562_0(3fbe3e74c7996d6b4377ae09434e93066f21ebd9175aa36781670d9cc84afeff): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc" podUID="df76885d-11e8-4fce-a69a-dee26f62c562" Jan 22 09:14:55 crc kubenswrapper[4811]: I0122 09:14:55.994161 4811 scope.go:117] "RemoveContainer" containerID="387bd2033a6e44d48e86120e4dee106a7b97b0d54769314cb0c0424e36c0d88e" Jan 22 09:14:56 crc kubenswrapper[4811]: I0122 09:14:56.876171 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kfqgt_f2555861-d1bb-4f21-be4a-165ed9212932/kube-multus/2.log" Jan 22 09:14:56 crc kubenswrapper[4811]: I0122 09:14:56.876474 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kfqgt" event={"ID":"f2555861-d1bb-4f21-be4a-165ed9212932","Type":"ContainerStarted","Data":"103c16fd7da0f8e49997fe7a5fa06c22a360541b2dfea2eb0eb6b689c22affd5"} Jan 22 09:14:59 crc kubenswrapper[4811]: I0122 09:14:59.315270 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gc7b5" Jan 22 09:14:59 crc kubenswrapper[4811]: I0122 09:14:59.991476 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc" Jan 22 09:14:59 crc kubenswrapper[4811]: I0122 09:14:59.992108 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc" Jan 22 09:15:00 crc kubenswrapper[4811]: I0122 09:15:00.134372 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc"] Jan 22 09:15:00 crc kubenswrapper[4811]: I0122 09:15:00.150296 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484555-ggvph"] Jan 22 09:15:00 crc kubenswrapper[4811]: I0122 09:15:00.151402 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484555-ggvph" Jan 22 09:15:00 crc kubenswrapper[4811]: W0122 09:15:00.154461 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf76885d_11e8_4fce_a69a_dee26f62c562.slice/crio-866ce6762065cb42fd4d82fef270c8b86383d439e582e7db36ba1c877ac5386d WatchSource:0}: Error finding container 866ce6762065cb42fd4d82fef270c8b86383d439e582e7db36ba1c877ac5386d: Status 404 returned error can't find the container with id 866ce6762065cb42fd4d82fef270c8b86383d439e582e7db36ba1c877ac5386d Jan 22 09:15:00 crc kubenswrapper[4811]: I0122 09:15:00.157367 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 22 09:15:00 crc kubenswrapper[4811]: I0122 09:15:00.160223 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 22 09:15:00 crc kubenswrapper[4811]: I0122 09:15:00.172810 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484555-ggvph"] Jan 22 09:15:00 crc kubenswrapper[4811]: I0122 09:15:00.193021 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a17cbe8b-d81b-4480-ae34-a9467583c105-secret-volume\") pod \"collect-profiles-29484555-ggvph\" (UID: \"a17cbe8b-d81b-4480-ae34-a9467583c105\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484555-ggvph" Jan 22 09:15:00 crc kubenswrapper[4811]: I0122 09:15:00.193165 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hfrn\" (UniqueName: \"kubernetes.io/projected/a17cbe8b-d81b-4480-ae34-a9467583c105-kube-api-access-6hfrn\") pod \"collect-profiles-29484555-ggvph\" (UID: \"a17cbe8b-d81b-4480-ae34-a9467583c105\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484555-ggvph" Jan 22 09:15:00 crc kubenswrapper[4811]: I0122 09:15:00.193257 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a17cbe8b-d81b-4480-ae34-a9467583c105-config-volume\") pod \"collect-profiles-29484555-ggvph\" (UID: \"a17cbe8b-d81b-4480-ae34-a9467583c105\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484555-ggvph" Jan 22 09:15:00 crc kubenswrapper[4811]: I0122 09:15:00.294343 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hfrn\" (UniqueName: \"kubernetes.io/projected/a17cbe8b-d81b-4480-ae34-a9467583c105-kube-api-access-6hfrn\") pod \"collect-profiles-29484555-ggvph\" (UID: \"a17cbe8b-d81b-4480-ae34-a9467583c105\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484555-ggvph" Jan 22 09:15:00 crc kubenswrapper[4811]: I0122 09:15:00.294395 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a17cbe8b-d81b-4480-ae34-a9467583c105-config-volume\") pod \"collect-profiles-29484555-ggvph\" (UID: \"a17cbe8b-d81b-4480-ae34-a9467583c105\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484555-ggvph" Jan 22 09:15:00 crc kubenswrapper[4811]: I0122 09:15:00.294477 4811 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a17cbe8b-d81b-4480-ae34-a9467583c105-secret-volume\") pod \"collect-profiles-29484555-ggvph\" (UID: \"a17cbe8b-d81b-4480-ae34-a9467583c105\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484555-ggvph" Jan 22 09:15:00 crc kubenswrapper[4811]: I0122 09:15:00.295307 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a17cbe8b-d81b-4480-ae34-a9467583c105-config-volume\") pod \"collect-profiles-29484555-ggvph\" (UID: \"a17cbe8b-d81b-4480-ae34-a9467583c105\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484555-ggvph" Jan 22 09:15:00 crc kubenswrapper[4811]: I0122 09:15:00.299731 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a17cbe8b-d81b-4480-ae34-a9467583c105-secret-volume\") pod \"collect-profiles-29484555-ggvph\" (UID: \"a17cbe8b-d81b-4480-ae34-a9467583c105\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484555-ggvph" Jan 22 09:15:00 crc kubenswrapper[4811]: I0122 09:15:00.308675 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hfrn\" (UniqueName: \"kubernetes.io/projected/a17cbe8b-d81b-4480-ae34-a9467583c105-kube-api-access-6hfrn\") pod \"collect-profiles-29484555-ggvph\" (UID: \"a17cbe8b-d81b-4480-ae34-a9467583c105\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484555-ggvph" Jan 22 09:15:00 crc kubenswrapper[4811]: I0122 09:15:00.476205 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484555-ggvph" Jan 22 09:15:00 crc kubenswrapper[4811]: I0122 09:15:00.807808 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484555-ggvph"] Jan 22 09:15:00 crc kubenswrapper[4811]: W0122 09:15:00.811411 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda17cbe8b_d81b_4480_ae34_a9467583c105.slice/crio-55deca2353e917f730ad822d64ae3c9d08f1f1fdf335d91981eb4da668b63574 WatchSource:0}: Error finding container 55deca2353e917f730ad822d64ae3c9d08f1f1fdf335d91981eb4da668b63574: Status 404 returned error can't find the container with id 55deca2353e917f730ad822d64ae3c9d08f1f1fdf335d91981eb4da668b63574 Jan 22 09:15:00 crc kubenswrapper[4811]: I0122 09:15:00.893291 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484555-ggvph" event={"ID":"a17cbe8b-d81b-4480-ae34-a9467583c105","Type":"ContainerStarted","Data":"cb012ca0648dd3e27c8806e2210b67d3c6a4841a15fd17948d36677964180120"} Jan 22 09:15:00 crc kubenswrapper[4811]: I0122 09:15:00.893337 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484555-ggvph" event={"ID":"a17cbe8b-d81b-4480-ae34-a9467583c105","Type":"ContainerStarted","Data":"55deca2353e917f730ad822d64ae3c9d08f1f1fdf335d91981eb4da668b63574"} Jan 22 09:15:00 crc kubenswrapper[4811]: I0122 09:15:00.894737 4811 generic.go:334] "Generic (PLEG): container finished" podID="df76885d-11e8-4fce-a69a-dee26f62c562" containerID="6ca8e3e60679cc2ad9ba3f3271ebdcd74c98282b14fc20ad2c46b86254ceb4d4" exitCode=0 Jan 22 09:15:00 crc kubenswrapper[4811]: I0122 09:15:00.894802 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc" event={"ID":"df76885d-11e8-4fce-a69a-dee26f62c562","Type":"ContainerDied","Data":"6ca8e3e60679cc2ad9ba3f3271ebdcd74c98282b14fc20ad2c46b86254ceb4d4"} Jan 22 09:15:00 crc kubenswrapper[4811]: I0122 09:15:00.894830 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc" event={"ID":"df76885d-11e8-4fce-a69a-dee26f62c562","Type":"ContainerStarted","Data":"866ce6762065cb42fd4d82fef270c8b86383d439e582e7db36ba1c877ac5386d"} Jan 22 09:15:00 crc kubenswrapper[4811]: I0122 09:15:00.908437 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29484555-ggvph" podStartSLOduration=0.908420658 podStartE2EDuration="908.420658ms" podCreationTimestamp="2026-01-22 09:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:15:00.905014969 +0000 UTC m=+545.227202092" watchObservedRunningTime="2026-01-22 09:15:00.908420658 +0000 UTC m=+545.230607782" Jan 22 09:15:01 crc kubenswrapper[4811]: I0122 09:15:01.900782 4811 generic.go:334] "Generic (PLEG): container finished" podID="a17cbe8b-d81b-4480-ae34-a9467583c105" containerID="cb012ca0648dd3e27c8806e2210b67d3c6a4841a15fd17948d36677964180120" exitCode=0 Jan 22 09:15:01 crc kubenswrapper[4811]: I0122 09:15:01.900845 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484555-ggvph" event={"ID":"a17cbe8b-d81b-4480-ae34-a9467583c105","Type":"ContainerDied","Data":"cb012ca0648dd3e27c8806e2210b67d3c6a4841a15fd17948d36677964180120"} Jan 22 09:15:02 crc kubenswrapper[4811]: I0122 09:15:02.906800 4811 generic.go:334] "Generic (PLEG): container finished" podID="df76885d-11e8-4fce-a69a-dee26f62c562" containerID="7a18b3a37948c0ef6738af1750112418c7a1508c2dae0f67a568a5ddb056aa11" exitCode=0 Jan 22 09:15:02 crc kubenswrapper[4811]: I0122 09:15:02.907004 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc" event={"ID":"df76885d-11e8-4fce-a69a-dee26f62c562","Type":"ContainerDied","Data":"7a18b3a37948c0ef6738af1750112418c7a1508c2dae0f67a568a5ddb056aa11"} Jan 22 09:15:03 crc kubenswrapper[4811]: I0122 09:15:03.118844 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484555-ggvph" Jan 22 09:15:03 crc kubenswrapper[4811]: I0122 09:15:03.222307 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hfrn\" (UniqueName: \"kubernetes.io/projected/a17cbe8b-d81b-4480-ae34-a9467583c105-kube-api-access-6hfrn\") pod \"a17cbe8b-d81b-4480-ae34-a9467583c105\" (UID: \"a17cbe8b-d81b-4480-ae34-a9467583c105\") " Jan 22 09:15:03 crc kubenswrapper[4811]: I0122 09:15:03.222386 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a17cbe8b-d81b-4480-ae34-a9467583c105-config-volume\") pod \"a17cbe8b-d81b-4480-ae34-a9467583c105\" (UID: \"a17cbe8b-d81b-4480-ae34-a9467583c105\") " Jan 22 09:15:03 crc kubenswrapper[4811]: I0122 09:15:03.222483 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a17cbe8b-d81b-4480-ae34-a9467583c105-secret-volume\") pod \"a17cbe8b-d81b-4480-ae34-a9467583c105\" (UID: \"a17cbe8b-d81b-4480-ae34-a9467583c105\") " Jan 22 09:15:03 crc kubenswrapper[4811]: I0122 09:15:03.223089 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a17cbe8b-d81b-4480-ae34-a9467583c105-config-volume" (OuterVolumeSpecName: "config-volume") pod "a17cbe8b-d81b-4480-ae34-a9467583c105" (UID: "a17cbe8b-d81b-4480-ae34-a9467583c105"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:15:03 crc kubenswrapper[4811]: I0122 09:15:03.227493 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a17cbe8b-d81b-4480-ae34-a9467583c105-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a17cbe8b-d81b-4480-ae34-a9467583c105" (UID: "a17cbe8b-d81b-4480-ae34-a9467583c105"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:15:03 crc kubenswrapper[4811]: I0122 09:15:03.227602 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a17cbe8b-d81b-4480-ae34-a9467583c105-kube-api-access-6hfrn" (OuterVolumeSpecName: "kube-api-access-6hfrn") pod "a17cbe8b-d81b-4480-ae34-a9467583c105" (UID: "a17cbe8b-d81b-4480-ae34-a9467583c105"). InnerVolumeSpecName "kube-api-access-6hfrn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:15:03 crc kubenswrapper[4811]: I0122 09:15:03.325188 4811 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a17cbe8b-d81b-4480-ae34-a9467583c105-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 09:15:03 crc kubenswrapper[4811]: I0122 09:15:03.325229 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hfrn\" (UniqueName: \"kubernetes.io/projected/a17cbe8b-d81b-4480-ae34-a9467583c105-kube-api-access-6hfrn\") on node \"crc\" DevicePath \"\"" Jan 22 09:15:03 crc kubenswrapper[4811]: I0122 09:15:03.325241 4811 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a17cbe8b-d81b-4480-ae34-a9467583c105-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 09:15:03 crc kubenswrapper[4811]: I0122 09:15:03.914161 4811 generic.go:334] "Generic (PLEG): container finished" podID="df76885d-11e8-4fce-a69a-dee26f62c562" containerID="e9b159b5b1690bf29e113ad5bf58e7f7e81f5208c3f03994d86025e5a5cb84cf" exitCode=0 Jan 22 09:15:03 crc kubenswrapper[4811]: I0122 09:15:03.914245 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc" event={"ID":"df76885d-11e8-4fce-a69a-dee26f62c562","Type":"ContainerDied","Data":"e9b159b5b1690bf29e113ad5bf58e7f7e81f5208c3f03994d86025e5a5cb84cf"} Jan 22 09:15:03 crc kubenswrapper[4811]: I0122 09:15:03.915749 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484555-ggvph" event={"ID":"a17cbe8b-d81b-4480-ae34-a9467583c105","Type":"ContainerDied","Data":"55deca2353e917f730ad822d64ae3c9d08f1f1fdf335d91981eb4da668b63574"} Jan 22 09:15:03 crc kubenswrapper[4811]: I0122 09:15:03.915788 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55deca2353e917f730ad822d64ae3c9d08f1f1fdf335d91981eb4da668b63574" Jan 22 09:15:03 crc kubenswrapper[4811]: I0122 09:15:03.915804 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484555-ggvph" Jan 22 09:15:05 crc kubenswrapper[4811]: I0122 09:15:05.086975 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc" Jan 22 09:15:05 crc kubenswrapper[4811]: I0122 09:15:05.244783 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/df76885d-11e8-4fce-a69a-dee26f62c562-bundle\") pod \"df76885d-11e8-4fce-a69a-dee26f62c562\" (UID: \"df76885d-11e8-4fce-a69a-dee26f62c562\") " Jan 22 09:15:05 crc kubenswrapper[4811]: I0122 09:15:05.244862 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smj8s\" (UniqueName: \"kubernetes.io/projected/df76885d-11e8-4fce-a69a-dee26f62c562-kube-api-access-smj8s\") pod \"df76885d-11e8-4fce-a69a-dee26f62c562\" (UID: \"df76885d-11e8-4fce-a69a-dee26f62c562\") " Jan 22 09:15:05 crc kubenswrapper[4811]: I0122 09:15:05.244918 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/df76885d-11e8-4fce-a69a-dee26f62c562-util\") pod \"df76885d-11e8-4fce-a69a-dee26f62c562\" (UID: \"df76885d-11e8-4fce-a69a-dee26f62c562\") " Jan 22 09:15:05 crc kubenswrapper[4811]: I0122 09:15:05.245746 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df76885d-11e8-4fce-a69a-dee26f62c562-bundle" (OuterVolumeSpecName: "bundle") pod "df76885d-11e8-4fce-a69a-dee26f62c562" (UID: "df76885d-11e8-4fce-a69a-dee26f62c562"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:15:05 crc kubenswrapper[4811]: I0122 09:15:05.249015 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df76885d-11e8-4fce-a69a-dee26f62c562-kube-api-access-smj8s" (OuterVolumeSpecName: "kube-api-access-smj8s") pod "df76885d-11e8-4fce-a69a-dee26f62c562" (UID: "df76885d-11e8-4fce-a69a-dee26f62c562"). InnerVolumeSpecName "kube-api-access-smj8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:15:05 crc kubenswrapper[4811]: I0122 09:15:05.255049 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df76885d-11e8-4fce-a69a-dee26f62c562-util" (OuterVolumeSpecName: "util") pod "df76885d-11e8-4fce-a69a-dee26f62c562" (UID: "df76885d-11e8-4fce-a69a-dee26f62c562"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:15:05 crc kubenswrapper[4811]: I0122 09:15:05.345759 4811 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/df76885d-11e8-4fce-a69a-dee26f62c562-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:15:05 crc kubenswrapper[4811]: I0122 09:15:05.345955 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smj8s\" (UniqueName: \"kubernetes.io/projected/df76885d-11e8-4fce-a69a-dee26f62c562-kube-api-access-smj8s\") on node \"crc\" DevicePath \"\"" Jan 22 09:15:05 crc kubenswrapper[4811]: I0122 09:15:05.346033 4811 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/df76885d-11e8-4fce-a69a-dee26f62c562-util\") on node \"crc\" DevicePath \"\"" Jan 22 09:15:05 crc kubenswrapper[4811]: I0122 09:15:05.502458 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:15:05 crc kubenswrapper[4811]: I0122 09:15:05.502503 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:15:05 crc kubenswrapper[4811]: I0122 09:15:05.927780 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc" event={"ID":"df76885d-11e8-4fce-a69a-dee26f62c562","Type":"ContainerDied","Data":"866ce6762065cb42fd4d82fef270c8b86383d439e582e7db36ba1c877ac5386d"} Jan 22 09:15:05 crc kubenswrapper[4811]: I0122 09:15:05.928019 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="866ce6762065cb42fd4d82fef270c8b86383d439e582e7db36ba1c877ac5386d" Jan 22 09:15:05 crc kubenswrapper[4811]: I0122 09:15:05.928031 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc" Jan 22 09:15:07 crc kubenswrapper[4811]: I0122 09:15:07.986658 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-v76qn"] Jan 22 09:15:07 crc kubenswrapper[4811]: E0122 09:15:07.986889 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df76885d-11e8-4fce-a69a-dee26f62c562" containerName="extract" Jan 22 09:15:07 crc kubenswrapper[4811]: I0122 09:15:07.986904 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="df76885d-11e8-4fce-a69a-dee26f62c562" containerName="extract" Jan 22 09:15:07 crc kubenswrapper[4811]: E0122 09:15:07.986915 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df76885d-11e8-4fce-a69a-dee26f62c562" containerName="util" Jan 22 09:15:07 crc kubenswrapper[4811]: I0122 09:15:07.986920 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="df76885d-11e8-4fce-a69a-dee26f62c562" containerName="util" Jan 22 09:15:07 crc kubenswrapper[4811]: E0122 09:15:07.986929 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df76885d-11e8-4fce-a69a-dee26f62c562" containerName="pull" Jan 22 09:15:07 crc kubenswrapper[4811]: I0122 09:15:07.986934 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="df76885d-11e8-4fce-a69a-dee26f62c562" containerName="pull" Jan 22 09:15:07 crc kubenswrapper[4811]: E0122 09:15:07.986943 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a17cbe8b-d81b-4480-ae34-a9467583c105" containerName="collect-profiles" Jan 22 09:15:07 crc kubenswrapper[4811]: I0122 09:15:07.986948 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="a17cbe8b-d81b-4480-ae34-a9467583c105" containerName="collect-profiles" Jan 22 09:15:07 crc kubenswrapper[4811]: I0122 09:15:07.987045 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="df76885d-11e8-4fce-a69a-dee26f62c562" containerName="extract" Jan 22 09:15:07 crc kubenswrapper[4811]: I0122 09:15:07.987053 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="a17cbe8b-d81b-4480-ae34-a9467583c105" containerName="collect-profiles" Jan 22 09:15:07 crc kubenswrapper[4811]: I0122 09:15:07.987421 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-v76qn" Jan 22 09:15:07 crc kubenswrapper[4811]: I0122 09:15:07.989970 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 22 09:15:07 crc kubenswrapper[4811]: I0122 09:15:07.990968 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-cb4mk" Jan 22 09:15:07 crc kubenswrapper[4811]: I0122 09:15:07.991325 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 22 09:15:08 crc kubenswrapper[4811]: I0122 09:15:08.008300 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-v76qn"] Jan 22 09:15:08 crc kubenswrapper[4811]: I0122 09:15:08.173771 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzvcj\" (UniqueName: \"kubernetes.io/projected/952cfa08-4a5f-43b8-aa83-58839cc92523-kube-api-access-dzvcj\") pod \"nmstate-operator-646758c888-v76qn\" (UID: \"952cfa08-4a5f-43b8-aa83-58839cc92523\") " pod="openshift-nmstate/nmstate-operator-646758c888-v76qn" Jan 22 09:15:08 crc kubenswrapper[4811]: I0122 09:15:08.275235 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzvcj\" (UniqueName: \"kubernetes.io/projected/952cfa08-4a5f-43b8-aa83-58839cc92523-kube-api-access-dzvcj\") pod \"nmstate-operator-646758c888-v76qn\" (UID: \"952cfa08-4a5f-43b8-aa83-58839cc92523\") " pod="openshift-nmstate/nmstate-operator-646758c888-v76qn" Jan 22 09:15:08 crc kubenswrapper[4811]: I0122 09:15:08.290596 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzvcj\" (UniqueName: \"kubernetes.io/projected/952cfa08-4a5f-43b8-aa83-58839cc92523-kube-api-access-dzvcj\") pod \"nmstate-operator-646758c888-v76qn\" (UID: \"952cfa08-4a5f-43b8-aa83-58839cc92523\") " pod="openshift-nmstate/nmstate-operator-646758c888-v76qn" Jan 22 09:15:08 crc kubenswrapper[4811]: I0122 09:15:08.303005 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-v76qn" Jan 22 09:15:08 crc kubenswrapper[4811]: I0122 09:15:08.445060 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-v76qn"] Jan 22 09:15:08 crc kubenswrapper[4811]: I0122 09:15:08.942848 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-v76qn" event={"ID":"952cfa08-4a5f-43b8-aa83-58839cc92523","Type":"ContainerStarted","Data":"59786e9aa242e1920202b46d259552247da8a12f79b5614eb99a80f7b1f9778c"} Jan 22 09:15:11 crc kubenswrapper[4811]: I0122 09:15:11.958172 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-v76qn" event={"ID":"952cfa08-4a5f-43b8-aa83-58839cc92523","Type":"ContainerStarted","Data":"ada4edf4185c3f81e298dffc9f2566b8c5be27652e2c97885a48278d458abb01"} Jan 22 09:15:11 crc kubenswrapper[4811]: I0122 09:15:11.972870 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-v76qn" podStartSLOduration=2.513175763 podStartE2EDuration="4.972852559s" podCreationTimestamp="2026-01-22 09:15:07 +0000 UTC" firstStartedPulling="2026-01-22 09:15:08.45348935 +0000 UTC m=+552.775676473" lastFinishedPulling="2026-01-22 09:15:10.913166146 +0000 UTC m=+555.235353269" observedRunningTime="2026-01-22 09:15:11.972648885 +0000 UTC m=+556.294836008" watchObservedRunningTime="2026-01-22 09:15:11.972852559 +0000 UTC m=+556.295039681" Jan 22 09:15:16 crc kubenswrapper[4811]: I0122 09:15:16.780335 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-nmrg4"] Jan 22 09:15:16 crc kubenswrapper[4811]: I0122 09:15:16.781384 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-nmrg4" Jan 22 09:15:16 crc kubenswrapper[4811]: I0122 09:15:16.784008 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-9wxc8" Jan 22 09:15:16 crc kubenswrapper[4811]: I0122 09:15:16.797314 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-nmrg4"] Jan 22 09:15:16 crc kubenswrapper[4811]: I0122 09:15:16.802669 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-tnk97"] Jan 22 09:15:16 crc kubenswrapper[4811]: I0122 09:15:16.803376 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tnk97" Jan 22 09:15:16 crc kubenswrapper[4811]: I0122 09:15:16.804877 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 22 09:15:16 crc kubenswrapper[4811]: I0122 09:15:16.839191 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-tnk97"] Jan 22 09:15:16 crc kubenswrapper[4811]: I0122 09:15:16.842357 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-tvjnz"] Jan 22 09:15:16 crc kubenswrapper[4811]: I0122 09:15:16.843160 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-tvjnz" Jan 22 09:15:16 crc kubenswrapper[4811]: I0122 09:15:16.868833 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/74fc22de-195f-452c-b18c-f12c53f2465f-nmstate-lock\") pod \"nmstate-handler-tvjnz\" (UID: \"74fc22de-195f-452c-b18c-f12c53f2465f\") " pod="openshift-nmstate/nmstate-handler-tvjnz" Jan 22 09:15:16 crc kubenswrapper[4811]: I0122 09:15:16.868879 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nwpb\" (UniqueName: \"kubernetes.io/projected/66e8ec28-33fd-440b-9064-dd5c40cf4b61-kube-api-access-7nwpb\") pod \"nmstate-metrics-54757c584b-nmrg4\" (UID: \"66e8ec28-33fd-440b-9064-dd5c40cf4b61\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-nmrg4" Jan 22 09:15:16 crc kubenswrapper[4811]: I0122 09:15:16.868983 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/74fc22de-195f-452c-b18c-f12c53f2465f-ovs-socket\") pod \"nmstate-handler-tvjnz\" (UID: \"74fc22de-195f-452c-b18c-f12c53f2465f\") " pod="openshift-nmstate/nmstate-handler-tvjnz" Jan 22 09:15:16 crc kubenswrapper[4811]: I0122 09:15:16.869038 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9e9c9633-f916-440c-b02c-5bb58eb51e76-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-tnk97\" (UID: \"9e9c9633-f916-440c-b02c-5bb58eb51e76\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tnk97" Jan 22 09:15:16 crc kubenswrapper[4811]: I0122 09:15:16.869082 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp5p8\" (UniqueName: \"kubernetes.io/projected/74fc22de-195f-452c-b18c-f12c53f2465f-kube-api-access-kp5p8\") pod \"nmstate-handler-tvjnz\" (UID: \"74fc22de-195f-452c-b18c-f12c53f2465f\") " pod="openshift-nmstate/nmstate-handler-tvjnz" Jan 22 09:15:16 crc kubenswrapper[4811]: I0122 09:15:16.869121 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6944\" (UniqueName: \"kubernetes.io/projected/9e9c9633-f916-440c-b02c-5bb58eb51e76-kube-api-access-b6944\") pod \"nmstate-webhook-8474b5b9d8-tnk97\" (UID: \"9e9c9633-f916-440c-b02c-5bb58eb51e76\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tnk97" Jan 22 09:15:16 crc kubenswrapper[4811]: I0122 09:15:16.869152 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/74fc22de-195f-452c-b18c-f12c53f2465f-dbus-socket\") pod \"nmstate-handler-tvjnz\" (UID: \"74fc22de-195f-452c-b18c-f12c53f2465f\") " pod="openshift-nmstate/nmstate-handler-tvjnz" Jan 22 09:15:16 crc kubenswrapper[4811]: I0122 09:15:16.922348 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-22qhf"] Jan 22 09:15:16 crc kubenswrapper[4811]: I0122 09:15:16.923033 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-22qhf" Jan 22 09:15:16 crc kubenswrapper[4811]: I0122 09:15:16.925012 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 22 09:15:16 crc kubenswrapper[4811]: I0122 09:15:16.925021 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 22 09:15:16 crc kubenswrapper[4811]: I0122 09:15:16.926246 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-rqqbk" Jan 22 09:15:16 crc kubenswrapper[4811]: I0122 09:15:16.943756 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-22qhf"] Jan 22 09:15:16 crc kubenswrapper[4811]: I0122 09:15:16.969958 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d63fcc2e-ef3c-4a10-9444-43070aa0dc77-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-22qhf\" (UID: \"d63fcc2e-ef3c-4a10-9444-43070aa0dc77\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-22qhf" Jan 22 09:15:16 crc kubenswrapper[4811]: I0122 09:15:16.970006 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kp5p8\" (UniqueName: \"kubernetes.io/projected/74fc22de-195f-452c-b18c-f12c53f2465f-kube-api-access-kp5p8\") pod \"nmstate-handler-tvjnz\" (UID: \"74fc22de-195f-452c-b18c-f12c53f2465f\") " pod="openshift-nmstate/nmstate-handler-tvjnz" Jan 22 09:15:16 crc kubenswrapper[4811]: I0122 09:15:16.970068 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6944\" (UniqueName: \"kubernetes.io/projected/9e9c9633-f916-440c-b02c-5bb58eb51e76-kube-api-access-b6944\") pod \"nmstate-webhook-8474b5b9d8-tnk97\" (UID: \"9e9c9633-f916-440c-b02c-5bb58eb51e76\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tnk97" Jan 22 09:15:16 crc kubenswrapper[4811]: I0122 09:15:16.970099 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/74fc22de-195f-452c-b18c-f12c53f2465f-dbus-socket\") pod \"nmstate-handler-tvjnz\" (UID: \"74fc22de-195f-452c-b18c-f12c53f2465f\") " pod="openshift-nmstate/nmstate-handler-tvjnz" Jan 22 09:15:16 crc kubenswrapper[4811]: I0122 09:15:16.970134 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/74fc22de-195f-452c-b18c-f12c53f2465f-nmstate-lock\") pod \"nmstate-handler-tvjnz\" (UID: \"74fc22de-195f-452c-b18c-f12c53f2465f\") " pod="openshift-nmstate/nmstate-handler-tvjnz" Jan 22 09:15:16 crc kubenswrapper[4811]: I0122 09:15:16.970170 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nwpb\" (UniqueName: \"kubernetes.io/projected/66e8ec28-33fd-440b-9064-dd5c40cf4b61-kube-api-access-7nwpb\") pod \"nmstate-metrics-54757c584b-nmrg4\" (UID: \"66e8ec28-33fd-440b-9064-dd5c40cf4b61\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-nmrg4" Jan 22 09:15:16 crc kubenswrapper[4811]: I0122 09:15:16.970205 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d63fcc2e-ef3c-4a10-9444-43070aa0dc77-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-22qhf\" (UID: 
\"d63fcc2e-ef3c-4a10-9444-43070aa0dc77\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-22qhf" Jan 22 09:15:16 crc kubenswrapper[4811]: I0122 09:15:16.970220 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/74fc22de-195f-452c-b18c-f12c53f2465f-ovs-socket\") pod \"nmstate-handler-tvjnz\" (UID: \"74fc22de-195f-452c-b18c-f12c53f2465f\") " pod="openshift-nmstate/nmstate-handler-tvjnz" Jan 22 09:15:16 crc kubenswrapper[4811]: I0122 09:15:16.970225 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/74fc22de-195f-452c-b18c-f12c53f2465f-nmstate-lock\") pod \"nmstate-handler-tvjnz\" (UID: \"74fc22de-195f-452c-b18c-f12c53f2465f\") " pod="openshift-nmstate/nmstate-handler-tvjnz" Jan 22 09:15:16 crc kubenswrapper[4811]: I0122 09:15:16.970246 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9e9c9633-f916-440c-b02c-5bb58eb51e76-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-tnk97\" (UID: \"9e9c9633-f916-440c-b02c-5bb58eb51e76\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tnk97" Jan 22 09:15:16 crc kubenswrapper[4811]: I0122 09:15:16.970300 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlx4d\" (UniqueName: \"kubernetes.io/projected/d63fcc2e-ef3c-4a10-9444-43070aa0dc77-kube-api-access-mlx4d\") pod \"nmstate-console-plugin-7754f76f8b-22qhf\" (UID: \"d63fcc2e-ef3c-4a10-9444-43070aa0dc77\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-22qhf" Jan 22 09:15:16 crc kubenswrapper[4811]: I0122 09:15:16.970317 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/74fc22de-195f-452c-b18c-f12c53f2465f-ovs-socket\") pod \"nmstate-handler-tvjnz\" (UID: \"74fc22de-195f-452c-b18c-f12c53f2465f\") " pod="openshift-nmstate/nmstate-handler-tvjnz" Jan 22 09:15:16 crc kubenswrapper[4811]: I0122 09:15:16.970365 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/74fc22de-195f-452c-b18c-f12c53f2465f-dbus-socket\") pod \"nmstate-handler-tvjnz\" (UID: \"74fc22de-195f-452c-b18c-f12c53f2465f\") " pod="openshift-nmstate/nmstate-handler-tvjnz" Jan 22 09:15:16 crc kubenswrapper[4811]: I0122 09:15:16.974989 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9e9c9633-f916-440c-b02c-5bb58eb51e76-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-tnk97\" (UID: \"9e9c9633-f916-440c-b02c-5bb58eb51e76\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tnk97" Jan 22 09:15:16 crc kubenswrapper[4811]: I0122 09:15:16.983934 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6944\" (UniqueName: \"kubernetes.io/projected/9e9c9633-f916-440c-b02c-5bb58eb51e76-kube-api-access-b6944\") pod \"nmstate-webhook-8474b5b9d8-tnk97\" (UID: \"9e9c9633-f916-440c-b02c-5bb58eb51e76\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tnk97" Jan 22 09:15:16 crc kubenswrapper[4811]: I0122 09:15:16.986486 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nwpb\" (UniqueName: \"kubernetes.io/projected/66e8ec28-33fd-440b-9064-dd5c40cf4b61-kube-api-access-7nwpb\") pod \"nmstate-metrics-54757c584b-nmrg4\" (UID: 
\"66e8ec28-33fd-440b-9064-dd5c40cf4b61\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-nmrg4" Jan 22 09:15:16 crc kubenswrapper[4811]: I0122 09:15:16.989804 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kp5p8\" (UniqueName: \"kubernetes.io/projected/74fc22de-195f-452c-b18c-f12c53f2465f-kube-api-access-kp5p8\") pod \"nmstate-handler-tvjnz\" (UID: \"74fc22de-195f-452c-b18c-f12c53f2465f\") " pod="openshift-nmstate/nmstate-handler-tvjnz" Jan 22 09:15:17 crc kubenswrapper[4811]: I0122 09:15:17.071217 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d63fcc2e-ef3c-4a10-9444-43070aa0dc77-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-22qhf\" (UID: \"d63fcc2e-ef3c-4a10-9444-43070aa0dc77\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-22qhf" Jan 22 09:15:17 crc kubenswrapper[4811]: I0122 09:15:17.071268 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlx4d\" (UniqueName: \"kubernetes.io/projected/d63fcc2e-ef3c-4a10-9444-43070aa0dc77-kube-api-access-mlx4d\") pod \"nmstate-console-plugin-7754f76f8b-22qhf\" (UID: \"d63fcc2e-ef3c-4a10-9444-43070aa0dc77\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-22qhf" Jan 22 09:15:17 crc kubenswrapper[4811]: I0122 09:15:17.071289 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d63fcc2e-ef3c-4a10-9444-43070aa0dc77-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-22qhf\" (UID: \"d63fcc2e-ef3c-4a10-9444-43070aa0dc77\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-22qhf" Jan 22 09:15:17 crc kubenswrapper[4811]: E0122 09:15:17.071366 4811 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 22 09:15:17 crc kubenswrapper[4811]: E0122 09:15:17.071427 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d63fcc2e-ef3c-4a10-9444-43070aa0dc77-plugin-serving-cert podName:d63fcc2e-ef3c-4a10-9444-43070aa0dc77 nodeName:}" failed. No retries permitted until 2026-01-22 09:15:17.571406658 +0000 UTC m=+561.893593781 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/d63fcc2e-ef3c-4a10-9444-43070aa0dc77-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-22qhf" (UID: "d63fcc2e-ef3c-4a10-9444-43070aa0dc77") : secret "plugin-serving-cert" not found Jan 22 09:15:17 crc kubenswrapper[4811]: I0122 09:15:17.072104 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d63fcc2e-ef3c-4a10-9444-43070aa0dc77-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-22qhf\" (UID: \"d63fcc2e-ef3c-4a10-9444-43070aa0dc77\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-22qhf" Jan 22 09:15:17 crc kubenswrapper[4811]: I0122 09:15:17.087200 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlx4d\" (UniqueName: \"kubernetes.io/projected/d63fcc2e-ef3c-4a10-9444-43070aa0dc77-kube-api-access-mlx4d\") pod \"nmstate-console-plugin-7754f76f8b-22qhf\" (UID: \"d63fcc2e-ef3c-4a10-9444-43070aa0dc77\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-22qhf" Jan 22 09:15:17 crc kubenswrapper[4811]: I0122 09:15:17.092802 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5c66854fc4-rr47k"] Jan 22 09:15:17 crc kubenswrapper[4811]: I0122 09:15:17.093476 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5c66854fc4-rr47k" Jan 22 09:15:17 crc kubenswrapper[4811]: I0122 09:15:17.094196 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-nmrg4" Jan 22 09:15:17 crc kubenswrapper[4811]: I0122 09:15:17.111799 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5c66854fc4-rr47k"] Jan 22 09:15:17 crc kubenswrapper[4811]: I0122 09:15:17.113842 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tnk97" Jan 22 09:15:17 crc kubenswrapper[4811]: I0122 09:15:17.156482 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-tvjnz" Jan 22 09:15:17 crc kubenswrapper[4811]: I0122 09:15:17.174577 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/71cd0914-5faa-4fb6-93de-429bb3e44380-console-serving-cert\") pod \"console-5c66854fc4-rr47k\" (UID: \"71cd0914-5faa-4fb6-93de-429bb3e44380\") " pod="openshift-console/console-5c66854fc4-rr47k" Jan 22 09:15:17 crc kubenswrapper[4811]: I0122 09:15:17.174837 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71cd0914-5faa-4fb6-93de-429bb3e44380-trusted-ca-bundle\") pod \"console-5c66854fc4-rr47k\" (UID: \"71cd0914-5faa-4fb6-93de-429bb3e44380\") " pod="openshift-console/console-5c66854fc4-rr47k" Jan 22 09:15:17 crc kubenswrapper[4811]: I0122 09:15:17.174865 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/71cd0914-5faa-4fb6-93de-429bb3e44380-oauth-serving-cert\") pod \"console-5c66854fc4-rr47k\" (UID: \"71cd0914-5faa-4fb6-93de-429bb3e44380\") " pod="openshift-console/console-5c66854fc4-rr47k" Jan 22 09:15:17 crc kubenswrapper[4811]: I0122 09:15:17.174928 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/71cd0914-5faa-4fb6-93de-429bb3e44380-console-oauth-config\") pod \"console-5c66854fc4-rr47k\" (UID: \"71cd0914-5faa-4fb6-93de-429bb3e44380\") " pod="openshift-console/console-5c66854fc4-rr47k" Jan 22 09:15:17 crc kubenswrapper[4811]: I0122 09:15:17.174982 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/71cd0914-5faa-4fb6-93de-429bb3e44380-console-config\") pod \"console-5c66854fc4-rr47k\" (UID: \"71cd0914-5faa-4fb6-93de-429bb3e44380\") " pod="openshift-console/console-5c66854fc4-rr47k" Jan 22 09:15:17 crc kubenswrapper[4811]: I0122 09:15:17.175035 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rmz7\" (UniqueName: \"kubernetes.io/projected/71cd0914-5faa-4fb6-93de-429bb3e44380-kube-api-access-6rmz7\") pod \"console-5c66854fc4-rr47k\" (UID: \"71cd0914-5faa-4fb6-93de-429bb3e44380\") " pod="openshift-console/console-5c66854fc4-rr47k" Jan 22 09:15:17 crc kubenswrapper[4811]: I0122 09:15:17.175072 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/71cd0914-5faa-4fb6-93de-429bb3e44380-service-ca\") pod \"console-5c66854fc4-rr47k\" (UID: \"71cd0914-5faa-4fb6-93de-429bb3e44380\") " pod="openshift-console/console-5c66854fc4-rr47k" Jan 22 09:15:17 crc kubenswrapper[4811]: W0122 09:15:17.176499 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74fc22de_195f_452c_b18c_f12c53f2465f.slice/crio-893293b7240ffb6feae3559708873b6602d6fe3b8b6a077c96fa5f27e7a15411 WatchSource:0}: Error finding container 893293b7240ffb6feae3559708873b6602d6fe3b8b6a077c96fa5f27e7a15411: Status 404 returned error can't find the container with id 893293b7240ffb6feae3559708873b6602d6fe3b8b6a077c96fa5f27e7a15411 Jan 22 09:15:17 crc kubenswrapper[4811]: I0122 
09:15:17.277471 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/71cd0914-5faa-4fb6-93de-429bb3e44380-console-serving-cert\") pod \"console-5c66854fc4-rr47k\" (UID: \"71cd0914-5faa-4fb6-93de-429bb3e44380\") " pod="openshift-console/console-5c66854fc4-rr47k" Jan 22 09:15:17 crc kubenswrapper[4811]: I0122 09:15:17.277550 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71cd0914-5faa-4fb6-93de-429bb3e44380-trusted-ca-bundle\") pod \"console-5c66854fc4-rr47k\" (UID: \"71cd0914-5faa-4fb6-93de-429bb3e44380\") " pod="openshift-console/console-5c66854fc4-rr47k" Jan 22 09:15:17 crc kubenswrapper[4811]: I0122 09:15:17.277585 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/71cd0914-5faa-4fb6-93de-429bb3e44380-oauth-serving-cert\") pod \"console-5c66854fc4-rr47k\" (UID: \"71cd0914-5faa-4fb6-93de-429bb3e44380\") " pod="openshift-console/console-5c66854fc4-rr47k" Jan 22 09:15:17 crc kubenswrapper[4811]: I0122 09:15:17.277712 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/71cd0914-5faa-4fb6-93de-429bb3e44380-console-oauth-config\") pod \"console-5c66854fc4-rr47k\" (UID: \"71cd0914-5faa-4fb6-93de-429bb3e44380\") " pod="openshift-console/console-5c66854fc4-rr47k" Jan 22 09:15:17 crc kubenswrapper[4811]: I0122 09:15:17.277776 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/71cd0914-5faa-4fb6-93de-429bb3e44380-console-config\") pod \"console-5c66854fc4-rr47k\" (UID: \"71cd0914-5faa-4fb6-93de-429bb3e44380\") " pod="openshift-console/console-5c66854fc4-rr47k" Jan 22 09:15:17 crc kubenswrapper[4811]: I0122 09:15:17.277802 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rmz7\" (UniqueName: \"kubernetes.io/projected/71cd0914-5faa-4fb6-93de-429bb3e44380-kube-api-access-6rmz7\") pod \"console-5c66854fc4-rr47k\" (UID: \"71cd0914-5faa-4fb6-93de-429bb3e44380\") " pod="openshift-console/console-5c66854fc4-rr47k" Jan 22 09:15:17 crc kubenswrapper[4811]: I0122 09:15:17.277839 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/71cd0914-5faa-4fb6-93de-429bb3e44380-service-ca\") pod \"console-5c66854fc4-rr47k\" (UID: \"71cd0914-5faa-4fb6-93de-429bb3e44380\") " pod="openshift-console/console-5c66854fc4-rr47k" Jan 22 09:15:17 crc kubenswrapper[4811]: I0122 09:15:17.278617 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/71cd0914-5faa-4fb6-93de-429bb3e44380-service-ca\") pod \"console-5c66854fc4-rr47k\" (UID: \"71cd0914-5faa-4fb6-93de-429bb3e44380\") " pod="openshift-console/console-5c66854fc4-rr47k" Jan 22 09:15:17 crc kubenswrapper[4811]: I0122 09:15:17.279119 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/71cd0914-5faa-4fb6-93de-429bb3e44380-console-config\") pod \"console-5c66854fc4-rr47k\" (UID: \"71cd0914-5faa-4fb6-93de-429bb3e44380\") " pod="openshift-console/console-5c66854fc4-rr47k" Jan 22 09:15:17 crc kubenswrapper[4811]: I0122 09:15:17.279722 4811 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/71cd0914-5faa-4fb6-93de-429bb3e44380-oauth-serving-cert\") pod \"console-5c66854fc4-rr47k\" (UID: \"71cd0914-5faa-4fb6-93de-429bb3e44380\") " pod="openshift-console/console-5c66854fc4-rr47k" Jan 22 09:15:17 crc kubenswrapper[4811]: I0122 09:15:17.283193 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71cd0914-5faa-4fb6-93de-429bb3e44380-trusted-ca-bundle\") pod \"console-5c66854fc4-rr47k\" (UID: \"71cd0914-5faa-4fb6-93de-429bb3e44380\") " pod="openshift-console/console-5c66854fc4-rr47k" Jan 22 09:15:17 crc kubenswrapper[4811]: I0122 09:15:17.285242 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-nmrg4"] Jan 22 09:15:17 crc kubenswrapper[4811]: I0122 09:15:17.288058 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/71cd0914-5faa-4fb6-93de-429bb3e44380-console-serving-cert\") pod \"console-5c66854fc4-rr47k\" (UID: \"71cd0914-5faa-4fb6-93de-429bb3e44380\") " pod="openshift-console/console-5c66854fc4-rr47k" Jan 22 09:15:17 crc kubenswrapper[4811]: I0122 09:15:17.290312 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/71cd0914-5faa-4fb6-93de-429bb3e44380-console-oauth-config\") pod \"console-5c66854fc4-rr47k\" (UID: \"71cd0914-5faa-4fb6-93de-429bb3e44380\") " pod="openshift-console/console-5c66854fc4-rr47k" Jan 22 09:15:17 crc kubenswrapper[4811]: W0122 09:15:17.291981 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66e8ec28_33fd_440b_9064_dd5c40cf4b61.slice/crio-999ae7d718cb0995fa73facc7f6d338c9b07bfaa14afe9959631a39b2cd51fb8 WatchSource:0}: Error finding container 999ae7d718cb0995fa73facc7f6d338c9b07bfaa14afe9959631a39b2cd51fb8: Status 404 returned error can't find the container with id 999ae7d718cb0995fa73facc7f6d338c9b07bfaa14afe9959631a39b2cd51fb8 Jan 22 09:15:17 crc kubenswrapper[4811]: I0122 09:15:17.293188 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rmz7\" (UniqueName: \"kubernetes.io/projected/71cd0914-5faa-4fb6-93de-429bb3e44380-kube-api-access-6rmz7\") pod \"console-5c66854fc4-rr47k\" (UID: \"71cd0914-5faa-4fb6-93de-429bb3e44380\") " pod="openshift-console/console-5c66854fc4-rr47k" Jan 22 09:15:17 crc kubenswrapper[4811]: I0122 09:15:17.341399 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-tnk97"] Jan 22 09:15:17 crc kubenswrapper[4811]: W0122 09:15:17.342889 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e9c9633_f916_440c_b02c_5bb58eb51e76.slice/crio-678638b26a541e1878369aa6ed61cb0d26cefacdffc468475b084913aadf2a7a WatchSource:0}: Error finding container 678638b26a541e1878369aa6ed61cb0d26cefacdffc468475b084913aadf2a7a: Status 404 returned error can't find the container with id 678638b26a541e1878369aa6ed61cb0d26cefacdffc468475b084913aadf2a7a Jan 22 09:15:17 crc kubenswrapper[4811]: I0122 09:15:17.430517 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5c66854fc4-rr47k" Jan 22 09:15:17 crc kubenswrapper[4811]: I0122 09:15:17.580552 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d63fcc2e-ef3c-4a10-9444-43070aa0dc77-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-22qhf\" (UID: \"d63fcc2e-ef3c-4a10-9444-43070aa0dc77\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-22qhf" Jan 22 09:15:17 crc kubenswrapper[4811]: I0122 09:15:17.583880 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d63fcc2e-ef3c-4a10-9444-43070aa0dc77-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-22qhf\" (UID: \"d63fcc2e-ef3c-4a10-9444-43070aa0dc77\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-22qhf" Jan 22 09:15:17 crc kubenswrapper[4811]: I0122 09:15:17.773082 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5c66854fc4-rr47k"] Jan 22 09:15:17 crc kubenswrapper[4811]: W0122 09:15:17.776585 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71cd0914_5faa_4fb6_93de_429bb3e44380.slice/crio-748153c3ba23abc931a011238e004c31ecc0640de5c90802801ddc54c3ae70c7 WatchSource:0}: Error finding container 748153c3ba23abc931a011238e004c31ecc0640de5c90802801ddc54c3ae70c7: Status 404 returned error can't find the container with id 748153c3ba23abc931a011238e004c31ecc0640de5c90802801ddc54c3ae70c7 Jan 22 09:15:17 crc kubenswrapper[4811]: I0122 09:15:17.834290 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-22qhf" Jan 22 09:15:17 crc kubenswrapper[4811]: I0122 09:15:17.988804 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tnk97" event={"ID":"9e9c9633-f916-440c-b02c-5bb58eb51e76","Type":"ContainerStarted","Data":"678638b26a541e1878369aa6ed61cb0d26cefacdffc468475b084913aadf2a7a"} Jan 22 09:15:18 crc kubenswrapper[4811]: I0122 09:15:18.025317 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5c66854fc4-rr47k" event={"ID":"71cd0914-5faa-4fb6-93de-429bb3e44380","Type":"ContainerStarted","Data":"b2cc0d468608c1cc5fe3c2cba945e031e612b9e4eefdfeff13f201d1c4427c87"} Jan 22 09:15:18 crc kubenswrapper[4811]: I0122 09:15:18.025366 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5c66854fc4-rr47k" event={"ID":"71cd0914-5faa-4fb6-93de-429bb3e44380","Type":"ContainerStarted","Data":"748153c3ba23abc931a011238e004c31ecc0640de5c90802801ddc54c3ae70c7"} Jan 22 09:15:18 crc kubenswrapper[4811]: I0122 09:15:18.025380 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-nmrg4" event={"ID":"66e8ec28-33fd-440b-9064-dd5c40cf4b61","Type":"ContainerStarted","Data":"999ae7d718cb0995fa73facc7f6d338c9b07bfaa14afe9959631a39b2cd51fb8"} Jan 22 09:15:18 crc kubenswrapper[4811]: I0122 09:15:18.031179 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-tvjnz" event={"ID":"74fc22de-195f-452c-b18c-f12c53f2465f","Type":"ContainerStarted","Data":"893293b7240ffb6feae3559708873b6602d6fe3b8b6a077c96fa5f27e7a15411"} Jan 22 09:15:18 crc kubenswrapper[4811]: I0122 09:15:18.074905 4811 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-console/console-5c66854fc4-rr47k" podStartSLOduration=1.074888139 podStartE2EDuration="1.074888139s" podCreationTimestamp="2026-01-22 09:15:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:15:18.071517926 +0000 UTC m=+562.393705049" watchObservedRunningTime="2026-01-22 09:15:18.074888139 +0000 UTC m=+562.397075263" Jan 22 09:15:18 crc kubenswrapper[4811]: I0122 09:15:18.123810 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-22qhf"] Jan 22 09:15:19 crc kubenswrapper[4811]: I0122 09:15:19.043768 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-22qhf" event={"ID":"d63fcc2e-ef3c-4a10-9444-43070aa0dc77","Type":"ContainerStarted","Data":"7042b37e876641989f8123148607cd8f326508128099a2acfa2372d6017f8f1f"} Jan 22 09:15:20 crc kubenswrapper[4811]: I0122 09:15:20.049928 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tnk97" event={"ID":"9e9c9633-f916-440c-b02c-5bb58eb51e76","Type":"ContainerStarted","Data":"99cbd0177ea990e4133d9ad71deb79a445e7b354b70ecee0fb87735643419e6d"} Jan 22 09:15:20 crc kubenswrapper[4811]: I0122 09:15:20.050266 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tnk97" Jan 22 09:15:20 crc kubenswrapper[4811]: I0122 09:15:20.052338 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-nmrg4" event={"ID":"66e8ec28-33fd-440b-9064-dd5c40cf4b61","Type":"ContainerStarted","Data":"f30599eae205f1983d883964f694b431d1a3c4220b61ffd1f4e91cf6789a5990"} Jan 22 09:15:20 crc kubenswrapper[4811]: I0122 09:15:20.054727 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-tvjnz" event={"ID":"74fc22de-195f-452c-b18c-f12c53f2465f","Type":"ContainerStarted","Data":"696e499f7abfb70fb337be67522341085f1c734035e3834484978ecdab801d15"} Jan 22 09:15:20 crc kubenswrapper[4811]: I0122 09:15:20.055199 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-tvjnz" Jan 22 09:15:20 crc kubenswrapper[4811]: I0122 09:15:20.065899 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tnk97" podStartSLOduration=1.842513801 podStartE2EDuration="4.06588401s" podCreationTimestamp="2026-01-22 09:15:16 +0000 UTC" firstStartedPulling="2026-01-22 09:15:17.344691021 +0000 UTC m=+561.666878143" lastFinishedPulling="2026-01-22 09:15:19.568061229 +0000 UTC m=+563.890248352" observedRunningTime="2026-01-22 09:15:20.064763622 +0000 UTC m=+564.386950745" watchObservedRunningTime="2026-01-22 09:15:20.06588401 +0000 UTC m=+564.388071123" Jan 22 09:15:20 crc kubenswrapper[4811]: I0122 09:15:20.085112 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-tvjnz" podStartSLOduration=1.682430322 podStartE2EDuration="4.085094545s" podCreationTimestamp="2026-01-22 09:15:16 +0000 UTC" firstStartedPulling="2026-01-22 09:15:17.179744799 +0000 UTC m=+561.501931922" lastFinishedPulling="2026-01-22 09:15:19.582409022 +0000 UTC m=+563.904596145" observedRunningTime="2026-01-22 09:15:20.082827649 +0000 UTC m=+564.405014772" watchObservedRunningTime="2026-01-22 09:15:20.085094545 +0000 UTC 
m=+564.407281668" Jan 22 09:15:21 crc kubenswrapper[4811]: I0122 09:15:21.061023 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-22qhf" event={"ID":"d63fcc2e-ef3c-4a10-9444-43070aa0dc77","Type":"ContainerStarted","Data":"99566cf9d78531ceb707d031b9003284f82662d79e68d57457b1db39d377c1ae"} Jan 22 09:15:21 crc kubenswrapper[4811]: I0122 09:15:21.087894 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-22qhf" podStartSLOduration=2.771669015 podStartE2EDuration="5.08787662s" podCreationTimestamp="2026-01-22 09:15:16 +0000 UTC" firstStartedPulling="2026-01-22 09:15:18.12976908 +0000 UTC m=+562.451956202" lastFinishedPulling="2026-01-22 09:15:20.445976684 +0000 UTC m=+564.768163807" observedRunningTime="2026-01-22 09:15:21.076226494 +0000 UTC m=+565.398413617" watchObservedRunningTime="2026-01-22 09:15:21.08787662 +0000 UTC m=+565.410063743" Jan 22 09:15:22 crc kubenswrapper[4811]: I0122 09:15:22.067986 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-nmrg4" event={"ID":"66e8ec28-33fd-440b-9064-dd5c40cf4b61","Type":"ContainerStarted","Data":"f5d1449afc7f65054b1946a7c11bc59e2497ed46408b8cf855a7f01e365ad739"} Jan 22 09:15:22 crc kubenswrapper[4811]: I0122 09:15:22.085556 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-nmrg4" podStartSLOduration=1.586363398 podStartE2EDuration="6.085542446s" podCreationTimestamp="2026-01-22 09:15:16 +0000 UTC" firstStartedPulling="2026-01-22 09:15:17.294831231 +0000 UTC m=+561.617018354" lastFinishedPulling="2026-01-22 09:15:21.794010279 +0000 UTC m=+566.116197402" observedRunningTime="2026-01-22 09:15:22.081164406 +0000 UTC m=+566.403351529" watchObservedRunningTime="2026-01-22 09:15:22.085542446 +0000 UTC m=+566.407729559" Jan 22 09:15:27 crc kubenswrapper[4811]: I0122 09:15:27.176101 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-tvjnz" Jan 22 09:15:27 crc kubenswrapper[4811]: I0122 09:15:27.430758 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5c66854fc4-rr47k" Jan 22 09:15:27 crc kubenswrapper[4811]: I0122 09:15:27.430813 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5c66854fc4-rr47k" Jan 22 09:15:27 crc kubenswrapper[4811]: I0122 09:15:27.435383 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5c66854fc4-rr47k" Jan 22 09:15:28 crc kubenswrapper[4811]: I0122 09:15:28.110086 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5c66854fc4-rr47k" Jan 22 09:15:28 crc kubenswrapper[4811]: I0122 09:15:28.153979 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-jhptg"] Jan 22 09:15:35 crc kubenswrapper[4811]: I0122 09:15:35.501969 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:15:35 crc kubenswrapper[4811]: I0122 09:15:35.502335 4811 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:15:35 crc kubenswrapper[4811]: I0122 09:15:35.502375 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" Jan 22 09:15:35 crc kubenswrapper[4811]: I0122 09:15:35.502856 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7a6f566969cf05ffa4068902405d136144c4f92f8d1b0d3256e6fd01cf51ac2e"} pod="openshift-machine-config-operator/machine-config-daemon-txvcq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 09:15:35 crc kubenswrapper[4811]: I0122 09:15:35.502909 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" containerID="cri-o://7a6f566969cf05ffa4068902405d136144c4f92f8d1b0d3256e6fd01cf51ac2e" gracePeriod=600 Jan 22 09:15:36 crc kubenswrapper[4811]: I0122 09:15:36.140544 4811 generic.go:334] "Generic (PLEG): container finished" podID="84068a6b-e189-419b-87f5-f31428f6eafe" containerID="7a6f566969cf05ffa4068902405d136144c4f92f8d1b0d3256e6fd01cf51ac2e" exitCode=0 Jan 22 09:15:36 crc kubenswrapper[4811]: I0122 09:15:36.140642 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" event={"ID":"84068a6b-e189-419b-87f5-f31428f6eafe","Type":"ContainerDied","Data":"7a6f566969cf05ffa4068902405d136144c4f92f8d1b0d3256e6fd01cf51ac2e"} Jan 22 09:15:36 crc kubenswrapper[4811]: I0122 09:15:36.140804 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" event={"ID":"84068a6b-e189-419b-87f5-f31428f6eafe","Type":"ContainerStarted","Data":"73255475db73f5da91cb9bd8424c8edb822b1f0ea5ba4103c87f2ef8a2771756"} Jan 22 09:15:36 crc kubenswrapper[4811]: I0122 09:15:36.140822 4811 scope.go:117] "RemoveContainer" containerID="1b2f0e7c21faa08c5ffc1625c27cd1cb01040f89d6aab01c53b541a45ff7e759" Jan 22 09:15:37 crc kubenswrapper[4811]: I0122 09:15:37.118845 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tnk97" Jan 22 09:15:47 crc kubenswrapper[4811]: I0122 09:15:47.193327 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfbp48"] Jan 22 09:15:47 crc kubenswrapper[4811]: I0122 09:15:47.194556 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfbp48" Jan 22 09:15:47 crc kubenswrapper[4811]: I0122 09:15:47.196948 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 22 09:15:47 crc kubenswrapper[4811]: I0122 09:15:47.202293 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfbp48"] Jan 22 09:15:47 crc kubenswrapper[4811]: I0122 09:15:47.217188 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4069e1a9-a40a-4b76-bee8-4b35c06e818e-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfbp48\" (UID: \"4069e1a9-a40a-4b76-bee8-4b35c06e818e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfbp48" Jan 22 09:15:47 crc kubenswrapper[4811]: I0122 09:15:47.217278 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzhp2\" (UniqueName: \"kubernetes.io/projected/4069e1a9-a40a-4b76-bee8-4b35c06e818e-kube-api-access-bzhp2\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfbp48\" (UID: \"4069e1a9-a40a-4b76-bee8-4b35c06e818e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfbp48" Jan 22 09:15:47 crc kubenswrapper[4811]: I0122 09:15:47.217302 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4069e1a9-a40a-4b76-bee8-4b35c06e818e-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfbp48\" (UID: \"4069e1a9-a40a-4b76-bee8-4b35c06e818e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfbp48" Jan 22 09:15:47 crc kubenswrapper[4811]: I0122 09:15:47.318035 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4069e1a9-a40a-4b76-bee8-4b35c06e818e-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfbp48\" (UID: \"4069e1a9-a40a-4b76-bee8-4b35c06e818e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfbp48" Jan 22 09:15:47 crc kubenswrapper[4811]: I0122 09:15:47.318387 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4069e1a9-a40a-4b76-bee8-4b35c06e818e-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfbp48\" (UID: \"4069e1a9-a40a-4b76-bee8-4b35c06e818e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfbp48" Jan 22 09:15:47 crc kubenswrapper[4811]: I0122 09:15:47.318459 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzhp2\" (UniqueName: \"kubernetes.io/projected/4069e1a9-a40a-4b76-bee8-4b35c06e818e-kube-api-access-bzhp2\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfbp48\" (UID: \"4069e1a9-a40a-4b76-bee8-4b35c06e818e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfbp48" Jan 22 09:15:47 crc kubenswrapper[4811]: I0122 09:15:47.318747 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/4069e1a9-a40a-4b76-bee8-4b35c06e818e-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfbp48\" (UID: \"4069e1a9-a40a-4b76-bee8-4b35c06e818e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfbp48" Jan 22 09:15:47 crc kubenswrapper[4811]: I0122 09:15:47.318993 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4069e1a9-a40a-4b76-bee8-4b35c06e818e-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfbp48\" (UID: \"4069e1a9-a40a-4b76-bee8-4b35c06e818e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfbp48" Jan 22 09:15:47 crc kubenswrapper[4811]: I0122 09:15:47.332701 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzhp2\" (UniqueName: \"kubernetes.io/projected/4069e1a9-a40a-4b76-bee8-4b35c06e818e-kube-api-access-bzhp2\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfbp48\" (UID: \"4069e1a9-a40a-4b76-bee8-4b35c06e818e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfbp48" Jan 22 09:15:47 crc kubenswrapper[4811]: I0122 09:15:47.510105 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfbp48" Jan 22 09:15:47 crc kubenswrapper[4811]: I0122 09:15:47.848385 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfbp48"] Jan 22 09:15:48 crc kubenswrapper[4811]: I0122 09:15:48.201671 4811 generic.go:334] "Generic (PLEG): container finished" podID="4069e1a9-a40a-4b76-bee8-4b35c06e818e" containerID="96a7cccaa924bd92f7a194f4b8a6b5a8f11c53608885702b5491af10482b8bf3" exitCode=0 Jan 22 09:15:48 crc kubenswrapper[4811]: I0122 09:15:48.201770 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfbp48" event={"ID":"4069e1a9-a40a-4b76-bee8-4b35c06e818e","Type":"ContainerDied","Data":"96a7cccaa924bd92f7a194f4b8a6b5a8f11c53608885702b5491af10482b8bf3"} Jan 22 09:15:48 crc kubenswrapper[4811]: I0122 09:15:48.201876 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfbp48" event={"ID":"4069e1a9-a40a-4b76-bee8-4b35c06e818e","Type":"ContainerStarted","Data":"600d9a40725014e6b3072175e643221af0c7147b7fde85a6be0c36837b3055a1"} Jan 22 09:15:51 crc kubenswrapper[4811]: I0122 09:15:51.215340 4811 generic.go:334] "Generic (PLEG): container finished" podID="4069e1a9-a40a-4b76-bee8-4b35c06e818e" containerID="b740ad05b70e69b0d5ff6a322127c5dc835401a1d653322f03302dcdca6c166e" exitCode=0 Jan 22 09:15:51 crc kubenswrapper[4811]: I0122 09:15:51.215554 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfbp48" event={"ID":"4069e1a9-a40a-4b76-bee8-4b35c06e818e","Type":"ContainerDied","Data":"b740ad05b70e69b0d5ff6a322127c5dc835401a1d653322f03302dcdca6c166e"} Jan 22 09:15:52 crc kubenswrapper[4811]: I0122 09:15:52.222612 4811 generic.go:334] "Generic (PLEG): container finished" podID="4069e1a9-a40a-4b76-bee8-4b35c06e818e" containerID="e188f0b5d359e960ff04c3d830debcbffa7346fcf309e6ced3473a94d2459eba" exitCode=0 Jan 22 09:15:52 crc kubenswrapper[4811]: I0122 
09:15:52.222731 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfbp48" event={"ID":"4069e1a9-a40a-4b76-bee8-4b35c06e818e","Type":"ContainerDied","Data":"e188f0b5d359e960ff04c3d830debcbffa7346fcf309e6ced3473a94d2459eba"} Jan 22 09:15:53 crc kubenswrapper[4811]: I0122 09:15:53.185972 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-jhptg" podUID="f58eb1d8-bb02-4af7-857c-138518c5bbf2" containerName="console" containerID="cri-o://5da706283e4a03d22b1e471a06cecfcd8037ec41b3baba8bfd796cc892c8fbdf" gracePeriod=15 Jan 22 09:15:53 crc kubenswrapper[4811]: I0122 09:15:53.384409 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfbp48" Jan 22 09:15:53 crc kubenswrapper[4811]: I0122 09:15:53.481012 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-jhptg_f58eb1d8-bb02-4af7-857c-138518c5bbf2/console/0.log" Jan 22 09:15:53 crc kubenswrapper[4811]: I0122 09:15:53.481066 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-jhptg" Jan 22 09:15:53 crc kubenswrapper[4811]: I0122 09:15:53.481562 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4069e1a9-a40a-4b76-bee8-4b35c06e818e-util\") pod \"4069e1a9-a40a-4b76-bee8-4b35c06e818e\" (UID: \"4069e1a9-a40a-4b76-bee8-4b35c06e818e\") " Jan 22 09:15:53 crc kubenswrapper[4811]: I0122 09:15:53.481608 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4069e1a9-a40a-4b76-bee8-4b35c06e818e-bundle\") pod \"4069e1a9-a40a-4b76-bee8-4b35c06e818e\" (UID: \"4069e1a9-a40a-4b76-bee8-4b35c06e818e\") " Jan 22 09:15:53 crc kubenswrapper[4811]: I0122 09:15:53.481662 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzhp2\" (UniqueName: \"kubernetes.io/projected/4069e1a9-a40a-4b76-bee8-4b35c06e818e-kube-api-access-bzhp2\") pod \"4069e1a9-a40a-4b76-bee8-4b35c06e818e\" (UID: \"4069e1a9-a40a-4b76-bee8-4b35c06e818e\") " Jan 22 09:15:53 crc kubenswrapper[4811]: I0122 09:15:53.482318 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4069e1a9-a40a-4b76-bee8-4b35c06e818e-bundle" (OuterVolumeSpecName: "bundle") pod "4069e1a9-a40a-4b76-bee8-4b35c06e818e" (UID: "4069e1a9-a40a-4b76-bee8-4b35c06e818e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:15:53 crc kubenswrapper[4811]: I0122 09:15:53.487702 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4069e1a9-a40a-4b76-bee8-4b35c06e818e-kube-api-access-bzhp2" (OuterVolumeSpecName: "kube-api-access-bzhp2") pod "4069e1a9-a40a-4b76-bee8-4b35c06e818e" (UID: "4069e1a9-a40a-4b76-bee8-4b35c06e818e"). InnerVolumeSpecName "kube-api-access-bzhp2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:15:53 crc kubenswrapper[4811]: I0122 09:15:53.582161 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f58eb1d8-bb02-4af7-857c-138518c5bbf2-service-ca\") pod \"f58eb1d8-bb02-4af7-857c-138518c5bbf2\" (UID: \"f58eb1d8-bb02-4af7-857c-138518c5bbf2\") " Jan 22 09:15:53 crc kubenswrapper[4811]: I0122 09:15:53.582215 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f58eb1d8-bb02-4af7-857c-138518c5bbf2-console-serving-cert\") pod \"f58eb1d8-bb02-4af7-857c-138518c5bbf2\" (UID: \"f58eb1d8-bb02-4af7-857c-138518c5bbf2\") " Jan 22 09:15:53 crc kubenswrapper[4811]: I0122 09:15:53.582254 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f58eb1d8-bb02-4af7-857c-138518c5bbf2-oauth-serving-cert\") pod \"f58eb1d8-bb02-4af7-857c-138518c5bbf2\" (UID: \"f58eb1d8-bb02-4af7-857c-138518c5bbf2\") " Jan 22 09:15:53 crc kubenswrapper[4811]: I0122 09:15:53.582274 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p795n\" (UniqueName: \"kubernetes.io/projected/f58eb1d8-bb02-4af7-857c-138518c5bbf2-kube-api-access-p795n\") pod \"f58eb1d8-bb02-4af7-857c-138518c5bbf2\" (UID: \"f58eb1d8-bb02-4af7-857c-138518c5bbf2\") " Jan 22 09:15:53 crc kubenswrapper[4811]: I0122 09:15:53.582319 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f58eb1d8-bb02-4af7-857c-138518c5bbf2-console-config\") pod \"f58eb1d8-bb02-4af7-857c-138518c5bbf2\" (UID: \"f58eb1d8-bb02-4af7-857c-138518c5bbf2\") " Jan 22 09:15:53 crc kubenswrapper[4811]: I0122 09:15:53.582361 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f58eb1d8-bb02-4af7-857c-138518c5bbf2-console-oauth-config\") pod \"f58eb1d8-bb02-4af7-857c-138518c5bbf2\" (UID: \"f58eb1d8-bb02-4af7-857c-138518c5bbf2\") " Jan 22 09:15:53 crc kubenswrapper[4811]: I0122 09:15:53.582392 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f58eb1d8-bb02-4af7-857c-138518c5bbf2-trusted-ca-bundle\") pod \"f58eb1d8-bb02-4af7-857c-138518c5bbf2\" (UID: \"f58eb1d8-bb02-4af7-857c-138518c5bbf2\") " Jan 22 09:15:53 crc kubenswrapper[4811]: I0122 09:15:53.582698 4811 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4069e1a9-a40a-4b76-bee8-4b35c06e818e-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:15:53 crc kubenswrapper[4811]: I0122 09:15:53.582718 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzhp2\" (UniqueName: \"kubernetes.io/projected/4069e1a9-a40a-4b76-bee8-4b35c06e818e-kube-api-access-bzhp2\") on node \"crc\" DevicePath \"\"" Jan 22 09:15:53 crc kubenswrapper[4811]: I0122 09:15:53.582761 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f58eb1d8-bb02-4af7-857c-138518c5bbf2-service-ca" (OuterVolumeSpecName: "service-ca") pod "f58eb1d8-bb02-4af7-857c-138518c5bbf2" (UID: "f58eb1d8-bb02-4af7-857c-138518c5bbf2"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:15:53 crc kubenswrapper[4811]: I0122 09:15:53.583112 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f58eb1d8-bb02-4af7-857c-138518c5bbf2-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f58eb1d8-bb02-4af7-857c-138518c5bbf2" (UID: "f58eb1d8-bb02-4af7-857c-138518c5bbf2"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:15:53 crc kubenswrapper[4811]: I0122 09:15:53.583145 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f58eb1d8-bb02-4af7-857c-138518c5bbf2-console-config" (OuterVolumeSpecName: "console-config") pod "f58eb1d8-bb02-4af7-857c-138518c5bbf2" (UID: "f58eb1d8-bb02-4af7-857c-138518c5bbf2"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:15:53 crc kubenswrapper[4811]: I0122 09:15:53.583498 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f58eb1d8-bb02-4af7-857c-138518c5bbf2-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "f58eb1d8-bb02-4af7-857c-138518c5bbf2" (UID: "f58eb1d8-bb02-4af7-857c-138518c5bbf2"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:15:53 crc kubenswrapper[4811]: I0122 09:15:53.585086 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f58eb1d8-bb02-4af7-857c-138518c5bbf2-kube-api-access-p795n" (OuterVolumeSpecName: "kube-api-access-p795n") pod "f58eb1d8-bb02-4af7-857c-138518c5bbf2" (UID: "f58eb1d8-bb02-4af7-857c-138518c5bbf2"). InnerVolumeSpecName "kube-api-access-p795n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:15:53 crc kubenswrapper[4811]: I0122 09:15:53.585396 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f58eb1d8-bb02-4af7-857c-138518c5bbf2-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "f58eb1d8-bb02-4af7-857c-138518c5bbf2" (UID: "f58eb1d8-bb02-4af7-857c-138518c5bbf2"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:15:53 crc kubenswrapper[4811]: I0122 09:15:53.585617 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f58eb1d8-bb02-4af7-857c-138518c5bbf2-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "f58eb1d8-bb02-4af7-857c-138518c5bbf2" (UID: "f58eb1d8-bb02-4af7-857c-138518c5bbf2"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:15:53 crc kubenswrapper[4811]: I0122 09:15:53.635044 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4069e1a9-a40a-4b76-bee8-4b35c06e818e-util" (OuterVolumeSpecName: "util") pod "4069e1a9-a40a-4b76-bee8-4b35c06e818e" (UID: "4069e1a9-a40a-4b76-bee8-4b35c06e818e"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:15:53 crc kubenswrapper[4811]: I0122 09:15:53.684122 4811 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f58eb1d8-bb02-4af7-857c-138518c5bbf2-console-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:15:53 crc kubenswrapper[4811]: I0122 09:15:53.684145 4811 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f58eb1d8-bb02-4af7-857c-138518c5bbf2-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:15:53 crc kubenswrapper[4811]: I0122 09:15:53.684157 4811 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f58eb1d8-bb02-4af7-857c-138518c5bbf2-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:15:53 crc kubenswrapper[4811]: I0122 09:15:53.684166 4811 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4069e1a9-a40a-4b76-bee8-4b35c06e818e-util\") on node \"crc\" DevicePath \"\"" Jan 22 09:15:53 crc kubenswrapper[4811]: I0122 09:15:53.684176 4811 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f58eb1d8-bb02-4af7-857c-138518c5bbf2-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 09:15:53 crc kubenswrapper[4811]: I0122 09:15:53.684184 4811 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f58eb1d8-bb02-4af7-857c-138518c5bbf2-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:15:53 crc kubenswrapper[4811]: I0122 09:15:53.684191 4811 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f58eb1d8-bb02-4af7-857c-138518c5bbf2-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 09:15:53 crc kubenswrapper[4811]: I0122 09:15:53.684199 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p795n\" (UniqueName: \"kubernetes.io/projected/f58eb1d8-bb02-4af7-857c-138518c5bbf2-kube-api-access-p795n\") on node \"crc\" DevicePath \"\"" Jan 22 09:15:54 crc kubenswrapper[4811]: I0122 09:15:54.232661 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-jhptg_f58eb1d8-bb02-4af7-857c-138518c5bbf2/console/0.log" Jan 22 09:15:54 crc kubenswrapper[4811]: I0122 09:15:54.232810 4811 generic.go:334] "Generic (PLEG): container finished" podID="f58eb1d8-bb02-4af7-857c-138518c5bbf2" containerID="5da706283e4a03d22b1e471a06cecfcd8037ec41b3baba8bfd796cc892c8fbdf" exitCode=2 Jan 22 09:15:54 crc kubenswrapper[4811]: I0122 09:15:54.232882 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-jhptg" Jan 22 09:15:54 crc kubenswrapper[4811]: I0122 09:15:54.232860 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-jhptg" event={"ID":"f58eb1d8-bb02-4af7-857c-138518c5bbf2","Type":"ContainerDied","Data":"5da706283e4a03d22b1e471a06cecfcd8037ec41b3baba8bfd796cc892c8fbdf"} Jan 22 09:15:54 crc kubenswrapper[4811]: I0122 09:15:54.232993 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-jhptg" event={"ID":"f58eb1d8-bb02-4af7-857c-138518c5bbf2","Type":"ContainerDied","Data":"684026d62f766b4dd501868118b9ea743242efd4c2e4420dfddd6a9fcc3c0959"} Jan 22 09:15:54 crc kubenswrapper[4811]: I0122 09:15:54.233010 4811 scope.go:117] "RemoveContainer" containerID="5da706283e4a03d22b1e471a06cecfcd8037ec41b3baba8bfd796cc892c8fbdf" Jan 22 09:15:54 crc kubenswrapper[4811]: I0122 09:15:54.235485 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfbp48" event={"ID":"4069e1a9-a40a-4b76-bee8-4b35c06e818e","Type":"ContainerDied","Data":"600d9a40725014e6b3072175e643221af0c7147b7fde85a6be0c36837b3055a1"} Jan 22 09:15:54 crc kubenswrapper[4811]: I0122 09:15:54.235507 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="600d9a40725014e6b3072175e643221af0c7147b7fde85a6be0c36837b3055a1" Jan 22 09:15:54 crc kubenswrapper[4811]: I0122 09:15:54.235527 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfbp48" Jan 22 09:15:54 crc kubenswrapper[4811]: I0122 09:15:54.246648 4811 scope.go:117] "RemoveContainer" containerID="5da706283e4a03d22b1e471a06cecfcd8037ec41b3baba8bfd796cc892c8fbdf" Jan 22 09:15:54 crc kubenswrapper[4811]: E0122 09:15:54.246918 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5da706283e4a03d22b1e471a06cecfcd8037ec41b3baba8bfd796cc892c8fbdf\": container with ID starting with 5da706283e4a03d22b1e471a06cecfcd8037ec41b3baba8bfd796cc892c8fbdf not found: ID does not exist" containerID="5da706283e4a03d22b1e471a06cecfcd8037ec41b3baba8bfd796cc892c8fbdf" Jan 22 09:15:54 crc kubenswrapper[4811]: I0122 09:15:54.246941 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5da706283e4a03d22b1e471a06cecfcd8037ec41b3baba8bfd796cc892c8fbdf"} err="failed to get container status \"5da706283e4a03d22b1e471a06cecfcd8037ec41b3baba8bfd796cc892c8fbdf\": rpc error: code = NotFound desc = could not find container \"5da706283e4a03d22b1e471a06cecfcd8037ec41b3baba8bfd796cc892c8fbdf\": container with ID starting with 5da706283e4a03d22b1e471a06cecfcd8037ec41b3baba8bfd796cc892c8fbdf not found: ID does not exist" Jan 22 09:15:54 crc kubenswrapper[4811]: I0122 09:15:54.248916 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-jhptg"] Jan 22 09:15:54 crc kubenswrapper[4811]: I0122 09:15:54.251958 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-jhptg"] Jan 22 09:15:56 crc kubenswrapper[4811]: I0122 09:15:56.009465 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f58eb1d8-bb02-4af7-857c-138518c5bbf2" path="/var/lib/kubelet/pods/f58eb1d8-bb02-4af7-857c-138518c5bbf2/volumes" Jan 22 09:16:04 crc 
kubenswrapper[4811]: I0122 09:16:04.512307 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-64bd67c58d-k58sk"] Jan 22 09:16:04 crc kubenswrapper[4811]: E0122 09:16:04.512897 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4069e1a9-a40a-4b76-bee8-4b35c06e818e" containerName="pull" Jan 22 09:16:04 crc kubenswrapper[4811]: I0122 09:16:04.512908 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="4069e1a9-a40a-4b76-bee8-4b35c06e818e" containerName="pull" Jan 22 09:16:04 crc kubenswrapper[4811]: E0122 09:16:04.512919 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4069e1a9-a40a-4b76-bee8-4b35c06e818e" containerName="extract" Jan 22 09:16:04 crc kubenswrapper[4811]: I0122 09:16:04.512924 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="4069e1a9-a40a-4b76-bee8-4b35c06e818e" containerName="extract" Jan 22 09:16:04 crc kubenswrapper[4811]: E0122 09:16:04.512937 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4069e1a9-a40a-4b76-bee8-4b35c06e818e" containerName="util" Jan 22 09:16:04 crc kubenswrapper[4811]: I0122 09:16:04.512943 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="4069e1a9-a40a-4b76-bee8-4b35c06e818e" containerName="util" Jan 22 09:16:04 crc kubenswrapper[4811]: E0122 09:16:04.512953 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f58eb1d8-bb02-4af7-857c-138518c5bbf2" containerName="console" Jan 22 09:16:04 crc kubenswrapper[4811]: I0122 09:16:04.512957 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f58eb1d8-bb02-4af7-857c-138518c5bbf2" containerName="console" Jan 22 09:16:04 crc kubenswrapper[4811]: I0122 09:16:04.513046 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="4069e1a9-a40a-4b76-bee8-4b35c06e818e" containerName="extract" Jan 22 09:16:04 crc kubenswrapper[4811]: I0122 09:16:04.513054 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="f58eb1d8-bb02-4af7-857c-138518c5bbf2" containerName="console" Jan 22 09:16:04 crc kubenswrapper[4811]: I0122 09:16:04.513358 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-64bd67c58d-k58sk" Jan 22 09:16:04 crc kubenswrapper[4811]: I0122 09:16:04.515455 4811 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 22 09:16:04 crc kubenswrapper[4811]: I0122 09:16:04.515515 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 22 09:16:04 crc kubenswrapper[4811]: I0122 09:16:04.516620 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 22 09:16:04 crc kubenswrapper[4811]: I0122 09:16:04.516967 4811 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-6lks9" Jan 22 09:16:04 crc kubenswrapper[4811]: I0122 09:16:04.517403 4811 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 22 09:16:04 crc kubenswrapper[4811]: I0122 09:16:04.527446 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-64bd67c58d-k58sk"] Jan 22 09:16:04 crc kubenswrapper[4811]: I0122 09:16:04.591226 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/13617657-7245-4223-9b20-03a56378edaf-webhook-cert\") pod \"metallb-operator-controller-manager-64bd67c58d-k58sk\" (UID: \"13617657-7245-4223-9b20-03a56378edaf\") " pod="metallb-system/metallb-operator-controller-manager-64bd67c58d-k58sk" Jan 22 09:16:04 crc kubenswrapper[4811]: I0122 09:16:04.591334 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84nc9\" (UniqueName: \"kubernetes.io/projected/13617657-7245-4223-9b20-03a56378edaf-kube-api-access-84nc9\") pod \"metallb-operator-controller-manager-64bd67c58d-k58sk\" (UID: \"13617657-7245-4223-9b20-03a56378edaf\") " pod="metallb-system/metallb-operator-controller-manager-64bd67c58d-k58sk" Jan 22 09:16:04 crc kubenswrapper[4811]: I0122 09:16:04.591440 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/13617657-7245-4223-9b20-03a56378edaf-apiservice-cert\") pod \"metallb-operator-controller-manager-64bd67c58d-k58sk\" (UID: \"13617657-7245-4223-9b20-03a56378edaf\") " pod="metallb-system/metallb-operator-controller-manager-64bd67c58d-k58sk" Jan 22 09:16:04 crc kubenswrapper[4811]: I0122 09:16:04.644153 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-5bc67d6df-ckhh4"] Jan 22 09:16:04 crc kubenswrapper[4811]: I0122 09:16:04.644931 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5bc67d6df-ckhh4" Jan 22 09:16:04 crc kubenswrapper[4811]: I0122 09:16:04.646471 4811 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 22 09:16:04 crc kubenswrapper[4811]: I0122 09:16:04.647073 4811 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 22 09:16:04 crc kubenswrapper[4811]: I0122 09:16:04.647223 4811 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-b9r6t" Jan 22 09:16:04 crc kubenswrapper[4811]: I0122 09:16:04.657363 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5bc67d6df-ckhh4"] Jan 22 09:16:04 crc kubenswrapper[4811]: I0122 09:16:04.692454 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84nc9\" (UniqueName: \"kubernetes.io/projected/13617657-7245-4223-9b20-03a56378edaf-kube-api-access-84nc9\") pod \"metallb-operator-controller-manager-64bd67c58d-k58sk\" (UID: \"13617657-7245-4223-9b20-03a56378edaf\") " pod="metallb-system/metallb-operator-controller-manager-64bd67c58d-k58sk" Jan 22 09:16:04 crc kubenswrapper[4811]: I0122 09:16:04.692506 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1008e895-ec53-4fdd-9423-bbb4d249a6b9-apiservice-cert\") pod \"metallb-operator-webhook-server-5bc67d6df-ckhh4\" (UID: \"1008e895-ec53-4fdd-9423-bbb4d249a6b9\") " pod="metallb-system/metallb-operator-webhook-server-5bc67d6df-ckhh4" Jan 22 09:16:04 crc kubenswrapper[4811]: I0122 09:16:04.692606 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/13617657-7245-4223-9b20-03a56378edaf-apiservice-cert\") pod \"metallb-operator-controller-manager-64bd67c58d-k58sk\" (UID: \"13617657-7245-4223-9b20-03a56378edaf\") " pod="metallb-system/metallb-operator-controller-manager-64bd67c58d-k58sk" Jan 22 09:16:04 crc kubenswrapper[4811]: I0122 09:16:04.692689 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1008e895-ec53-4fdd-9423-bbb4d249a6b9-webhook-cert\") pod \"metallb-operator-webhook-server-5bc67d6df-ckhh4\" (UID: \"1008e895-ec53-4fdd-9423-bbb4d249a6b9\") " pod="metallb-system/metallb-operator-webhook-server-5bc67d6df-ckhh4" Jan 22 09:16:04 crc kubenswrapper[4811]: I0122 09:16:04.692711 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/13617657-7245-4223-9b20-03a56378edaf-webhook-cert\") pod \"metallb-operator-controller-manager-64bd67c58d-k58sk\" (UID: \"13617657-7245-4223-9b20-03a56378edaf\") " pod="metallb-system/metallb-operator-controller-manager-64bd67c58d-k58sk" Jan 22 09:16:04 crc kubenswrapper[4811]: I0122 09:16:04.692731 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w92qz\" (UniqueName: \"kubernetes.io/projected/1008e895-ec53-4fdd-9423-bbb4d249a6b9-kube-api-access-w92qz\") pod \"metallb-operator-webhook-server-5bc67d6df-ckhh4\" (UID: \"1008e895-ec53-4fdd-9423-bbb4d249a6b9\") " pod="metallb-system/metallb-operator-webhook-server-5bc67d6df-ckhh4" Jan 22 09:16:04 crc kubenswrapper[4811]: 
I0122 09:16:04.703141 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/13617657-7245-4223-9b20-03a56378edaf-apiservice-cert\") pod \"metallb-operator-controller-manager-64bd67c58d-k58sk\" (UID: \"13617657-7245-4223-9b20-03a56378edaf\") " pod="metallb-system/metallb-operator-controller-manager-64bd67c58d-k58sk" Jan 22 09:16:04 crc kubenswrapper[4811]: I0122 09:16:04.709878 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84nc9\" (UniqueName: \"kubernetes.io/projected/13617657-7245-4223-9b20-03a56378edaf-kube-api-access-84nc9\") pod \"metallb-operator-controller-manager-64bd67c58d-k58sk\" (UID: \"13617657-7245-4223-9b20-03a56378edaf\") " pod="metallb-system/metallb-operator-controller-manager-64bd67c58d-k58sk" Jan 22 09:16:04 crc kubenswrapper[4811]: I0122 09:16:04.713302 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/13617657-7245-4223-9b20-03a56378edaf-webhook-cert\") pod \"metallb-operator-controller-manager-64bd67c58d-k58sk\" (UID: \"13617657-7245-4223-9b20-03a56378edaf\") " pod="metallb-system/metallb-operator-controller-manager-64bd67c58d-k58sk" Jan 22 09:16:04 crc kubenswrapper[4811]: I0122 09:16:04.793699 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1008e895-ec53-4fdd-9423-bbb4d249a6b9-webhook-cert\") pod \"metallb-operator-webhook-server-5bc67d6df-ckhh4\" (UID: \"1008e895-ec53-4fdd-9423-bbb4d249a6b9\") " pod="metallb-system/metallb-operator-webhook-server-5bc67d6df-ckhh4" Jan 22 09:16:04 crc kubenswrapper[4811]: I0122 09:16:04.793970 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w92qz\" (UniqueName: \"kubernetes.io/projected/1008e895-ec53-4fdd-9423-bbb4d249a6b9-kube-api-access-w92qz\") pod \"metallb-operator-webhook-server-5bc67d6df-ckhh4\" (UID: \"1008e895-ec53-4fdd-9423-bbb4d249a6b9\") " pod="metallb-system/metallb-operator-webhook-server-5bc67d6df-ckhh4" Jan 22 09:16:04 crc kubenswrapper[4811]: I0122 09:16:04.794110 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1008e895-ec53-4fdd-9423-bbb4d249a6b9-apiservice-cert\") pod \"metallb-operator-webhook-server-5bc67d6df-ckhh4\" (UID: \"1008e895-ec53-4fdd-9423-bbb4d249a6b9\") " pod="metallb-system/metallb-operator-webhook-server-5bc67d6df-ckhh4" Jan 22 09:16:04 crc kubenswrapper[4811]: I0122 09:16:04.799105 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1008e895-ec53-4fdd-9423-bbb4d249a6b9-apiservice-cert\") pod \"metallb-operator-webhook-server-5bc67d6df-ckhh4\" (UID: \"1008e895-ec53-4fdd-9423-bbb4d249a6b9\") " pod="metallb-system/metallb-operator-webhook-server-5bc67d6df-ckhh4" Jan 22 09:16:04 crc kubenswrapper[4811]: I0122 09:16:04.803892 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1008e895-ec53-4fdd-9423-bbb4d249a6b9-webhook-cert\") pod \"metallb-operator-webhook-server-5bc67d6df-ckhh4\" (UID: \"1008e895-ec53-4fdd-9423-bbb4d249a6b9\") " pod="metallb-system/metallb-operator-webhook-server-5bc67d6df-ckhh4" Jan 22 09:16:04 crc kubenswrapper[4811]: I0122 09:16:04.825311 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-64bd67c58d-k58sk" Jan 22 09:16:04 crc kubenswrapper[4811]: I0122 09:16:04.826453 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w92qz\" (UniqueName: \"kubernetes.io/projected/1008e895-ec53-4fdd-9423-bbb4d249a6b9-kube-api-access-w92qz\") pod \"metallb-operator-webhook-server-5bc67d6df-ckhh4\" (UID: \"1008e895-ec53-4fdd-9423-bbb4d249a6b9\") " pod="metallb-system/metallb-operator-webhook-server-5bc67d6df-ckhh4" Jan 22 09:16:04 crc kubenswrapper[4811]: I0122 09:16:04.955825 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5bc67d6df-ckhh4" Jan 22 09:16:05 crc kubenswrapper[4811]: I0122 09:16:05.050829 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-64bd67c58d-k58sk"] Jan 22 09:16:05 crc kubenswrapper[4811]: I0122 09:16:05.284324 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-64bd67c58d-k58sk" event={"ID":"13617657-7245-4223-9b20-03a56378edaf","Type":"ContainerStarted","Data":"b93671de4d8c8393900504c21a3c8345a621c0a820ef8b0120e473422267c871"} Jan 22 09:16:05 crc kubenswrapper[4811]: I0122 09:16:05.364516 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5bc67d6df-ckhh4"] Jan 22 09:16:06 crc kubenswrapper[4811]: I0122 09:16:06.290551 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5bc67d6df-ckhh4" event={"ID":"1008e895-ec53-4fdd-9423-bbb4d249a6b9","Type":"ContainerStarted","Data":"77b8b43b47cbb763a4fab336b18a9c2d5f31c5345de9506885facb54df0b5302"} Jan 22 09:16:08 crc kubenswrapper[4811]: I0122 09:16:08.302536 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-64bd67c58d-k58sk" event={"ID":"13617657-7245-4223-9b20-03a56378edaf","Type":"ContainerStarted","Data":"c3d214569a7da7d48ebbcaf2b1c69ba40cb228ef64f88bfabc3de9458168c48a"} Jan 22 09:16:08 crc kubenswrapper[4811]: I0122 09:16:08.303272 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-64bd67c58d-k58sk" Jan 22 09:16:08 crc kubenswrapper[4811]: I0122 09:16:08.318908 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-64bd67c58d-k58sk" podStartSLOduration=1.395278563 podStartE2EDuration="4.318891358s" podCreationTimestamp="2026-01-22 09:16:04 +0000 UTC" firstStartedPulling="2026-01-22 09:16:05.064800634 +0000 UTC m=+609.386987758" lastFinishedPulling="2026-01-22 09:16:07.98841343 +0000 UTC m=+612.310600553" observedRunningTime="2026-01-22 09:16:08.317839405 +0000 UTC m=+612.640026528" watchObservedRunningTime="2026-01-22 09:16:08.318891358 +0000 UTC m=+612.641078481" Jan 22 09:16:10 crc kubenswrapper[4811]: I0122 09:16:10.316379 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5bc67d6df-ckhh4" event={"ID":"1008e895-ec53-4fdd-9423-bbb4d249a6b9","Type":"ContainerStarted","Data":"f7d51a095a7c7dc8a918bce3d3dcb2dd52be1a932cdb960c033d6d4d6efbcf30"} Jan 22 09:16:10 crc kubenswrapper[4811]: I0122 09:16:10.330695 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-5bc67d6df-ckhh4" 
podStartSLOduration=2.174974338 podStartE2EDuration="6.330679347s" podCreationTimestamp="2026-01-22 09:16:04 +0000 UTC" firstStartedPulling="2026-01-22 09:16:05.411724168 +0000 UTC m=+609.733911291" lastFinishedPulling="2026-01-22 09:16:09.567429177 +0000 UTC m=+613.889616300" observedRunningTime="2026-01-22 09:16:10.327949922 +0000 UTC m=+614.650137044" watchObservedRunningTime="2026-01-22 09:16:10.330679347 +0000 UTC m=+614.652866469" Jan 22 09:16:11 crc kubenswrapper[4811]: I0122 09:16:11.321253 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-5bc67d6df-ckhh4" Jan 22 09:16:24 crc kubenswrapper[4811]: I0122 09:16:24.961244 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-5bc67d6df-ckhh4" Jan 22 09:16:44 crc kubenswrapper[4811]: I0122 09:16:44.828077 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-64bd67c58d-k58sk" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.350930 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-rnvr5"] Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.352700 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-rnvr5" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.354408 4811 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-nls5l" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.354963 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.355806 4811 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.373529 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-zfggh"] Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.374100 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zfggh" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.375496 4811 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.388569 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-zfggh"] Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.473104 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-k88w9"] Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.474005 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-k88w9" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.478526 4811 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-vs782" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.483487 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.483563 4811 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.483694 4811 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.514502 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-rl9k7"] Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.515316 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-rl9k7" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.516673 4811 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.527696 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-rl9k7"] Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.545437 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d2a3b77d-fd7c-421e-aa23-d206419b1c7d-reloader\") pod \"frr-k8s-rnvr5\" (UID: \"d2a3b77d-fd7c-421e-aa23-d206419b1c7d\") " pod="metallb-system/frr-k8s-rnvr5" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.545476 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d2a3b77d-fd7c-421e-aa23-d206419b1c7d-frr-sockets\") pod \"frr-k8s-rnvr5\" (UID: \"d2a3b77d-fd7c-421e-aa23-d206419b1c7d\") " pod="metallb-system/frr-k8s-rnvr5" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.545531 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2f6eae9c-374b-4ac3-b5d7-04267fe9bf73-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-zfggh\" (UID: \"2f6eae9c-374b-4ac3-b5d7-04267fe9bf73\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zfggh" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.545549 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d2a3b77d-fd7c-421e-aa23-d206419b1c7d-frr-startup\") pod \"frr-k8s-rnvr5\" (UID: \"d2a3b77d-fd7c-421e-aa23-d206419b1c7d\") " pod="metallb-system/frr-k8s-rnvr5" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.545568 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v9n9\" (UniqueName: \"kubernetes.io/projected/d2a3b77d-fd7c-421e-aa23-d206419b1c7d-kube-api-access-6v9n9\") pod \"frr-k8s-rnvr5\" (UID: \"d2a3b77d-fd7c-421e-aa23-d206419b1c7d\") " pod="metallb-system/frr-k8s-rnvr5" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.545598 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-8q669\" (UniqueName: \"kubernetes.io/projected/2f6eae9c-374b-4ac3-b5d7-04267fe9bf73-kube-api-access-8q669\") pod \"frr-k8s-webhook-server-7df86c4f6c-zfggh\" (UID: \"2f6eae9c-374b-4ac3-b5d7-04267fe9bf73\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zfggh" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.545650 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d2a3b77d-fd7c-421e-aa23-d206419b1c7d-metrics\") pod \"frr-k8s-rnvr5\" (UID: \"d2a3b77d-fd7c-421e-aa23-d206419b1c7d\") " pod="metallb-system/frr-k8s-rnvr5" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.545685 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d2a3b77d-fd7c-421e-aa23-d206419b1c7d-frr-conf\") pod \"frr-k8s-rnvr5\" (UID: \"d2a3b77d-fd7c-421e-aa23-d206419b1c7d\") " pod="metallb-system/frr-k8s-rnvr5" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.545709 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d2a3b77d-fd7c-421e-aa23-d206419b1c7d-metrics-certs\") pod \"frr-k8s-rnvr5\" (UID: \"d2a3b77d-fd7c-421e-aa23-d206419b1c7d\") " pod="metallb-system/frr-k8s-rnvr5" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.646849 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d2a3b77d-fd7c-421e-aa23-d206419b1c7d-metrics-certs\") pod \"frr-k8s-rnvr5\" (UID: \"d2a3b77d-fd7c-421e-aa23-d206419b1c7d\") " pod="metallb-system/frr-k8s-rnvr5" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.647108 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh6px\" (UniqueName: \"kubernetes.io/projected/346ed4cd-2bb8-470d-a275-6c297994fb3f-kube-api-access-zh6px\") pod \"controller-6968d8fdc4-rl9k7\" (UID: \"346ed4cd-2bb8-470d-a275-6c297994fb3f\") " pod="metallb-system/controller-6968d8fdc4-rl9k7" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.647198 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/346ed4cd-2bb8-470d-a275-6c297994fb3f-cert\") pod \"controller-6968d8fdc4-rl9k7\" (UID: \"346ed4cd-2bb8-470d-a275-6c297994fb3f\") " pod="metallb-system/controller-6968d8fdc4-rl9k7" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.647276 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d2a3b77d-fd7c-421e-aa23-d206419b1c7d-reloader\") pod \"frr-k8s-rnvr5\" (UID: \"d2a3b77d-fd7c-421e-aa23-d206419b1c7d\") " pod="metallb-system/frr-k8s-rnvr5" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.647344 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d2a3b77d-fd7c-421e-aa23-d206419b1c7d-frr-sockets\") pod \"frr-k8s-rnvr5\" (UID: \"d2a3b77d-fd7c-421e-aa23-d206419b1c7d\") " pod="metallb-system/frr-k8s-rnvr5" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.647410 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/faa36c07-3c7a-4b4a-a04e-58b43a178890-memberlist\") pod 
\"speaker-k88w9\" (UID: \"faa36c07-3c7a-4b4a-a04e-58b43a178890\") " pod="metallb-system/speaker-k88w9" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.647504 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/346ed4cd-2bb8-470d-a275-6c297994fb3f-metrics-certs\") pod \"controller-6968d8fdc4-rl9k7\" (UID: \"346ed4cd-2bb8-470d-a275-6c297994fb3f\") " pod="metallb-system/controller-6968d8fdc4-rl9k7" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.647574 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/faa36c07-3c7a-4b4a-a04e-58b43a178890-metrics-certs\") pod \"speaker-k88w9\" (UID: \"faa36c07-3c7a-4b4a-a04e-58b43a178890\") " pod="metallb-system/speaker-k88w9" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.647843 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2f6eae9c-374b-4ac3-b5d7-04267fe9bf73-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-zfggh\" (UID: \"2f6eae9c-374b-4ac3-b5d7-04267fe9bf73\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zfggh" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.647913 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d2a3b77d-fd7c-421e-aa23-d206419b1c7d-frr-startup\") pod \"frr-k8s-rnvr5\" (UID: \"d2a3b77d-fd7c-421e-aa23-d206419b1c7d\") " pod="metallb-system/frr-k8s-rnvr5" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.647976 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/faa36c07-3c7a-4b4a-a04e-58b43a178890-metallb-excludel2\") pod \"speaker-k88w9\" (UID: \"faa36c07-3c7a-4b4a-a04e-58b43a178890\") " pod="metallb-system/speaker-k88w9" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.648042 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v9n9\" (UniqueName: \"kubernetes.io/projected/d2a3b77d-fd7c-421e-aa23-d206419b1c7d-kube-api-access-6v9n9\") pod \"frr-k8s-rnvr5\" (UID: \"d2a3b77d-fd7c-421e-aa23-d206419b1c7d\") " pod="metallb-system/frr-k8s-rnvr5" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.648117 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q669\" (UniqueName: \"kubernetes.io/projected/2f6eae9c-374b-4ac3-b5d7-04267fe9bf73-kube-api-access-8q669\") pod \"frr-k8s-webhook-server-7df86c4f6c-zfggh\" (UID: \"2f6eae9c-374b-4ac3-b5d7-04267fe9bf73\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zfggh" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.648189 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh4xq\" (UniqueName: \"kubernetes.io/projected/faa36c07-3c7a-4b4a-a04e-58b43a178890-kube-api-access-kh4xq\") pod \"speaker-k88w9\" (UID: \"faa36c07-3c7a-4b4a-a04e-58b43a178890\") " pod="metallb-system/speaker-k88w9" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.648250 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d2a3b77d-fd7c-421e-aa23-d206419b1c7d-metrics\") pod \"frr-k8s-rnvr5\" (UID: \"d2a3b77d-fd7c-421e-aa23-d206419b1c7d\") " 
pod="metallb-system/frr-k8s-rnvr5" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.648317 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d2a3b77d-fd7c-421e-aa23-d206419b1c7d-frr-conf\") pod \"frr-k8s-rnvr5\" (UID: \"d2a3b77d-fd7c-421e-aa23-d206419b1c7d\") " pod="metallb-system/frr-k8s-rnvr5" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.647737 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d2a3b77d-fd7c-421e-aa23-d206419b1c7d-reloader\") pod \"frr-k8s-rnvr5\" (UID: \"d2a3b77d-fd7c-421e-aa23-d206419b1c7d\") " pod="metallb-system/frr-k8s-rnvr5" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.647767 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d2a3b77d-fd7c-421e-aa23-d206419b1c7d-frr-sockets\") pod \"frr-k8s-rnvr5\" (UID: \"d2a3b77d-fd7c-421e-aa23-d206419b1c7d\") " pod="metallb-system/frr-k8s-rnvr5" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.648824 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d2a3b77d-fd7c-421e-aa23-d206419b1c7d-frr-conf\") pod \"frr-k8s-rnvr5\" (UID: \"d2a3b77d-fd7c-421e-aa23-d206419b1c7d\") " pod="metallb-system/frr-k8s-rnvr5" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.648900 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d2a3b77d-fd7c-421e-aa23-d206419b1c7d-metrics\") pod \"frr-k8s-rnvr5\" (UID: \"d2a3b77d-fd7c-421e-aa23-d206419b1c7d\") " pod="metallb-system/frr-k8s-rnvr5" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.649421 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d2a3b77d-fd7c-421e-aa23-d206419b1c7d-frr-startup\") pod \"frr-k8s-rnvr5\" (UID: \"d2a3b77d-fd7c-421e-aa23-d206419b1c7d\") " pod="metallb-system/frr-k8s-rnvr5" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.652678 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2f6eae9c-374b-4ac3-b5d7-04267fe9bf73-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-zfggh\" (UID: \"2f6eae9c-374b-4ac3-b5d7-04267fe9bf73\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zfggh" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.664158 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v9n9\" (UniqueName: \"kubernetes.io/projected/d2a3b77d-fd7c-421e-aa23-d206419b1c7d-kube-api-access-6v9n9\") pod \"frr-k8s-rnvr5\" (UID: \"d2a3b77d-fd7c-421e-aa23-d206419b1c7d\") " pod="metallb-system/frr-k8s-rnvr5" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.669000 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d2a3b77d-fd7c-421e-aa23-d206419b1c7d-metrics-certs\") pod \"frr-k8s-rnvr5\" (UID: \"d2a3b77d-fd7c-421e-aa23-d206419b1c7d\") " pod="metallb-system/frr-k8s-rnvr5" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.669235 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q669\" (UniqueName: \"kubernetes.io/projected/2f6eae9c-374b-4ac3-b5d7-04267fe9bf73-kube-api-access-8q669\") pod \"frr-k8s-webhook-server-7df86c4f6c-zfggh\" (UID: 
\"2f6eae9c-374b-4ac3-b5d7-04267fe9bf73\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zfggh" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.684069 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zfggh" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.749052 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh4xq\" (UniqueName: \"kubernetes.io/projected/faa36c07-3c7a-4b4a-a04e-58b43a178890-kube-api-access-kh4xq\") pod \"speaker-k88w9\" (UID: \"faa36c07-3c7a-4b4a-a04e-58b43a178890\") " pod="metallb-system/speaker-k88w9" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.749100 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh6px\" (UniqueName: \"kubernetes.io/projected/346ed4cd-2bb8-470d-a275-6c297994fb3f-kube-api-access-zh6px\") pod \"controller-6968d8fdc4-rl9k7\" (UID: \"346ed4cd-2bb8-470d-a275-6c297994fb3f\") " pod="metallb-system/controller-6968d8fdc4-rl9k7" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.749118 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/346ed4cd-2bb8-470d-a275-6c297994fb3f-cert\") pod \"controller-6968d8fdc4-rl9k7\" (UID: \"346ed4cd-2bb8-470d-a275-6c297994fb3f\") " pod="metallb-system/controller-6968d8fdc4-rl9k7" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.749134 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/faa36c07-3c7a-4b4a-a04e-58b43a178890-memberlist\") pod \"speaker-k88w9\" (UID: \"faa36c07-3c7a-4b4a-a04e-58b43a178890\") " pod="metallb-system/speaker-k88w9" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.749171 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/346ed4cd-2bb8-470d-a275-6c297994fb3f-metrics-certs\") pod \"controller-6968d8fdc4-rl9k7\" (UID: \"346ed4cd-2bb8-470d-a275-6c297994fb3f\") " pod="metallb-system/controller-6968d8fdc4-rl9k7" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.749186 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/faa36c07-3c7a-4b4a-a04e-58b43a178890-metrics-certs\") pod \"speaker-k88w9\" (UID: \"faa36c07-3c7a-4b4a-a04e-58b43a178890\") " pod="metallb-system/speaker-k88w9" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.749207 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/faa36c07-3c7a-4b4a-a04e-58b43a178890-metallb-excludel2\") pod \"speaker-k88w9\" (UID: \"faa36c07-3c7a-4b4a-a04e-58b43a178890\") " pod="metallb-system/speaker-k88w9" Jan 22 09:16:45 crc kubenswrapper[4811]: E0122 09:16:45.749336 4811 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 22 09:16:45 crc kubenswrapper[4811]: E0122 09:16:45.749382 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/346ed4cd-2bb8-470d-a275-6c297994fb3f-metrics-certs podName:346ed4cd-2bb8-470d-a275-6c297994fb3f nodeName:}" failed. No retries permitted until 2026-01-22 09:16:46.249369864 +0000 UTC m=+650.571556987 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/346ed4cd-2bb8-470d-a275-6c297994fb3f-metrics-certs") pod "controller-6968d8fdc4-rl9k7" (UID: "346ed4cd-2bb8-470d-a275-6c297994fb3f") : secret "controller-certs-secret" not found Jan 22 09:16:45 crc kubenswrapper[4811]: E0122 09:16:45.749447 4811 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 22 09:16:45 crc kubenswrapper[4811]: E0122 09:16:45.749487 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/faa36c07-3c7a-4b4a-a04e-58b43a178890-memberlist podName:faa36c07-3c7a-4b4a-a04e-58b43a178890 nodeName:}" failed. No retries permitted until 2026-01-22 09:16:46.249475593 +0000 UTC m=+650.571662717 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/faa36c07-3c7a-4b4a-a04e-58b43a178890-memberlist") pod "speaker-k88w9" (UID: "faa36c07-3c7a-4b4a-a04e-58b43a178890") : secret "metallb-memberlist" not found Jan 22 09:16:45 crc kubenswrapper[4811]: E0122 09:16:45.749524 4811 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 22 09:16:45 crc kubenswrapper[4811]: E0122 09:16:45.749547 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/faa36c07-3c7a-4b4a-a04e-58b43a178890-metrics-certs podName:faa36c07-3c7a-4b4a-a04e-58b43a178890 nodeName:}" failed. No retries permitted until 2026-01-22 09:16:46.249539874 +0000 UTC m=+650.571726997 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/faa36c07-3c7a-4b4a-a04e-58b43a178890-metrics-certs") pod "speaker-k88w9" (UID: "faa36c07-3c7a-4b4a-a04e-58b43a178890") : secret "speaker-certs-secret" not found Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.749791 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/faa36c07-3c7a-4b4a-a04e-58b43a178890-metallb-excludel2\") pod \"speaker-k88w9\" (UID: \"faa36c07-3c7a-4b4a-a04e-58b43a178890\") " pod="metallb-system/speaker-k88w9" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.751725 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/346ed4cd-2bb8-470d-a275-6c297994fb3f-cert\") pod \"controller-6968d8fdc4-rl9k7\" (UID: \"346ed4cd-2bb8-470d-a275-6c297994fb3f\") " pod="metallb-system/controller-6968d8fdc4-rl9k7" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.769756 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh6px\" (UniqueName: \"kubernetes.io/projected/346ed4cd-2bb8-470d-a275-6c297994fb3f-kube-api-access-zh6px\") pod \"controller-6968d8fdc4-rl9k7\" (UID: \"346ed4cd-2bb8-470d-a275-6c297994fb3f\") " pod="metallb-system/controller-6968d8fdc4-rl9k7" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.770014 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kh4xq\" (UniqueName: \"kubernetes.io/projected/faa36c07-3c7a-4b4a-a04e-58b43a178890-kube-api-access-kh4xq\") pod \"speaker-k88w9\" (UID: \"faa36c07-3c7a-4b4a-a04e-58b43a178890\") " pod="metallb-system/speaker-k88w9" Jan 22 09:16:45 crc kubenswrapper[4811]: I0122 09:16:45.965943 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-rnvr5" Jan 22 09:16:46 crc kubenswrapper[4811]: I0122 09:16:46.054433 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-zfggh"] Jan 22 09:16:46 crc kubenswrapper[4811]: W0122 09:16:46.065374 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f6eae9c_374b_4ac3_b5d7_04267fe9bf73.slice/crio-ef5f447696361155c2947e498b9fa345c482a9eee09299d5eb2ab739b3ddfc80 WatchSource:0}: Error finding container ef5f447696361155c2947e498b9fa345c482a9eee09299d5eb2ab739b3ddfc80: Status 404 returned error can't find the container with id ef5f447696361155c2947e498b9fa345c482a9eee09299d5eb2ab739b3ddfc80 Jan 22 09:16:46 crc kubenswrapper[4811]: I0122 09:16:46.257983 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/faa36c07-3c7a-4b4a-a04e-58b43a178890-memberlist\") pod \"speaker-k88w9\" (UID: \"faa36c07-3c7a-4b4a-a04e-58b43a178890\") " pod="metallb-system/speaker-k88w9" Jan 22 09:16:46 crc kubenswrapper[4811]: I0122 09:16:46.258028 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/346ed4cd-2bb8-470d-a275-6c297994fb3f-metrics-certs\") pod \"controller-6968d8fdc4-rl9k7\" (UID: \"346ed4cd-2bb8-470d-a275-6c297994fb3f\") " pod="metallb-system/controller-6968d8fdc4-rl9k7" Jan 22 09:16:46 crc kubenswrapper[4811]: I0122 09:16:46.258047 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/faa36c07-3c7a-4b4a-a04e-58b43a178890-metrics-certs\") pod \"speaker-k88w9\" (UID: \"faa36c07-3c7a-4b4a-a04e-58b43a178890\") " pod="metallb-system/speaker-k88w9" Jan 22 09:16:46 crc kubenswrapper[4811]: E0122 09:16:46.258313 4811 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 22 09:16:46 crc kubenswrapper[4811]: E0122 09:16:46.258433 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/faa36c07-3c7a-4b4a-a04e-58b43a178890-memberlist podName:faa36c07-3c7a-4b4a-a04e-58b43a178890 nodeName:}" failed. No retries permitted until 2026-01-22 09:16:47.258416599 +0000 UTC m=+651.580603722 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/faa36c07-3c7a-4b4a-a04e-58b43a178890-memberlist") pod "speaker-k88w9" (UID: "faa36c07-3c7a-4b4a-a04e-58b43a178890") : secret "metallb-memberlist" not found Jan 22 09:16:46 crc kubenswrapper[4811]: I0122 09:16:46.261983 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/faa36c07-3c7a-4b4a-a04e-58b43a178890-metrics-certs\") pod \"speaker-k88w9\" (UID: \"faa36c07-3c7a-4b4a-a04e-58b43a178890\") " pod="metallb-system/speaker-k88w9" Jan 22 09:16:46 crc kubenswrapper[4811]: I0122 09:16:46.262429 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/346ed4cd-2bb8-470d-a275-6c297994fb3f-metrics-certs\") pod \"controller-6968d8fdc4-rl9k7\" (UID: \"346ed4cd-2bb8-470d-a275-6c297994fb3f\") " pod="metallb-system/controller-6968d8fdc4-rl9k7" Jan 22 09:16:46 crc kubenswrapper[4811]: I0122 09:16:46.425052 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-rl9k7" Jan 22 09:16:46 crc kubenswrapper[4811]: I0122 09:16:46.474162 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zfggh" event={"ID":"2f6eae9c-374b-4ac3-b5d7-04267fe9bf73","Type":"ContainerStarted","Data":"ef5f447696361155c2947e498b9fa345c482a9eee09299d5eb2ab739b3ddfc80"} Jan 22 09:16:46 crc kubenswrapper[4811]: I0122 09:16:46.475025 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rnvr5" event={"ID":"d2a3b77d-fd7c-421e-aa23-d206419b1c7d","Type":"ContainerStarted","Data":"f7241ea38d4e5352ecf06447e721f562744bd2c2196f1e41167d23bf4302d066"} Jan 22 09:16:46 crc kubenswrapper[4811]: W0122 09:16:46.781491 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod346ed4cd_2bb8_470d_a275_6c297994fb3f.slice/crio-7fb31dcfd47eb439bbb595334b7aa8c0d1e16053a77243599173709e0ac529cf WatchSource:0}: Error finding container 7fb31dcfd47eb439bbb595334b7aa8c0d1e16053a77243599173709e0ac529cf: Status 404 returned error can't find the container with id 7fb31dcfd47eb439bbb595334b7aa8c0d1e16053a77243599173709e0ac529cf Jan 22 09:16:46 crc kubenswrapper[4811]: I0122 09:16:46.787486 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-rl9k7"] Jan 22 09:16:47 crc kubenswrapper[4811]: I0122 09:16:47.269154 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/faa36c07-3c7a-4b4a-a04e-58b43a178890-memberlist\") pod \"speaker-k88w9\" (UID: \"faa36c07-3c7a-4b4a-a04e-58b43a178890\") " pod="metallb-system/speaker-k88w9" Jan 22 09:16:47 crc kubenswrapper[4811]: I0122 09:16:47.278067 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/faa36c07-3c7a-4b4a-a04e-58b43a178890-memberlist\") pod \"speaker-k88w9\" (UID: \"faa36c07-3c7a-4b4a-a04e-58b43a178890\") " pod="metallb-system/speaker-k88w9" Jan 22 09:16:47 crc kubenswrapper[4811]: I0122 09:16:47.285348 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-k88w9" Jan 22 09:16:47 crc kubenswrapper[4811]: W0122 09:16:47.305502 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfaa36c07_3c7a_4b4a_a04e_58b43a178890.slice/crio-abc98e3b77d76277ae0911591501af525510f4a9b72270d1840224eaba71d10b WatchSource:0}: Error finding container abc98e3b77d76277ae0911591501af525510f4a9b72270d1840224eaba71d10b: Status 404 returned error can't find the container with id abc98e3b77d76277ae0911591501af525510f4a9b72270d1840224eaba71d10b Jan 22 09:16:47 crc kubenswrapper[4811]: I0122 09:16:47.482748 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-k88w9" event={"ID":"faa36c07-3c7a-4b4a-a04e-58b43a178890","Type":"ContainerStarted","Data":"abc98e3b77d76277ae0911591501af525510f4a9b72270d1840224eaba71d10b"} Jan 22 09:16:47 crc kubenswrapper[4811]: I0122 09:16:47.485814 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-rl9k7" event={"ID":"346ed4cd-2bb8-470d-a275-6c297994fb3f","Type":"ContainerStarted","Data":"413f99fb0a0dca298e898c167ad85353a514278786d2d8422ac6d87398ff65f0"} Jan 22 09:16:47 crc kubenswrapper[4811]: I0122 09:16:47.485844 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-rl9k7" event={"ID":"346ed4cd-2bb8-470d-a275-6c297994fb3f","Type":"ContainerStarted","Data":"659890d79d2d6ede0fb22242130ddb9d154fdc0ea02e337a50d42b88d2b35f89"} Jan 22 09:16:47 crc kubenswrapper[4811]: I0122 09:16:47.485855 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-rl9k7" event={"ID":"346ed4cd-2bb8-470d-a275-6c297994fb3f","Type":"ContainerStarted","Data":"7fb31dcfd47eb439bbb595334b7aa8c0d1e16053a77243599173709e0ac529cf"} Jan 22 09:16:47 crc kubenswrapper[4811]: I0122 09:16:47.485939 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-rl9k7" Jan 22 09:16:47 crc kubenswrapper[4811]: I0122 09:16:47.502471 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-rl9k7" podStartSLOduration=2.502460407 podStartE2EDuration="2.502460407s" podCreationTimestamp="2026-01-22 09:16:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:16:47.498664409 +0000 UTC m=+651.820851533" watchObservedRunningTime="2026-01-22 09:16:47.502460407 +0000 UTC m=+651.824647530" Jan 22 09:16:48 crc kubenswrapper[4811]: I0122 09:16:48.495332 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-k88w9" event={"ID":"faa36c07-3c7a-4b4a-a04e-58b43a178890","Type":"ContainerStarted","Data":"2c8fc1a6252d7aead95652bcbcaafbe36e60c2441b8d3dc52315ee86adf5f736"} Jan 22 09:16:48 crc kubenswrapper[4811]: I0122 09:16:48.495581 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-k88w9" event={"ID":"faa36c07-3c7a-4b4a-a04e-58b43a178890","Type":"ContainerStarted","Data":"77548273ba0d5494239f58337a447e6773e1835adf2ef600de3186d22bdfb442"} Jan 22 09:16:48 crc kubenswrapper[4811]: I0122 09:16:48.518306 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-k88w9" podStartSLOduration=3.518292226 podStartE2EDuration="3.518292226s" podCreationTimestamp="2026-01-22 09:16:45 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:16:48.511675812 +0000 UTC m=+652.833862935" watchObservedRunningTime="2026-01-22 09:16:48.518292226 +0000 UTC m=+652.840479350" Jan 22 09:16:49 crc kubenswrapper[4811]: I0122 09:16:49.500299 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-k88w9" Jan 22 09:16:53 crc kubenswrapper[4811]: I0122 09:16:53.518691 4811 generic.go:334] "Generic (PLEG): container finished" podID="d2a3b77d-fd7c-421e-aa23-d206419b1c7d" containerID="ea97f6e33dcc13ebd72e98cfc86ed8a1a2ae71cfa5998781c036eb010a22891d" exitCode=0 Jan 22 09:16:53 crc kubenswrapper[4811]: I0122 09:16:53.518792 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rnvr5" event={"ID":"d2a3b77d-fd7c-421e-aa23-d206419b1c7d","Type":"ContainerDied","Data":"ea97f6e33dcc13ebd72e98cfc86ed8a1a2ae71cfa5998781c036eb010a22891d"} Jan 22 09:16:53 crc kubenswrapper[4811]: I0122 09:16:53.520080 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zfggh" event={"ID":"2f6eae9c-374b-4ac3-b5d7-04267fe9bf73","Type":"ContainerStarted","Data":"eed949794371bf934d659e4c79c2ed22448fdc9fa5a9e6ae22c61676822944ad"} Jan 22 09:16:53 crc kubenswrapper[4811]: I0122 09:16:53.520222 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zfggh" Jan 22 09:16:54 crc kubenswrapper[4811]: I0122 09:16:54.525863 4811 generic.go:334] "Generic (PLEG): container finished" podID="d2a3b77d-fd7c-421e-aa23-d206419b1c7d" containerID="30deb3396da70777e9e097eeff0d366607789bcd45cbb75f8248e8ac3429e74a" exitCode=0 Jan 22 09:16:54 crc kubenswrapper[4811]: I0122 09:16:54.525956 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rnvr5" event={"ID":"d2a3b77d-fd7c-421e-aa23-d206419b1c7d","Type":"ContainerDied","Data":"30deb3396da70777e9e097eeff0d366607789bcd45cbb75f8248e8ac3429e74a"} Jan 22 09:16:54 crc kubenswrapper[4811]: I0122 09:16:54.543883 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zfggh" podStartSLOduration=3.063346167 podStartE2EDuration="9.543867668s" podCreationTimestamp="2026-01-22 09:16:45 +0000 UTC" firstStartedPulling="2026-01-22 09:16:46.067003326 +0000 UTC m=+650.389190449" lastFinishedPulling="2026-01-22 09:16:52.547524828 +0000 UTC m=+656.869711950" observedRunningTime="2026-01-22 09:16:53.549561115 +0000 UTC m=+657.871748237" watchObservedRunningTime="2026-01-22 09:16:54.543867668 +0000 UTC m=+658.866054791" Jan 22 09:16:55 crc kubenswrapper[4811]: I0122 09:16:55.533116 4811 generic.go:334] "Generic (PLEG): container finished" podID="d2a3b77d-fd7c-421e-aa23-d206419b1c7d" containerID="cfbdc4f5b7100d78c2028c74adad528bff81f269b5666d1dd29f56df6cf2ad5b" exitCode=0 Jan 22 09:16:55 crc kubenswrapper[4811]: I0122 09:16:55.533154 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rnvr5" event={"ID":"d2a3b77d-fd7c-421e-aa23-d206419b1c7d","Type":"ContainerDied","Data":"cfbdc4f5b7100d78c2028c74adad528bff81f269b5666d1dd29f56df6cf2ad5b"} Jan 22 09:16:56 crc kubenswrapper[4811]: I0122 09:16:56.429770 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-rl9k7" Jan 22 09:16:56 crc kubenswrapper[4811]: I0122 09:16:56.542013 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/frr-k8s-rnvr5" event={"ID":"d2a3b77d-fd7c-421e-aa23-d206419b1c7d","Type":"ContainerStarted","Data":"bf31a53d00068acfd19161866f5376426e54abc7c896343d80316e2f41721178"} Jan 22 09:16:56 crc kubenswrapper[4811]: I0122 09:16:56.542064 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rnvr5" event={"ID":"d2a3b77d-fd7c-421e-aa23-d206419b1c7d","Type":"ContainerStarted","Data":"e2632477fd82b8fb52f661869712f3174e2f36a4b91a33ce5e6aa01ca18c6dd7"} Jan 22 09:16:56 crc kubenswrapper[4811]: I0122 09:16:56.542077 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rnvr5" event={"ID":"d2a3b77d-fd7c-421e-aa23-d206419b1c7d","Type":"ContainerStarted","Data":"245e7776c52e9f742c561f3c877996bfd2093f03cf5ecd6ec60a81c9f834dcdb"} Jan 22 09:16:56 crc kubenswrapper[4811]: I0122 09:16:56.542085 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rnvr5" event={"ID":"d2a3b77d-fd7c-421e-aa23-d206419b1c7d","Type":"ContainerStarted","Data":"4b310db9618c4532ff660a7352d3f21a9cce4f8f08e751cd04322c8c1c1ade3a"} Jan 22 09:16:56 crc kubenswrapper[4811]: I0122 09:16:56.542094 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rnvr5" event={"ID":"d2a3b77d-fd7c-421e-aa23-d206419b1c7d","Type":"ContainerStarted","Data":"906def6a61b83243e59fb388ffbd6ac161eae2c563dd0c023cb699d43e212bf7"} Jan 22 09:16:56 crc kubenswrapper[4811]: I0122 09:16:56.542102 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rnvr5" event={"ID":"d2a3b77d-fd7c-421e-aa23-d206419b1c7d","Type":"ContainerStarted","Data":"e9085dbf8568ea12a0f8183ee6251f7e3a298a59b7e005bc4a4c772aa44d1dfd"} Jan 22 09:16:56 crc kubenswrapper[4811]: I0122 09:16:56.542123 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-rnvr5" Jan 22 09:16:56 crc kubenswrapper[4811]: I0122 09:16:56.557772 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-rnvr5" podStartSLOduration=5.05182636 podStartE2EDuration="11.557755809s" podCreationTimestamp="2026-01-22 09:16:45 +0000 UTC" firstStartedPulling="2026-01-22 09:16:46.046193027 +0000 UTC m=+650.368380151" lastFinishedPulling="2026-01-22 09:16:52.552122487 +0000 UTC m=+656.874309600" observedRunningTime="2026-01-22 09:16:56.555971666 +0000 UTC m=+660.878158788" watchObservedRunningTime="2026-01-22 09:16:56.557755809 +0000 UTC m=+660.879942933" Jan 22 09:16:57 crc kubenswrapper[4811]: I0122 09:16:57.289526 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-k88w9" Jan 22 09:16:59 crc kubenswrapper[4811]: I0122 09:16:59.487381 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-c2sr9"] Jan 22 09:16:59 crc kubenswrapper[4811]: I0122 09:16:59.489157 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-c2sr9" Jan 22 09:16:59 crc kubenswrapper[4811]: I0122 09:16:59.493379 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 22 09:16:59 crc kubenswrapper[4811]: I0122 09:16:59.494519 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 22 09:16:59 crc kubenswrapper[4811]: I0122 09:16:59.494683 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-5jllb" Jan 22 09:16:59 crc kubenswrapper[4811]: I0122 09:16:59.500124 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-c2sr9"] Jan 22 09:16:59 crc kubenswrapper[4811]: I0122 09:16:59.627206 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nmft\" (UniqueName: \"kubernetes.io/projected/5518af80-1f74-4caf-8bc0-80680646bfca-kube-api-access-8nmft\") pod \"openstack-operator-index-c2sr9\" (UID: \"5518af80-1f74-4caf-8bc0-80680646bfca\") " pod="openstack-operators/openstack-operator-index-c2sr9" Jan 22 09:16:59 crc kubenswrapper[4811]: I0122 09:16:59.728165 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nmft\" (UniqueName: \"kubernetes.io/projected/5518af80-1f74-4caf-8bc0-80680646bfca-kube-api-access-8nmft\") pod \"openstack-operator-index-c2sr9\" (UID: \"5518af80-1f74-4caf-8bc0-80680646bfca\") " pod="openstack-operators/openstack-operator-index-c2sr9" Jan 22 09:16:59 crc kubenswrapper[4811]: I0122 09:16:59.742727 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nmft\" (UniqueName: \"kubernetes.io/projected/5518af80-1f74-4caf-8bc0-80680646bfca-kube-api-access-8nmft\") pod \"openstack-operator-index-c2sr9\" (UID: \"5518af80-1f74-4caf-8bc0-80680646bfca\") " pod="openstack-operators/openstack-operator-index-c2sr9" Jan 22 09:16:59 crc kubenswrapper[4811]: I0122 09:16:59.807384 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-c2sr9" Jan 22 09:17:00 crc kubenswrapper[4811]: I0122 09:17:00.143667 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-c2sr9"] Jan 22 09:17:00 crc kubenswrapper[4811]: W0122 09:17:00.147303 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5518af80_1f74_4caf_8bc0_80680646bfca.slice/crio-03480c383e0192674a94316b0b3a8547034853d7c84bba11e212b7a62e5db9f4 WatchSource:0}: Error finding container 03480c383e0192674a94316b0b3a8547034853d7c84bba11e212b7a62e5db9f4: Status 404 returned error can't find the container with id 03480c383e0192674a94316b0b3a8547034853d7c84bba11e212b7a62e5db9f4 Jan 22 09:17:00 crc kubenswrapper[4811]: I0122 09:17:00.558541 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-c2sr9" event={"ID":"5518af80-1f74-4caf-8bc0-80680646bfca","Type":"ContainerStarted","Data":"03480c383e0192674a94316b0b3a8547034853d7c84bba11e212b7a62e5db9f4"} Jan 22 09:17:00 crc kubenswrapper[4811]: I0122 09:17:00.966599 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-rnvr5" Jan 22 09:17:00 crc kubenswrapper[4811]: I0122 09:17:00.996199 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-rnvr5" Jan 22 09:17:02 crc kubenswrapper[4811]: I0122 09:17:02.567753 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-c2sr9" event={"ID":"5518af80-1f74-4caf-8bc0-80680646bfca","Type":"ContainerStarted","Data":"234a6ebc785e5d183dd3414385966609291683828356ec64be2ec25243c52b7f"} Jan 22 09:17:02 crc kubenswrapper[4811]: I0122 09:17:02.581548 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-c2sr9" podStartSLOduration=2.115947777 podStartE2EDuration="3.581532547s" podCreationTimestamp="2026-01-22 09:16:59 +0000 UTC" firstStartedPulling="2026-01-22 09:17:00.148647499 +0000 UTC m=+664.470834621" lastFinishedPulling="2026-01-22 09:17:01.614232267 +0000 UTC m=+665.936419391" observedRunningTime="2026-01-22 09:17:02.576683525 +0000 UTC m=+666.898870647" watchObservedRunningTime="2026-01-22 09:17:02.581532547 +0000 UTC m=+666.903719670" Jan 22 09:17:05 crc kubenswrapper[4811]: I0122 09:17:05.689208 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zfggh" Jan 22 09:17:05 crc kubenswrapper[4811]: I0122 09:17:05.970155 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-rnvr5" Jan 22 09:17:09 crc kubenswrapper[4811]: I0122 09:17:09.808371 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-c2sr9" Jan 22 09:17:09 crc kubenswrapper[4811]: I0122 09:17:09.808600 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-c2sr9" Jan 22 09:17:09 crc kubenswrapper[4811]: I0122 09:17:09.828714 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-c2sr9" Jan 22 09:17:10 crc kubenswrapper[4811]: I0122 09:17:10.616723 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-operator-index-c2sr9" Jan 22 09:17:16 crc kubenswrapper[4811]: I0122 09:17:16.194044 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/3078a1b977323d6d8f95a4018ef06f377c198eb1741282ac05e933b60384v48"] Jan 22 09:17:16 crc kubenswrapper[4811]: I0122 09:17:16.195163 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/3078a1b977323d6d8f95a4018ef06f377c198eb1741282ac05e933b60384v48" Jan 22 09:17:16 crc kubenswrapper[4811]: I0122 09:17:16.196850 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-xmldx" Jan 22 09:17:16 crc kubenswrapper[4811]: I0122 09:17:16.201645 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/3078a1b977323d6d8f95a4018ef06f377c198eb1741282ac05e933b60384v48"] Jan 22 09:17:16 crc kubenswrapper[4811]: I0122 09:17:16.388218 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d014441f-4913-4583-ba8e-b1c20aaeed47-util\") pod \"3078a1b977323d6d8f95a4018ef06f377c198eb1741282ac05e933b60384v48\" (UID: \"d014441f-4913-4583-ba8e-b1c20aaeed47\") " pod="openstack-operators/3078a1b977323d6d8f95a4018ef06f377c198eb1741282ac05e933b60384v48" Jan 22 09:17:16 crc kubenswrapper[4811]: I0122 09:17:16.388671 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk7mm\" (UniqueName: \"kubernetes.io/projected/d014441f-4913-4583-ba8e-b1c20aaeed47-kube-api-access-vk7mm\") pod \"3078a1b977323d6d8f95a4018ef06f377c198eb1741282ac05e933b60384v48\" (UID: \"d014441f-4913-4583-ba8e-b1c20aaeed47\") " pod="openstack-operators/3078a1b977323d6d8f95a4018ef06f377c198eb1741282ac05e933b60384v48" Jan 22 09:17:16 crc kubenswrapper[4811]: I0122 09:17:16.388721 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d014441f-4913-4583-ba8e-b1c20aaeed47-bundle\") pod \"3078a1b977323d6d8f95a4018ef06f377c198eb1741282ac05e933b60384v48\" (UID: \"d014441f-4913-4583-ba8e-b1c20aaeed47\") " pod="openstack-operators/3078a1b977323d6d8f95a4018ef06f377c198eb1741282ac05e933b60384v48" Jan 22 09:17:16 crc kubenswrapper[4811]: I0122 09:17:16.489354 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d014441f-4913-4583-ba8e-b1c20aaeed47-util\") pod \"3078a1b977323d6d8f95a4018ef06f377c198eb1741282ac05e933b60384v48\" (UID: \"d014441f-4913-4583-ba8e-b1c20aaeed47\") " pod="openstack-operators/3078a1b977323d6d8f95a4018ef06f377c198eb1741282ac05e933b60384v48" Jan 22 09:17:16 crc kubenswrapper[4811]: I0122 09:17:16.489409 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk7mm\" (UniqueName: \"kubernetes.io/projected/d014441f-4913-4583-ba8e-b1c20aaeed47-kube-api-access-vk7mm\") pod \"3078a1b977323d6d8f95a4018ef06f377c198eb1741282ac05e933b60384v48\" (UID: \"d014441f-4913-4583-ba8e-b1c20aaeed47\") " pod="openstack-operators/3078a1b977323d6d8f95a4018ef06f377c198eb1741282ac05e933b60384v48" Jan 22 09:17:16 crc kubenswrapper[4811]: I0122 09:17:16.489433 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d014441f-4913-4583-ba8e-b1c20aaeed47-bundle\") pod 
\"3078a1b977323d6d8f95a4018ef06f377c198eb1741282ac05e933b60384v48\" (UID: \"d014441f-4913-4583-ba8e-b1c20aaeed47\") " pod="openstack-operators/3078a1b977323d6d8f95a4018ef06f377c198eb1741282ac05e933b60384v48" Jan 22 09:17:16 crc kubenswrapper[4811]: I0122 09:17:16.489857 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d014441f-4913-4583-ba8e-b1c20aaeed47-bundle\") pod \"3078a1b977323d6d8f95a4018ef06f377c198eb1741282ac05e933b60384v48\" (UID: \"d014441f-4913-4583-ba8e-b1c20aaeed47\") " pod="openstack-operators/3078a1b977323d6d8f95a4018ef06f377c198eb1741282ac05e933b60384v48" Jan 22 09:17:16 crc kubenswrapper[4811]: I0122 09:17:16.489876 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d014441f-4913-4583-ba8e-b1c20aaeed47-util\") pod \"3078a1b977323d6d8f95a4018ef06f377c198eb1741282ac05e933b60384v48\" (UID: \"d014441f-4913-4583-ba8e-b1c20aaeed47\") " pod="openstack-operators/3078a1b977323d6d8f95a4018ef06f377c198eb1741282ac05e933b60384v48" Jan 22 09:17:16 crc kubenswrapper[4811]: I0122 09:17:16.504419 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk7mm\" (UniqueName: \"kubernetes.io/projected/d014441f-4913-4583-ba8e-b1c20aaeed47-kube-api-access-vk7mm\") pod \"3078a1b977323d6d8f95a4018ef06f377c198eb1741282ac05e933b60384v48\" (UID: \"d014441f-4913-4583-ba8e-b1c20aaeed47\") " pod="openstack-operators/3078a1b977323d6d8f95a4018ef06f377c198eb1741282ac05e933b60384v48" Jan 22 09:17:16 crc kubenswrapper[4811]: I0122 09:17:16.507016 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/3078a1b977323d6d8f95a4018ef06f377c198eb1741282ac05e933b60384v48" Jan 22 09:17:16 crc kubenswrapper[4811]: I0122 09:17:16.839786 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/3078a1b977323d6d8f95a4018ef06f377c198eb1741282ac05e933b60384v48"] Jan 22 09:17:16 crc kubenswrapper[4811]: W0122 09:17:16.842344 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd014441f_4913_4583_ba8e_b1c20aaeed47.slice/crio-b4dcaefe4cd63e948b630e1d3dbbb43bcf2050155ed2758302a617df02d122f2 WatchSource:0}: Error finding container b4dcaefe4cd63e948b630e1d3dbbb43bcf2050155ed2758302a617df02d122f2: Status 404 returned error can't find the container with id b4dcaefe4cd63e948b630e1d3dbbb43bcf2050155ed2758302a617df02d122f2 Jan 22 09:17:17 crc kubenswrapper[4811]: I0122 09:17:17.629312 4811 generic.go:334] "Generic (PLEG): container finished" podID="d014441f-4913-4583-ba8e-b1c20aaeed47" containerID="7918d99b473c03e4a192a8f86ff116c79fabe4458c0eb9932596cd614afce2c9" exitCode=0 Jan 22 09:17:17 crc kubenswrapper[4811]: I0122 09:17:17.629352 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3078a1b977323d6d8f95a4018ef06f377c198eb1741282ac05e933b60384v48" event={"ID":"d014441f-4913-4583-ba8e-b1c20aaeed47","Type":"ContainerDied","Data":"7918d99b473c03e4a192a8f86ff116c79fabe4458c0eb9932596cd614afce2c9"} Jan 22 09:17:17 crc kubenswrapper[4811]: I0122 09:17:17.629390 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3078a1b977323d6d8f95a4018ef06f377c198eb1741282ac05e933b60384v48" event={"ID":"d014441f-4913-4583-ba8e-b1c20aaeed47","Type":"ContainerStarted","Data":"b4dcaefe4cd63e948b630e1d3dbbb43bcf2050155ed2758302a617df02d122f2"} Jan 22 09:17:19 crc 
kubenswrapper[4811]: I0122 09:17:19.638988 4811 generic.go:334] "Generic (PLEG): container finished" podID="d014441f-4913-4583-ba8e-b1c20aaeed47" containerID="75eb8765ea610547274c9611eef145115b978d0907cdae271d1d12fe34513d1e" exitCode=0 Jan 22 09:17:19 crc kubenswrapper[4811]: I0122 09:17:19.639075 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3078a1b977323d6d8f95a4018ef06f377c198eb1741282ac05e933b60384v48" event={"ID":"d014441f-4913-4583-ba8e-b1c20aaeed47","Type":"ContainerDied","Data":"75eb8765ea610547274c9611eef145115b978d0907cdae271d1d12fe34513d1e"} Jan 22 09:17:20 crc kubenswrapper[4811]: I0122 09:17:20.645744 4811 generic.go:334] "Generic (PLEG): container finished" podID="d014441f-4913-4583-ba8e-b1c20aaeed47" containerID="424bae986da50aea18256beb0bdd532335a1167ce92dcd3f4682d32251668d13" exitCode=0 Jan 22 09:17:20 crc kubenswrapper[4811]: I0122 09:17:20.645781 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3078a1b977323d6d8f95a4018ef06f377c198eb1741282ac05e933b60384v48" event={"ID":"d014441f-4913-4583-ba8e-b1c20aaeed47","Type":"ContainerDied","Data":"424bae986da50aea18256beb0bdd532335a1167ce92dcd3f4682d32251668d13"} Jan 22 09:17:21 crc kubenswrapper[4811]: I0122 09:17:21.811859 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/3078a1b977323d6d8f95a4018ef06f377c198eb1741282ac05e933b60384v48" Jan 22 09:17:21 crc kubenswrapper[4811]: I0122 09:17:21.939906 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vk7mm\" (UniqueName: \"kubernetes.io/projected/d014441f-4913-4583-ba8e-b1c20aaeed47-kube-api-access-vk7mm\") pod \"d014441f-4913-4583-ba8e-b1c20aaeed47\" (UID: \"d014441f-4913-4583-ba8e-b1c20aaeed47\") " Jan 22 09:17:21 crc kubenswrapper[4811]: I0122 09:17:21.939975 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d014441f-4913-4583-ba8e-b1c20aaeed47-util\") pod \"d014441f-4913-4583-ba8e-b1c20aaeed47\" (UID: \"d014441f-4913-4583-ba8e-b1c20aaeed47\") " Jan 22 09:17:21 crc kubenswrapper[4811]: I0122 09:17:21.940003 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d014441f-4913-4583-ba8e-b1c20aaeed47-bundle\") pod \"d014441f-4913-4583-ba8e-b1c20aaeed47\" (UID: \"d014441f-4913-4583-ba8e-b1c20aaeed47\") " Jan 22 09:17:21 crc kubenswrapper[4811]: I0122 09:17:21.940536 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d014441f-4913-4583-ba8e-b1c20aaeed47-bundle" (OuterVolumeSpecName: "bundle") pod "d014441f-4913-4583-ba8e-b1c20aaeed47" (UID: "d014441f-4913-4583-ba8e-b1c20aaeed47"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:17:21 crc kubenswrapper[4811]: I0122 09:17:21.940886 4811 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d014441f-4913-4583-ba8e-b1c20aaeed47-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:17:21 crc kubenswrapper[4811]: I0122 09:17:21.943699 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d014441f-4913-4583-ba8e-b1c20aaeed47-kube-api-access-vk7mm" (OuterVolumeSpecName: "kube-api-access-vk7mm") pod "d014441f-4913-4583-ba8e-b1c20aaeed47" (UID: "d014441f-4913-4583-ba8e-b1c20aaeed47"). 
InnerVolumeSpecName "kube-api-access-vk7mm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:17:21 crc kubenswrapper[4811]: I0122 09:17:21.949431 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d014441f-4913-4583-ba8e-b1c20aaeed47-util" (OuterVolumeSpecName: "util") pod "d014441f-4913-4583-ba8e-b1c20aaeed47" (UID: "d014441f-4913-4583-ba8e-b1c20aaeed47"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:17:22 crc kubenswrapper[4811]: I0122 09:17:22.041578 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vk7mm\" (UniqueName: \"kubernetes.io/projected/d014441f-4913-4583-ba8e-b1c20aaeed47-kube-api-access-vk7mm\") on node \"crc\" DevicePath \"\"" Jan 22 09:17:22 crc kubenswrapper[4811]: I0122 09:17:22.041601 4811 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d014441f-4913-4583-ba8e-b1c20aaeed47-util\") on node \"crc\" DevicePath \"\"" Jan 22 09:17:22 crc kubenswrapper[4811]: I0122 09:17:22.656197 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3078a1b977323d6d8f95a4018ef06f377c198eb1741282ac05e933b60384v48" event={"ID":"d014441f-4913-4583-ba8e-b1c20aaeed47","Type":"ContainerDied","Data":"b4dcaefe4cd63e948b630e1d3dbbb43bcf2050155ed2758302a617df02d122f2"} Jan 22 09:17:22 crc kubenswrapper[4811]: I0122 09:17:22.656234 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4dcaefe4cd63e948b630e1d3dbbb43bcf2050155ed2758302a617df02d122f2" Jan 22 09:17:22 crc kubenswrapper[4811]: I0122 09:17:22.656245 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/3078a1b977323d6d8f95a4018ef06f377c198eb1741282ac05e933b60384v48" Jan 22 09:17:28 crc kubenswrapper[4811]: I0122 09:17:28.485803 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-5cd76577f9-kn8dt"] Jan 22 09:17:28 crc kubenswrapper[4811]: E0122 09:17:28.486164 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d014441f-4913-4583-ba8e-b1c20aaeed47" containerName="extract" Jan 22 09:17:28 crc kubenswrapper[4811]: I0122 09:17:28.486177 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="d014441f-4913-4583-ba8e-b1c20aaeed47" containerName="extract" Jan 22 09:17:28 crc kubenswrapper[4811]: E0122 09:17:28.486197 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d014441f-4913-4583-ba8e-b1c20aaeed47" containerName="util" Jan 22 09:17:28 crc kubenswrapper[4811]: I0122 09:17:28.486202 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="d014441f-4913-4583-ba8e-b1c20aaeed47" containerName="util" Jan 22 09:17:28 crc kubenswrapper[4811]: E0122 09:17:28.486212 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d014441f-4913-4583-ba8e-b1c20aaeed47" containerName="pull" Jan 22 09:17:28 crc kubenswrapper[4811]: I0122 09:17:28.486218 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="d014441f-4913-4583-ba8e-b1c20aaeed47" containerName="pull" Jan 22 09:17:28 crc kubenswrapper[4811]: I0122 09:17:28.486328 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="d014441f-4913-4583-ba8e-b1c20aaeed47" containerName="extract" Jan 22 09:17:28 crc kubenswrapper[4811]: I0122 09:17:28.486664 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5cd76577f9-kn8dt" Jan 22 09:17:28 crc kubenswrapper[4811]: I0122 09:17:28.488111 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-jqsm6" Jan 22 09:17:28 crc kubenswrapper[4811]: I0122 09:17:28.504158 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb29n\" (UniqueName: \"kubernetes.io/projected/a01a5eb9-0bef-4a6b-af9e-d71281e2ae34-kube-api-access-tb29n\") pod \"openstack-operator-controller-init-5cd76577f9-kn8dt\" (UID: \"a01a5eb9-0bef-4a6b-af9e-d71281e2ae34\") " pod="openstack-operators/openstack-operator-controller-init-5cd76577f9-kn8dt" Jan 22 09:17:28 crc kubenswrapper[4811]: I0122 09:17:28.559861 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5cd76577f9-kn8dt"] Jan 22 09:17:28 crc kubenswrapper[4811]: I0122 09:17:28.605385 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb29n\" (UniqueName: \"kubernetes.io/projected/a01a5eb9-0bef-4a6b-af9e-d71281e2ae34-kube-api-access-tb29n\") pod \"openstack-operator-controller-init-5cd76577f9-kn8dt\" (UID: \"a01a5eb9-0bef-4a6b-af9e-d71281e2ae34\") " pod="openstack-operators/openstack-operator-controller-init-5cd76577f9-kn8dt" Jan 22 09:17:28 crc kubenswrapper[4811]: I0122 09:17:28.620243 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb29n\" (UniqueName: \"kubernetes.io/projected/a01a5eb9-0bef-4a6b-af9e-d71281e2ae34-kube-api-access-tb29n\") pod \"openstack-operator-controller-init-5cd76577f9-kn8dt\" (UID: \"a01a5eb9-0bef-4a6b-af9e-d71281e2ae34\") " pod="openstack-operators/openstack-operator-controller-init-5cd76577f9-kn8dt" Jan 22 09:17:28 crc kubenswrapper[4811]: I0122 09:17:28.799787 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5cd76577f9-kn8dt"
Jan 22 09:17:29 crc kubenswrapper[4811]: I0122 09:17:29.168530 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5cd76577f9-kn8dt"]
Jan 22 09:17:29 crc kubenswrapper[4811]: I0122 09:17:29.684807 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5cd76577f9-kn8dt" event={"ID":"a01a5eb9-0bef-4a6b-af9e-d71281e2ae34","Type":"ContainerStarted","Data":"223600fa746e827045b838752cb389559d0dc6e68167fc41d8a7b96013a5a3e2"}
Jan 22 09:17:34 crc kubenswrapper[4811]: I0122 09:17:34.732005 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5cd76577f9-kn8dt" event={"ID":"a01a5eb9-0bef-4a6b-af9e-d71281e2ae34","Type":"ContainerStarted","Data":"6b2ccdd244cd0468c08367ab7735f6cc4af1bd63999dee01ca8e6279bf86db8a"}
Jan 22 09:17:34 crc kubenswrapper[4811]: I0122 09:17:34.733206 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5cd76577f9-kn8dt"
Jan 22 09:17:34 crc kubenswrapper[4811]: I0122 09:17:34.761034 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-5cd76577f9-kn8dt" podStartSLOduration=1.760633888 podStartE2EDuration="6.761016055s" podCreationTimestamp="2026-01-22 09:17:28 +0000 UTC" firstStartedPulling="2026-01-22 09:17:29.170997708 +0000 UTC m=+693.493184832" lastFinishedPulling="2026-01-22 09:17:34.171379876 +0000 UTC m=+698.493566999" observedRunningTime="2026-01-22 09:17:34.755413962 +0000 UTC m=+699.077601085" watchObservedRunningTime="2026-01-22 09:17:34.761016055 +0000 UTC m=+699.083203177"
Jan 22 09:17:35 crc kubenswrapper[4811]: I0122 09:17:35.501150 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 09:17:35 crc kubenswrapper[4811]: I0122 09:17:35.501460 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 09:17:48 crc kubenswrapper[4811]: I0122 09:17:48.802452 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-5cd76577f9-kn8dt"
Jan 22 09:18:05 crc kubenswrapper[4811]: I0122 09:18:05.500987 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 09:18:05 crc kubenswrapper[4811]: I0122 09:18:05.501332 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.516602 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-pklcs"]
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.517201 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-pklcs"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.521379 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-rgwhg"]
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.521394 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-d877z"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.522083 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-rgwhg"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.526037 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-9n4gz"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.531008 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-pklcs"]
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.533988 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-2ltqr"]
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.534418 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-2ltqr"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.535681 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-8q5bg"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.542179 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-2ltqr"]
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.553408 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-rgwhg"]
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.569743 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-26vqb"]
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.573831 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-26vqb"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.579588 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqwjg\" (UniqueName: \"kubernetes.io/projected/09ad3a19-244b-4685-8c96-0bee227b6547-kube-api-access-gqwjg\") pod \"barbican-operator-controller-manager-59dd8b7cbf-pklcs\" (UID: \"09ad3a19-244b-4685-8c96-0bee227b6547\") " pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-pklcs"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.579680 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdxsk\" (UniqueName: \"kubernetes.io/projected/e6fe0bc0-30b4-4a2f-b36d-93d5b288ecf8-kube-api-access-sdxsk\") pod \"cinder-operator-controller-manager-69cf5d4557-rgwhg\" (UID: \"e6fe0bc0-30b4-4a2f-b36d-93d5b288ecf8\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-rgwhg"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.579705 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk7xl\" (UniqueName: \"kubernetes.io/projected/b0f07719-5203-4d79-82b4-995b8af81a00-kube-api-access-hk7xl\") pod \"glance-operator-controller-manager-78fdd796fd-26vqb\" (UID: \"b0f07719-5203-4d79-82b4-995b8af81a00\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-26vqb"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.579731 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl989\" (UniqueName: \"kubernetes.io/projected/62aa676a-95ae-40a8-9db5-b5fd24a293c2-kube-api-access-fl989\") pod \"designate-operator-controller-manager-b45d7bf98-2ltqr\" (UID: \"62aa676a-95ae-40a8-9db5-b5fd24a293c2\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-2ltqr"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.588349 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-t7dmm"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.624929 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-26vqb"]
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.636675 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-vbbnq"]
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.637408 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-vbbnq"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.643657 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-vbbnq"]
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.649469 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-hd62m"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.657671 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7p5h9"]
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.658359 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7p5h9"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.667937 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-8ln8w"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.672606 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7p5h9"]
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.682354 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdxsk\" (UniqueName: \"kubernetes.io/projected/e6fe0bc0-30b4-4a2f-b36d-93d5b288ecf8-kube-api-access-sdxsk\") pod \"cinder-operator-controller-manager-69cf5d4557-rgwhg\" (UID: \"e6fe0bc0-30b4-4a2f-b36d-93d5b288ecf8\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-rgwhg"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.682387 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk7xl\" (UniqueName: \"kubernetes.io/projected/b0f07719-5203-4d79-82b4-995b8af81a00-kube-api-access-hk7xl\") pod \"glance-operator-controller-manager-78fdd796fd-26vqb\" (UID: \"b0f07719-5203-4d79-82b4-995b8af81a00\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-26vqb"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.682406 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl989\" (UniqueName: \"kubernetes.io/projected/62aa676a-95ae-40a8-9db5-b5fd24a293c2-kube-api-access-fl989\") pod \"designate-operator-controller-manager-b45d7bf98-2ltqr\" (UID: \"62aa676a-95ae-40a8-9db5-b5fd24a293c2\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-2ltqr"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.682457 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgf5q\" (UniqueName: \"kubernetes.io/projected/e019bc4b-f0e7-4a4f-a42c-1486010a63fd-kube-api-access-xgf5q\") pod \"heat-operator-controller-manager-594c8c9d5d-vbbnq\" (UID: \"e019bc4b-f0e7-4a4f-a42c-1486010a63fd\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-vbbnq"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.682492 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqwjg\" (UniqueName: \"kubernetes.io/projected/09ad3a19-244b-4685-8c96-0bee227b6547-kube-api-access-gqwjg\") pod \"barbican-operator-controller-manager-59dd8b7cbf-pklcs\" (UID: \"09ad3a19-244b-4685-8c96-0bee227b6547\") " pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-pklcs"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.682513 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjj9w\" (UniqueName: \"kubernetes.io/projected/62a9fc61-630e-4f4d-9788-f21e25ab4dda-kube-api-access-kjj9w\") pod \"horizon-operator-controller-manager-77d5c5b54f-7p5h9\" (UID: \"62a9fc61-630e-4f4d-9788-f21e25ab4dda\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7p5h9"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.685663 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-54ccf4f85d-r6z6t"]
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.686209 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-r6z6t"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.691647 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-fx6zn"]
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.692271 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-fx6zn"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.701856 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-54ccf4f85d-r6z6t"]
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.703587 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-6gl9s"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.703825 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-47mrr"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.704194 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.730290 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqwjg\" (UniqueName: \"kubernetes.io/projected/09ad3a19-244b-4685-8c96-0bee227b6547-kube-api-access-gqwjg\") pod \"barbican-operator-controller-manager-59dd8b7cbf-pklcs\" (UID: \"09ad3a19-244b-4685-8c96-0bee227b6547\") " pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-pklcs"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.746614 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl989\" (UniqueName: \"kubernetes.io/projected/62aa676a-95ae-40a8-9db5-b5fd24a293c2-kube-api-access-fl989\") pod \"designate-operator-controller-manager-b45d7bf98-2ltqr\" (UID: \"62aa676a-95ae-40a8-9db5-b5fd24a293c2\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-2ltqr"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.750108 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdxsk\" (UniqueName: \"kubernetes.io/projected/e6fe0bc0-30b4-4a2f-b36d-93d5b288ecf8-kube-api-access-sdxsk\") pod \"cinder-operator-controller-manager-69cf5d4557-rgwhg\" (UID: \"e6fe0bc0-30b4-4a2f-b36d-93d5b288ecf8\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-rgwhg"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.755902 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-fx6zn"]
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.759924 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-99m2t"]
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.760642 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-99m2t"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.764748 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-kbb94"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.767232 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk7xl\" (UniqueName: \"kubernetes.io/projected/b0f07719-5203-4d79-82b4-995b8af81a00-kube-api-access-hk7xl\") pod \"glance-operator-controller-manager-78fdd796fd-26vqb\" (UID: \"b0f07719-5203-4d79-82b4-995b8af81a00\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-26vqb"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.782945 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-99m2t"]
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.783269 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn5g6\" (UniqueName: \"kubernetes.io/projected/ce893825-4e8e-4c9b-b37e-a974d7cfda21-kube-api-access-xn5g6\") pod \"ironic-operator-controller-manager-69d6c9f5b8-fx6zn\" (UID: \"ce893825-4e8e-4c9b-b37e-a974d7cfda21\") " pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-fx6zn"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.783352 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/81d4cd92-880c-4806-ab95-fcb009827075-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-r6z6t\" (UID: \"81d4cd92-880c-4806-ab95-fcb009827075\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-r6z6t"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.783433 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhh4t\" (UniqueName: \"kubernetes.io/projected/9dfc4274-a6f5-4dcb-8bcf-b2e0ff337574-kube-api-access-nhh4t\") pod \"keystone-operator-controller-manager-b8b6d4659-99m2t\" (UID: \"9dfc4274-a6f5-4dcb-8bcf-b2e0ff337574\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-99m2t"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.783510 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgf5q\" (UniqueName: \"kubernetes.io/projected/e019bc4b-f0e7-4a4f-a42c-1486010a63fd-kube-api-access-xgf5q\") pod \"heat-operator-controller-manager-594c8c9d5d-vbbnq\" (UID: \"e019bc4b-f0e7-4a4f-a42c-1486010a63fd\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-vbbnq"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.783584 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxjb7\" (UniqueName: \"kubernetes.io/projected/81d4cd92-880c-4806-ab95-fcb009827075-kube-api-access-hxjb7\") pod \"infra-operator-controller-manager-54ccf4f85d-r6z6t\" (UID: \"81d4cd92-880c-4806-ab95-fcb009827075\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-r6z6t"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.783679 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjj9w\" (UniqueName: \"kubernetes.io/projected/62a9fc61-630e-4f4d-9788-f21e25ab4dda-kube-api-access-kjj9w\") pod \"horizon-operator-controller-manager-77d5c5b54f-7p5h9\" (UID: \"62a9fc61-630e-4f4d-9788-f21e25ab4dda\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7p5h9"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.800804 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-4wtlm"]
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.801493 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-4wtlm"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.817559 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-kc9m5"]
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.818224 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-kc9m5"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.825803 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-w75nf"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.827767 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-4wtlm"]
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.827873 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-nj7fg"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.836710 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-kc9m5"]
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.836896 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-pklcs"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.837161 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjj9w\" (UniqueName: \"kubernetes.io/projected/62a9fc61-630e-4f4d-9788-f21e25ab4dda-kube-api-access-kjj9w\") pod \"horizon-operator-controller-manager-77d5c5b54f-7p5h9\" (UID: \"62a9fc61-630e-4f4d-9788-f21e25ab4dda\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7p5h9"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.849056 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-rgwhg"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.852824 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgf5q\" (UniqueName: \"kubernetes.io/projected/e019bc4b-f0e7-4a4f-a42c-1486010a63fd-kube-api-access-xgf5q\") pod \"heat-operator-controller-manager-594c8c9d5d-vbbnq\" (UID: \"e019bc4b-f0e7-4a4f-a42c-1486010a63fd\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-vbbnq"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.860758 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-2ltqr"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.876947 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5d8f59fb49-tll52"]
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.877573 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-tll52"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.884309 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhh4t\" (UniqueName: \"kubernetes.io/projected/9dfc4274-a6f5-4dcb-8bcf-b2e0ff337574-kube-api-access-nhh4t\") pod \"keystone-operator-controller-manager-b8b6d4659-99m2t\" (UID: \"9dfc4274-a6f5-4dcb-8bcf-b2e0ff337574\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-99m2t"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.884362 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcl88\" (UniqueName: \"kubernetes.io/projected/02697a04-4401-498c-9b69-ff0b57ce8f4b-kube-api-access-pcl88\") pod \"mariadb-operator-controller-manager-c87fff755-kc9m5\" (UID: \"02697a04-4401-498c-9b69-ff0b57ce8f4b\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-kc9m5"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.884383 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxjb7\" (UniqueName: \"kubernetes.io/projected/81d4cd92-880c-4806-ab95-fcb009827075-kube-api-access-hxjb7\") pod \"infra-operator-controller-manager-54ccf4f85d-r6z6t\" (UID: \"81d4cd92-880c-4806-ab95-fcb009827075\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-r6z6t"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.884416 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ckdm\" (UniqueName: \"kubernetes.io/projected/688057d8-0445-42c1-b073-83deb026ab4c-kube-api-access-8ckdm\") pod \"manila-operator-controller-manager-78c6999f6f-4wtlm\" (UID: \"688057d8-0445-42c1-b073-83deb026ab4c\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-4wtlm"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.884431 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dbn7\" (UniqueName: \"kubernetes.io/projected/c284b86e-5a0e-4bd4-aa67-4e2c208ea4fa-kube-api-access-5dbn7\") pod \"neutron-operator-controller-manager-5d8f59fb49-tll52\" (UID: \"c284b86e-5a0e-4bd4-aa67-4e2c208ea4fa\") " pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-tll52"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.884471 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn5g6\" (UniqueName: \"kubernetes.io/projected/ce893825-4e8e-4c9b-b37e-a974d7cfda21-kube-api-access-xn5g6\") pod \"ironic-operator-controller-manager-69d6c9f5b8-fx6zn\" (UID: \"ce893825-4e8e-4c9b-b37e-a974d7cfda21\") " pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-fx6zn"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.884490 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/81d4cd92-880c-4806-ab95-fcb009827075-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-r6z6t\" (UID: \"81d4cd92-880c-4806-ab95-fcb009827075\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-r6z6t"
Jan 22 09:18:07 crc kubenswrapper[4811]: E0122 09:18:07.884590 4811 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 22 09:18:07 crc kubenswrapper[4811]: E0122 09:18:07.884642 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81d4cd92-880c-4806-ab95-fcb009827075-cert podName:81d4cd92-880c-4806-ab95-fcb009827075 nodeName:}" failed. No retries permitted until 2026-01-22 09:18:08.384614902 +0000 UTC m=+732.706802026 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/81d4cd92-880c-4806-ab95-fcb009827075-cert") pod "infra-operator-controller-manager-54ccf4f85d-r6z6t" (UID: "81d4cd92-880c-4806-ab95-fcb009827075") : secret "infra-operator-webhook-server-cert" not found
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.890567 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-kpf5p"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.897907 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-26vqb"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.903667 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5d8f59fb49-tll52"]
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.914134 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-h7wzt"]
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.914684 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-h7wzt"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.936479 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-6sbs7"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.937967 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-h7wzt"]
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.945168 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhh4t\" (UniqueName: \"kubernetes.io/projected/9dfc4274-a6f5-4dcb-8bcf-b2e0ff337574-kube-api-access-nhh4t\") pod \"keystone-operator-controller-manager-b8b6d4659-99m2t\" (UID: \"9dfc4274-a6f5-4dcb-8bcf-b2e0ff337574\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-99m2t"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.947176 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn5g6\" (UniqueName: \"kubernetes.io/projected/ce893825-4e8e-4c9b-b37e-a974d7cfda21-kube-api-access-xn5g6\") pod \"ironic-operator-controller-manager-69d6c9f5b8-fx6zn\" (UID: \"ce893825-4e8e-4c9b-b37e-a974d7cfda21\") " pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-fx6zn"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.959375 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxjb7\" (UniqueName: \"kubernetes.io/projected/81d4cd92-880c-4806-ab95-fcb009827075-kube-api-access-hxjb7\") pod \"infra-operator-controller-manager-54ccf4f85d-r6z6t\" (UID: \"81d4cd92-880c-4806-ab95-fcb009827075\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-r6z6t"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.964939 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-t9djx"]
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.965495 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-t9djx"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.968815 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-b8tvf"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.973572 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-vbbnq"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.978516 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7p5h9"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.984981 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt9b6\" (UniqueName: \"kubernetes.io/projected/a247bb8f-a274-481d-916b-8ad80521af31-kube-api-access-gt9b6\") pod \"octavia-operator-controller-manager-7bd9774b6-t9djx\" (UID: \"a247bb8f-a274-481d-916b-8ad80521af31\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-t9djx"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.985019 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcl88\" (UniqueName: \"kubernetes.io/projected/02697a04-4401-498c-9b69-ff0b57ce8f4b-kube-api-access-pcl88\") pod \"mariadb-operator-controller-manager-c87fff755-kc9m5\" (UID: \"02697a04-4401-498c-9b69-ff0b57ce8f4b\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-kc9m5"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.985051 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jffcl\" (UniqueName: \"kubernetes.io/projected/b157cb38-af8a-41bf-a29a-2da5b59aa500-kube-api-access-jffcl\") pod \"nova-operator-controller-manager-6b8bc8d87d-h7wzt\" (UID: \"b157cb38-af8a-41bf-a29a-2da5b59aa500\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-h7wzt"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.985071 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ckdm\" (UniqueName: \"kubernetes.io/projected/688057d8-0445-42c1-b073-83deb026ab4c-kube-api-access-8ckdm\") pod \"manila-operator-controller-manager-78c6999f6f-4wtlm\" (UID: \"688057d8-0445-42c1-b073-83deb026ab4c\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-4wtlm"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.985087 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dbn7\" (UniqueName: \"kubernetes.io/projected/c284b86e-5a0e-4bd4-aa67-4e2c208ea4fa-kube-api-access-5dbn7\") pod \"neutron-operator-controller-manager-5d8f59fb49-tll52\" (UID: \"c284b86e-5a0e-4bd4-aa67-4e2c208ea4fa\") " pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-tll52"
Jan 22 09:18:07 crc kubenswrapper[4811]: I0122 09:18:07.985179 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-t9djx"]
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.007514 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dbn7\" (UniqueName: \"kubernetes.io/projected/c284b86e-5a0e-4bd4-aa67-4e2c208ea4fa-kube-api-access-5dbn7\") pod \"neutron-operator-controller-manager-5d8f59fb49-tll52\" (UID: \"c284b86e-5a0e-4bd4-aa67-4e2c208ea4fa\") " pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-tll52"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.013220 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcl88\" (UniqueName: \"kubernetes.io/projected/02697a04-4401-498c-9b69-ff0b57ce8f4b-kube-api-access-pcl88\") pod \"mariadb-operator-controller-manager-c87fff755-kc9m5\" (UID: \"02697a04-4401-498c-9b69-ff0b57ce8f4b\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-kc9m5"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.021827 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-fx6zn"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.037656 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7fbbcdb4d6qmzp5"]
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.038769 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7fbbcdb4d6qmzp5"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.049096 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ckdm\" (UniqueName: \"kubernetes.io/projected/688057d8-0445-42c1-b073-83deb026ab4c-kube-api-access-8ckdm\") pod \"manila-operator-controller-manager-78c6999f6f-4wtlm\" (UID: \"688057d8-0445-42c1-b073-83deb026ab4c\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-4wtlm"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.049810 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-pb9pm"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.062913 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-xp5jv"]
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.067405 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-xp5jv"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.069041 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.102049 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gt9b6\" (UniqueName: \"kubernetes.io/projected/a247bb8f-a274-481d-916b-8ad80521af31-kube-api-access-gt9b6\") pod \"octavia-operator-controller-manager-7bd9774b6-t9djx\" (UID: \"a247bb8f-a274-481d-916b-8ad80521af31\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-t9djx"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.102114 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jffcl\" (UniqueName: \"kubernetes.io/projected/b157cb38-af8a-41bf-a29a-2da5b59aa500-kube-api-access-jffcl\") pod \"nova-operator-controller-manager-6b8bc8d87d-h7wzt\" (UID: \"b157cb38-af8a-41bf-a29a-2da5b59aa500\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-h7wzt"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.104502 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-5xztq"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.111819 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-99m2t"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.118069 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-xp5jv"]
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.118543 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-4wtlm"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.129942 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-kc9m5"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.154062 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7fbbcdb4d6qmzp5"]
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.167814 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jffcl\" (UniqueName: \"kubernetes.io/projected/b157cb38-af8a-41bf-a29a-2da5b59aa500-kube-api-access-jffcl\") pod \"nova-operator-controller-manager-6b8bc8d87d-h7wzt\" (UID: \"b157cb38-af8a-41bf-a29a-2da5b59aa500\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-h7wzt"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.170382 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt9b6\" (UniqueName: \"kubernetes.io/projected/a247bb8f-a274-481d-916b-8ad80521af31-kube-api-access-gt9b6\") pod \"octavia-operator-controller-manager-7bd9774b6-t9djx\" (UID: \"a247bb8f-a274-481d-916b-8ad80521af31\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-t9djx"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.171924 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-rvsm7"]
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.202839 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-rvsm7"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.216109 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-gq6md"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.217280 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mldfr\" (UniqueName: \"kubernetes.io/projected/b579b636-697b-4a23-9de7-1f9a8537eb94-kube-api-access-mldfr\") pod \"placement-operator-controller-manager-5d646b7d76-rvsm7\" (UID: \"b579b636-697b-4a23-9de7-1f9a8537eb94\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-rvsm7"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.217311 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6nk8\" (UniqueName: \"kubernetes.io/projected/e343c2da-412a-4226-b711-81f83fdbb04b-kube-api-access-j6nk8\") pod \"ovn-operator-controller-manager-55db956ddc-xp5jv\" (UID: \"e343c2da-412a-4226-b711-81f83fdbb04b\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-xp5jv"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.217358 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7c209919-fd54-40e8-a741-7006cf8dd361-cert\") pod \"openstack-baremetal-operator-controller-manager-7fbbcdb4d6qmzp5\" (UID: \"7c209919-fd54-40e8-a741-7006cf8dd361\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7fbbcdb4d6qmzp5"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.217375 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntngj\" (UniqueName: \"kubernetes.io/projected/7c209919-fd54-40e8-a741-7006cf8dd361-kube-api-access-ntngj\") pod \"openstack-baremetal-operator-controller-manager-7fbbcdb4d6qmzp5\" (UID: \"7c209919-fd54-40e8-a741-7006cf8dd361\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7fbbcdb4d6qmzp5"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.217579 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-rvsm7"]
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.222857 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-tll52"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.235737 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-4xkc8"]
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.236500 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4xkc8"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.241342 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-h7wzt"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.253920 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-7btkt"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.260413 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-627nz"]
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.261207 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-627nz"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.269151 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-nkvr8"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.277663 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-4xkc8"]
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.279863 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-t9djx"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.280134 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-627nz"]
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.283789 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-mg5cr"]
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.284590 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-mg5cr"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.291253 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-9bhdg"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.311501 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-mg5cr"]
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.316257 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5ffb9c6597-kxs2j"]
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.316989 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-kxs2j"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.319504 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7c209919-fd54-40e8-a741-7006cf8dd361-cert\") pod \"openstack-baremetal-operator-controller-manager-7fbbcdb4d6qmzp5\" (UID: \"7c209919-fd54-40e8-a741-7006cf8dd361\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7fbbcdb4d6qmzp5"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.319554 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntngj\" (UniqueName: \"kubernetes.io/projected/7c209919-fd54-40e8-a741-7006cf8dd361-kube-api-access-ntngj\") pod \"openstack-baremetal-operator-controller-manager-7fbbcdb4d6qmzp5\" (UID: \"7c209919-fd54-40e8-a741-7006cf8dd361\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7fbbcdb4d6qmzp5"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.319708 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mldfr\" (UniqueName: \"kubernetes.io/projected/b579b636-697b-4a23-9de7-1f9a8537eb94-kube-api-access-mldfr\") pod \"placement-operator-controller-manager-5d646b7d76-rvsm7\" (UID: \"b579b636-697b-4a23-9de7-1f9a8537eb94\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-rvsm7"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.319742 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6nk8\" (UniqueName: \"kubernetes.io/projected/e343c2da-412a-4226-b711-81f83fdbb04b-kube-api-access-j6nk8\") pod \"ovn-operator-controller-manager-55db956ddc-xp5jv\" (UID: \"e343c2da-412a-4226-b711-81f83fdbb04b\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-xp5jv"
Jan 22 09:18:08 crc kubenswrapper[4811]: E0122 09:18:08.320673 4811 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 22 09:18:08 crc kubenswrapper[4811]: E0122 09:18:08.320742 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c209919-fd54-40e8-a741-7006cf8dd361-cert podName:7c209919-fd54-40e8-a741-7006cf8dd361 nodeName:}" failed. No retries permitted until 2026-01-22 09:18:08.820697784 +0000 UTC m=+733.142884907 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7c209919-fd54-40e8-a741-7006cf8dd361-cert") pod "openstack-baremetal-operator-controller-manager-7fbbcdb4d6qmzp5" (UID: "7c209919-fd54-40e8-a741-7006cf8dd361") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.322616 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-p7tfd"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.330318 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5ffb9c6597-kxs2j"]
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.339470 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6nk8\" (UniqueName: \"kubernetes.io/projected/e343c2da-412a-4226-b711-81f83fdbb04b-kube-api-access-j6nk8\") pod \"ovn-operator-controller-manager-55db956ddc-xp5jv\" (UID: \"e343c2da-412a-4226-b711-81f83fdbb04b\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-xp5jv"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.344014 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntngj\" (UniqueName: \"kubernetes.io/projected/7c209919-fd54-40e8-a741-7006cf8dd361-kube-api-access-ntngj\") pod \"openstack-baremetal-operator-controller-manager-7fbbcdb4d6qmzp5\" (UID: \"7c209919-fd54-40e8-a741-7006cf8dd361\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7fbbcdb4d6qmzp5"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.354484 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mldfr\" (UniqueName: \"kubernetes.io/projected/b579b636-697b-4a23-9de7-1f9a8537eb94-kube-api-access-mldfr\") pod \"placement-operator-controller-manager-5d646b7d76-rvsm7\" (UID: \"b579b636-697b-4a23-9de7-1f9a8537eb94\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-rvsm7"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.380727 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-xp5jv"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.423264 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-742x7\" (UniqueName: \"kubernetes.io/projected/42323d0d-05b6-4a0d-a809-405dec7c2893-kube-api-access-742x7\") pod \"telemetry-operator-controller-manager-85cd9769bb-627nz\" (UID: \"42323d0d-05b6-4a0d-a809-405dec7c2893\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-627nz"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.423309 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9pz9\" (UniqueName: \"kubernetes.io/projected/d990df50-3df1-46b6-b6df-5b84bf8eeb20-kube-api-access-t9pz9\") pod \"test-operator-controller-manager-69797bbcbd-mg5cr\" (UID: \"d990df50-3df1-46b6-b6df-5b84bf8eeb20\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-mg5cr"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.423370 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zb5s\" (UniqueName: \"kubernetes.io/projected/15eb97d5-2508-4c32-8b7e-65f1015767cf-kube-api-access-8zb5s\") pod \"watcher-operator-controller-manager-5ffb9c6597-kxs2j\" (UID: \"15eb97d5-2508-4c32-8b7e-65f1015767cf\") " pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-kxs2j"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.423410 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/81d4cd92-880c-4806-ab95-fcb009827075-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-r6z6t\" (UID: \"81d4cd92-880c-4806-ab95-fcb009827075\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-r6z6t"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.423448 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2dr4\" (UniqueName: \"kubernetes.io/projected/b3409f06-ef57-4717-b3e2-9b4f788fd7f0-kube-api-access-w2dr4\") pod \"swift-operator-controller-manager-547cbdb99f-4xkc8\" (UID: \"b3409f06-ef57-4717-b3e2-9b4f788fd7f0\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4xkc8"
Jan 22 09:18:08 crc kubenswrapper[4811]: E0122 09:18:08.427172 4811 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 22 09:18:08 crc kubenswrapper[4811]: E0122 09:18:08.427219 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81d4cd92-880c-4806-ab95-fcb009827075-cert podName:81d4cd92-880c-4806-ab95-fcb009827075 nodeName:}" failed. No retries permitted until 2026-01-22 09:18:09.427206875 +0000 UTC m=+733.749393998 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/81d4cd92-880c-4806-ab95-fcb009827075-cert") pod "infra-operator-controller-manager-54ccf4f85d-r6z6t" (UID: "81d4cd92-880c-4806-ab95-fcb009827075") : secret "infra-operator-webhook-server-cert" not found
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.439675 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-647bb87bbd-v227g"]
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.440452 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-647bb87bbd-v227g"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.444126 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.444189 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.444201 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-4vnj6"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.453223 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-647bb87bbd-v227g"]
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.524172 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9pz9\" (UniqueName: \"kubernetes.io/projected/d990df50-3df1-46b6-b6df-5b84bf8eeb20-kube-api-access-t9pz9\") pod \"test-operator-controller-manager-69797bbcbd-mg5cr\" (UID: \"d990df50-3df1-46b6-b6df-5b84bf8eeb20\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-mg5cr"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.524251 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zb5s\" (UniqueName: \"kubernetes.io/projected/15eb97d5-2508-4c32-8b7e-65f1015767cf-kube-api-access-8zb5s\") pod \"watcher-operator-controller-manager-5ffb9c6597-kxs2j\" (UID: \"15eb97d5-2508-4c32-8b7e-65f1015767cf\") " pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-kxs2j"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.524319 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2dr4\" (UniqueName: \"kubernetes.io/projected/b3409f06-ef57-4717-b3e2-9b4f788fd7f0-kube-api-access-w2dr4\") pod \"swift-operator-controller-manager-547cbdb99f-4xkc8\" (UID: \"b3409f06-ef57-4717-b3e2-9b4f788fd7f0\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4xkc8"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.524356 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-742x7\" (UniqueName: \"kubernetes.io/projected/42323d0d-05b6-4a0d-a809-405dec7c2893-kube-api-access-742x7\") pod \"telemetry-operator-controller-manager-85cd9769bb-627nz\" (UID: \"42323d0d-05b6-4a0d-a809-405dec7c2893\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-627nz"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.547685 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2dr4\" (UniqueName: \"kubernetes.io/projected/b3409f06-ef57-4717-b3e2-9b4f788fd7f0-kube-api-access-w2dr4\") pod \"swift-operator-controller-manager-547cbdb99f-4xkc8\" (UID: \"b3409f06-ef57-4717-b3e2-9b4f788fd7f0\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4xkc8"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.553816 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9pz9\" (UniqueName: \"kubernetes.io/projected/d990df50-3df1-46b6-b6df-5b84bf8eeb20-kube-api-access-t9pz9\") pod \"test-operator-controller-manager-69797bbcbd-mg5cr\" (UID: \"d990df50-3df1-46b6-b6df-5b84bf8eeb20\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-mg5cr"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.555759 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-742x7\" (UniqueName: \"kubernetes.io/projected/42323d0d-05b6-4a0d-a809-405dec7c2893-kube-api-access-742x7\") pod \"telemetry-operator-controller-manager-85cd9769bb-627nz\" (UID: \"42323d0d-05b6-4a0d-a809-405dec7c2893\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-627nz"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.558340 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zb5s\" (UniqueName: \"kubernetes.io/projected/15eb97d5-2508-4c32-8b7e-65f1015767cf-kube-api-access-8zb5s\") pod \"watcher-operator-controller-manager-5ffb9c6597-kxs2j\" (UID: \"15eb97d5-2508-4c32-8b7e-65f1015767cf\") " pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-kxs2j"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.571294 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2l9vr"]
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.571983 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2l9vr"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.575426 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-wlj5l"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.594652 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2l9vr"]
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.608187 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4xkc8"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.618076 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-rvsm7"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.625384 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-477vx\" (UniqueName: \"kubernetes.io/projected/c0b74933-8fe4-4fb1-82af-eda7df5c3c06-kube-api-access-477vx\") pod \"openstack-operator-controller-manager-647bb87bbd-v227g\" (UID: \"c0b74933-8fe4-4fb1-82af-eda7df5c3c06\") " pod="openstack-operators/openstack-operator-controller-manager-647bb87bbd-v227g"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.625463 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c0b74933-8fe4-4fb1-82af-eda7df5c3c06-metrics-certs\") pod \"openstack-operator-controller-manager-647bb87bbd-v227g\" (UID: \"c0b74933-8fe4-4fb1-82af-eda7df5c3c06\") " pod="openstack-operators/openstack-operator-controller-manager-647bb87bbd-v227g"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.625548 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c0b74933-8fe4-4fb1-82af-eda7df5c3c06-webhook-certs\") pod \"openstack-operator-controller-manager-647bb87bbd-v227g\" (UID: \"c0b74933-8fe4-4fb1-82af-eda7df5c3c06\") " pod="openstack-operators/openstack-operator-controller-manager-647bb87bbd-v227g"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.644503 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-627nz"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.656748 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-kxs2j"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.663588 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-mg5cr"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.726103 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrp5t\" (UniqueName: \"kubernetes.io/projected/c624375e-a5cf-49b8-a54a-5770a6c7e738-kube-api-access-rrp5t\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2l9vr\" (UID: \"c624375e-a5cf-49b8-a54a-5770a6c7e738\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2l9vr"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.726149 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c0b74933-8fe4-4fb1-82af-eda7df5c3c06-webhook-certs\") pod \"openstack-operator-controller-manager-647bb87bbd-v227g\" (UID: \"c0b74933-8fe4-4fb1-82af-eda7df5c3c06\") " pod="openstack-operators/openstack-operator-controller-manager-647bb87bbd-v227g"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.726178 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-477vx\" (UniqueName: \"kubernetes.io/projected/c0b74933-8fe4-4fb1-82af-eda7df5c3c06-kube-api-access-477vx\") pod \"openstack-operator-controller-manager-647bb87bbd-v227g\" (UID: \"c0b74933-8fe4-4fb1-82af-eda7df5c3c06\") " pod="openstack-operators/openstack-operator-controller-manager-647bb87bbd-v227g"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.726226 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c0b74933-8fe4-4fb1-82af-eda7df5c3c06-metrics-certs\") pod \"openstack-operator-controller-manager-647bb87bbd-v227g\" (UID: \"c0b74933-8fe4-4fb1-82af-eda7df5c3c06\") " pod="openstack-operators/openstack-operator-controller-manager-647bb87bbd-v227g"
Jan 22 09:18:08 crc kubenswrapper[4811]: E0122 09:18:08.726324 4811 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 22 09:18:08 crc kubenswrapper[4811]: E0122 09:18:08.726362 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0b74933-8fe4-4fb1-82af-eda7df5c3c06-metrics-certs podName:c0b74933-8fe4-4fb1-82af-eda7df5c3c06 nodeName:}" failed. No retries permitted until 2026-01-22 09:18:09.22635083 +0000 UTC m=+733.548537953 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c0b74933-8fe4-4fb1-82af-eda7df5c3c06-metrics-certs") pod "openstack-operator-controller-manager-647bb87bbd-v227g" (UID: "c0b74933-8fe4-4fb1-82af-eda7df5c3c06") : secret "metrics-server-cert" not found
Jan 22 09:18:08 crc kubenswrapper[4811]: E0122 09:18:08.726407 4811 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 22 09:18:08 crc kubenswrapper[4811]: E0122 09:18:08.726424 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0b74933-8fe4-4fb1-82af-eda7df5c3c06-webhook-certs podName:c0b74933-8fe4-4fb1-82af-eda7df5c3c06 nodeName:}" failed. No retries permitted until 2026-01-22 09:18:09.226419038 +0000 UTC m=+733.548606161 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c0b74933-8fe4-4fb1-82af-eda7df5c3c06-webhook-certs") pod "openstack-operator-controller-manager-647bb87bbd-v227g" (UID: "c0b74933-8fe4-4fb1-82af-eda7df5c3c06") : secret "webhook-server-cert" not found
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.769255 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-477vx\" (UniqueName: \"kubernetes.io/projected/c0b74933-8fe4-4fb1-82af-eda7df5c3c06-kube-api-access-477vx\") pod \"openstack-operator-controller-manager-647bb87bbd-v227g\" (UID: \"c0b74933-8fe4-4fb1-82af-eda7df5c3c06\") " pod="openstack-operators/openstack-operator-controller-manager-647bb87bbd-v227g"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.827652 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7c209919-fd54-40e8-a741-7006cf8dd361-cert\") pod \"openstack-baremetal-operator-controller-manager-7fbbcdb4d6qmzp5\" (UID: \"7c209919-fd54-40e8-a741-7006cf8dd361\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7fbbcdb4d6qmzp5"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.827690 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrp5t\" (UniqueName: \"kubernetes.io/projected/c624375e-a5cf-49b8-a54a-5770a6c7e738-kube-api-access-rrp5t\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2l9vr\" (UID: \"c624375e-a5cf-49b8-a54a-5770a6c7e738\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2l9vr"
Jan 22 09:18:08 crc kubenswrapper[4811]: E0122 09:18:08.827858 4811 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 22 09:18:08 crc kubenswrapper[4811]: E0122 09:18:08.827915 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c209919-fd54-40e8-a741-7006cf8dd361-cert podName:7c209919-fd54-40e8-a741-7006cf8dd361 nodeName:}" failed. No retries permitted until 2026-01-22 09:18:09.827899122 +0000 UTC m=+734.150086244 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7c209919-fd54-40e8-a741-7006cf8dd361-cert") pod "openstack-baremetal-operator-controller-manager-7fbbcdb4d6qmzp5" (UID: "7c209919-fd54-40e8-a741-7006cf8dd361") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.849155 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-26vqb"]
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.861866 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrp5t\" (UniqueName: \"kubernetes.io/projected/c624375e-a5cf-49b8-a54a-5770a6c7e738-kube-api-access-rrp5t\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2l9vr\" (UID: \"c624375e-a5cf-49b8-a54a-5770a6c7e738\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2l9vr"
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.880415 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-pklcs"]
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.902153 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-rgwhg"]
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.907162 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7p5h9"]
Jan 22 09:18:08 crc kubenswrapper[4811]: W0122 09:18:08.913335 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62a9fc61_630e_4f4d_9788_f21e25ab4dda.slice/crio-7c6f3ae986e6a364f0f30e29e83fef8488bda725372aea224b2bf58e21c2b201 WatchSource:0}: Error finding container 7c6f3ae986e6a364f0f30e29e83fef8488bda725372aea224b2bf58e21c2b201: Status 404 returned error can't find the container with id 7c6f3ae986e6a364f0f30e29e83fef8488bda725372aea224b2bf58e21c2b201
Jan 22 09:18:08 crc kubenswrapper[4811]: W0122 09:18:08.914192 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6fe0bc0_30b4_4a2f_b36d_93d5b288ecf8.slice/crio-ffbe43ac488489780f4ec3ded08f8033c632e25af4fcb022b8bd79f104550d47 WatchSource:0}: Error finding container ffbe43ac488489780f4ec3ded08f8033c632e25af4fcb022b8bd79f104550d47: Status 404 returned error can't find the container with id ffbe43ac488489780f4ec3ded08f8033c632e25af4fcb022b8bd79f104550d47
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.928477 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-pklcs" event={"ID":"09ad3a19-244b-4685-8c96-0bee227b6547","Type":"ContainerStarted","Data":"14a848d4cbdcb9d46fe7e68b966a69ef827cb7425a3b38ef8de5ec626397286a"}
Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.929029 4811 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2l9vr" Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.938609 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-4wtlm"] Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.939111 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-26vqb" event={"ID":"b0f07719-5203-4d79-82b4-995b8af81a00","Type":"ContainerStarted","Data":"201a5b31ead13bb92675383329f91c22e414c573b5e4013210d01758be398690"} Jan 22 09:18:08 crc kubenswrapper[4811]: W0122 09:18:08.952494 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod688057d8_0445_42c1_b073_83deb026ab4c.slice/crio-44080de54fe898904945b0947b4d082e3a1f427c912f044525670ff833735d8f WatchSource:0}: Error finding container 44080de54fe898904945b0947b4d082e3a1f427c912f044525670ff833735d8f: Status 404 returned error can't find the container with id 44080de54fe898904945b0947b4d082e3a1f427c912f044525670ff833735d8f Jan 22 09:18:08 crc kubenswrapper[4811]: I0122 09:18:08.956578 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7p5h9" event={"ID":"62a9fc61-630e-4f4d-9788-f21e25ab4dda","Type":"ContainerStarted","Data":"7c6f3ae986e6a364f0f30e29e83fef8488bda725372aea224b2bf58e21c2b201"} Jan 22 09:18:09 crc kubenswrapper[4811]: I0122 09:18:09.065943 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-2ltqr"] Jan 22 09:18:09 crc kubenswrapper[4811]: I0122 09:18:09.235891 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c0b74933-8fe4-4fb1-82af-eda7df5c3c06-webhook-certs\") pod \"openstack-operator-controller-manager-647bb87bbd-v227g\" (UID: \"c0b74933-8fe4-4fb1-82af-eda7df5c3c06\") " pod="openstack-operators/openstack-operator-controller-manager-647bb87bbd-v227g" Jan 22 09:18:09 crc kubenswrapper[4811]: E0122 09:18:09.236115 4811 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 22 09:18:09 crc kubenswrapper[4811]: E0122 09:18:09.236184 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0b74933-8fe4-4fb1-82af-eda7df5c3c06-webhook-certs podName:c0b74933-8fe4-4fb1-82af-eda7df5c3c06 nodeName:}" failed. No retries permitted until 2026-01-22 09:18:10.236154762 +0000 UTC m=+734.558341885 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c0b74933-8fe4-4fb1-82af-eda7df5c3c06-webhook-certs") pod "openstack-operator-controller-manager-647bb87bbd-v227g" (UID: "c0b74933-8fe4-4fb1-82af-eda7df5c3c06") : secret "webhook-server-cert" not found Jan 22 09:18:09 crc kubenswrapper[4811]: I0122 09:18:09.236542 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c0b74933-8fe4-4fb1-82af-eda7df5c3c06-metrics-certs\") pod \"openstack-operator-controller-manager-647bb87bbd-v227g\" (UID: \"c0b74933-8fe4-4fb1-82af-eda7df5c3c06\") " pod="openstack-operators/openstack-operator-controller-manager-647bb87bbd-v227g" Jan 22 09:18:09 crc kubenswrapper[4811]: E0122 09:18:09.236672 4811 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 22 09:18:09 crc kubenswrapper[4811]: E0122 09:18:09.236730 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0b74933-8fe4-4fb1-82af-eda7df5c3c06-metrics-certs podName:c0b74933-8fe4-4fb1-82af-eda7df5c3c06 nodeName:}" failed. No retries permitted until 2026-01-22 09:18:10.236695371 +0000 UTC m=+734.558882494 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c0b74933-8fe4-4fb1-82af-eda7df5c3c06-metrics-certs") pod "openstack-operator-controller-manager-647bb87bbd-v227g" (UID: "c0b74933-8fe4-4fb1-82af-eda7df5c3c06") : secret "metrics-server-cert" not found Jan 22 09:18:09 crc kubenswrapper[4811]: I0122 09:18:09.279562 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-99m2t"] Jan 22 09:18:09 crc kubenswrapper[4811]: I0122 09:18:09.285509 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-kc9m5"] Jan 22 09:18:09 crc kubenswrapper[4811]: I0122 09:18:09.291562 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-h7wzt"] Jan 22 09:18:09 crc kubenswrapper[4811]: I0122 09:18:09.305901 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-fx6zn"] Jan 22 09:18:09 crc kubenswrapper[4811]: I0122 09:18:09.320319 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-xp5jv"] Jan 22 09:18:09 crc kubenswrapper[4811]: I0122 09:18:09.327478 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-vbbnq"] Jan 22 09:18:09 crc kubenswrapper[4811]: W0122 09:18:09.331165 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce893825_4e8e_4c9b_b37e_a974d7cfda21.slice/crio-239b15e34a6d0093efa39c80b6676af070517b971f28ea9421a0ca3157942b97 WatchSource:0}: Error finding container 239b15e34a6d0093efa39c80b6676af070517b971f28ea9421a0ca3157942b97: Status 404 returned error can't find the container with id 239b15e34a6d0093efa39c80b6676af070517b971f28ea9421a0ca3157942b97 Jan 22 09:18:09 crc kubenswrapper[4811]: I0122 09:18:09.331648 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-t9djx"] Jan 22 09:18:09 crc kubenswrapper[4811]: W0122 
09:18:09.334135 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda247bb8f_a274_481d_916b_8ad80521af31.slice/crio-ba437d5f326113f2196056cf09a264fb4351665ae46aa56882eeaad8c1a54d6f WatchSource:0}: Error finding container ba437d5f326113f2196056cf09a264fb4351665ae46aa56882eeaad8c1a54d6f: Status 404 returned error can't find the container with id ba437d5f326113f2196056cf09a264fb4351665ae46aa56882eeaad8c1a54d6f Jan 22 09:18:09 crc kubenswrapper[4811]: W0122 09:18:09.337439 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode343c2da_412a_4226_b711_81f83fdbb04b.slice/crio-495fb0c113f2a58176124bc7e1c96923daf81a59485cadfe13014e9fcb2cfd5b WatchSource:0}: Error finding container 495fb0c113f2a58176124bc7e1c96923daf81a59485cadfe13014e9fcb2cfd5b: Status 404 returned error can't find the container with id 495fb0c113f2a58176124bc7e1c96923daf81a59485cadfe13014e9fcb2cfd5b Jan 22 09:18:09 crc kubenswrapper[4811]: E0122 09:18:09.342922 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gt9b6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7bd9774b6-t9djx_openstack-operators(a247bb8f-a274-481d-916b-8ad80521af31): ErrImagePull: pull QPS exceeded" 
logger="UnhandledError" Jan 22 09:18:09 crc kubenswrapper[4811]: E0122 09:18:09.344245 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-t9djx" podUID="a247bb8f-a274-481d-916b-8ad80521af31" Jan 22 09:18:09 crc kubenswrapper[4811]: I0122 09:18:09.439790 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/81d4cd92-880c-4806-ab95-fcb009827075-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-r6z6t\" (UID: \"81d4cd92-880c-4806-ab95-fcb009827075\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-r6z6t" Jan 22 09:18:09 crc kubenswrapper[4811]: E0122 09:18:09.440037 4811 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 22 09:18:09 crc kubenswrapper[4811]: E0122 09:18:09.440079 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81d4cd92-880c-4806-ab95-fcb009827075-cert podName:81d4cd92-880c-4806-ab95-fcb009827075 nodeName:}" failed. No retries permitted until 2026-01-22 09:18:11.440066431 +0000 UTC m=+735.762253553 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/81d4cd92-880c-4806-ab95-fcb009827075-cert") pod "infra-operator-controller-manager-54ccf4f85d-r6z6t" (UID: "81d4cd92-880c-4806-ab95-fcb009827075") : secret "infra-operator-webhook-server-cert" not found Jan 22 09:18:09 crc kubenswrapper[4811]: I0122 09:18:09.451917 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5d8f59fb49-tll52"] Jan 22 09:18:09 crc kubenswrapper[4811]: E0122 09:18:09.458055 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:b57d65d2a968705b9067192a7cb33bd4a12489db87e1d05de78c076f2062cab4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5dbn7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5d8f59fb49-tll52_openstack-operators(c284b86e-5a0e-4bd4-aa67-4e2c208ea4fa): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 22 09:18:09 crc kubenswrapper[4811]: E0122 09:18:09.459148 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-tll52" podUID="c284b86e-5a0e-4bd4-aa67-4e2c208ea4fa" Jan 22 09:18:09 crc kubenswrapper[4811]: I0122 09:18:09.470213 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-rvsm7"] Jan 22 09:18:09 crc kubenswrapper[4811]: W0122 09:18:09.475827 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb579b636_697b_4a23_9de7_1f9a8537eb94.slice/crio-493cffc0654e7b4682496ab4000352510844b6c3ab4d53f8ac7476c1f3291509 WatchSource:0}: Error finding container 493cffc0654e7b4682496ab4000352510844b6c3ab4d53f8ac7476c1f3291509: Status 404 returned error can't find the container with id 493cffc0654e7b4682496ab4000352510844b6c3ab4d53f8ac7476c1f3291509 Jan 22 09:18:09 crc kubenswrapper[4811]: E0122 09:18:09.477749 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mldfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5d646b7d76-rvsm7_openstack-operators(b579b636-697b-4a23-9de7-1f9a8537eb94): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 22 09:18:09 crc kubenswrapper[4811]: E0122 09:18:09.478903 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-rvsm7" podUID="b579b636-697b-4a23-9de7-1f9a8537eb94" Jan 22 09:18:09 crc kubenswrapper[4811]: I0122 09:18:09.495908 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5ffb9c6597-kxs2j"] Jan 22 09:18:09 crc kubenswrapper[4811]: I0122 09:18:09.499046 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-627nz"] Jan 22 09:18:09 crc kubenswrapper[4811]: W0122 09:18:09.500152 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15eb97d5_2508_4c32_8b7e_65f1015767cf.slice/crio-88311c03db93b0b8756b4b8002665e37f82e5a9601354257f859d962b87a67b4 WatchSource:0}: Error finding container 88311c03db93b0b8756b4b8002665e37f82e5a9601354257f859d962b87a67b4: Status 404 returned error can't find the container with id 88311c03db93b0b8756b4b8002665e37f82e5a9601354257f859d962b87a67b4 Jan 22 09:18:09 crc kubenswrapper[4811]: I0122 09:18:09.504130 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-mg5cr"] Jan 22 09:18:09 crc kubenswrapper[4811]: E0122 09:18:09.505541 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m 
DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-742x7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-85cd9769bb-627nz_openstack-operators(42323d0d-05b6-4a0d-a809-405dec7c2893): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 22 09:18:09 crc kubenswrapper[4811]: E0122 09:18:09.509559 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-627nz" podUID="42323d0d-05b6-4a0d-a809-405dec7c2893" Jan 22 09:18:09 crc kubenswrapper[4811]: E0122 09:18:09.511776 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t9pz9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-mg5cr_openstack-operators(d990df50-3df1-46b6-b6df-5b84bf8eeb20): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 22 09:18:09 crc kubenswrapper[4811]: E0122 09:18:09.512877 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-mg5cr" podUID="d990df50-3df1-46b6-b6df-5b84bf8eeb20" Jan 22 09:18:09 crc kubenswrapper[4811]: I0122 09:18:09.514303 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2l9vr"] Jan 22 09:18:09 crc kubenswrapper[4811]: W0122 09:18:09.517306 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc624375e_a5cf_49b8_a54a_5770a6c7e738.slice/crio-48795d21f911c560f149809a1299025900954b9cbf156601afc1dd47b76c3092 WatchSource:0}: Error finding container 48795d21f911c560f149809a1299025900954b9cbf156601afc1dd47b76c3092: Status 404 returned error can't find the container with id 48795d21f911c560f149809a1299025900954b9cbf156601afc1dd47b76c3092 Jan 22 09:18:09 crc kubenswrapper[4811]: E0122 09:18:09.520925 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rrp5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-2l9vr_openstack-operators(c624375e-a5cf-49b8-a54a-5770a6c7e738): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 22 09:18:09 crc kubenswrapper[4811]: E0122 09:18:09.522772 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2l9vr" podUID="c624375e-a5cf-49b8-a54a-5770a6c7e738" Jan 22 09:18:09 crc kubenswrapper[4811]: I0122 09:18:09.533441 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-4xkc8"] Jan 22 09:18:09 crc kubenswrapper[4811]: W0122 09:18:09.535470 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3409f06_ef57_4717_b3e2_9b4f788fd7f0.slice/crio-f0ce50ee3d69ba552a7709e082cac21025c599136da5185698663b932a855cd8 WatchSource:0}: Error finding container f0ce50ee3d69ba552a7709e082cac21025c599136da5185698663b932a855cd8: Status 404 returned error can't find the container with id f0ce50ee3d69ba552a7709e082cac21025c599136da5185698663b932a855cd8 Jan 22 09:18:09 crc kubenswrapper[4811]: E0122 09:18:09.537062 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w2dr4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-4xkc8_openstack-operators(b3409f06-ef57-4717-b3e2-9b4f788fd7f0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 22 09:18:09 crc kubenswrapper[4811]: E0122 09:18:09.538378 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4xkc8" podUID="b3409f06-ef57-4717-b3e2-9b4f788fd7f0" Jan 22 09:18:09 crc kubenswrapper[4811]: I0122 09:18:09.844948 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7c209919-fd54-40e8-a741-7006cf8dd361-cert\") pod \"openstack-baremetal-operator-controller-manager-7fbbcdb4d6qmzp5\" (UID: \"7c209919-fd54-40e8-a741-7006cf8dd361\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7fbbcdb4d6qmzp5" Jan 22 09:18:09 crc kubenswrapper[4811]: E0122 09:18:09.845140 4811 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 09:18:09 crc kubenswrapper[4811]: E0122 09:18:09.845203 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c209919-fd54-40e8-a741-7006cf8dd361-cert podName:7c209919-fd54-40e8-a741-7006cf8dd361 nodeName:}" failed. No retries permitted until 2026-01-22 09:18:11.845174378 +0000 UTC m=+736.167361502 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7c209919-fd54-40e8-a741-7006cf8dd361-cert") pod "openstack-baremetal-operator-controller-manager-7fbbcdb4d6qmzp5" (UID: "7c209919-fd54-40e8-a741-7006cf8dd361") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 09:18:09 crc kubenswrapper[4811]: I0122 09:18:09.969105 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-kxs2j" event={"ID":"15eb97d5-2508-4c32-8b7e-65f1015767cf","Type":"ContainerStarted","Data":"88311c03db93b0b8756b4b8002665e37f82e5a9601354257f859d962b87a67b4"} Jan 22 09:18:09 crc kubenswrapper[4811]: I0122 09:18:09.970534 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-627nz" event={"ID":"42323d0d-05b6-4a0d-a809-405dec7c2893","Type":"ContainerStarted","Data":"bcd4d8167870492906bb965160986f6239d5ccff794de9267d9e77d3414b5a81"} Jan 22 09:18:09 crc kubenswrapper[4811]: I0122 09:18:09.971645 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-xp5jv" event={"ID":"e343c2da-412a-4226-b711-81f83fdbb04b","Type":"ContainerStarted","Data":"495fb0c113f2a58176124bc7e1c96923daf81a59485cadfe13014e9fcb2cfd5b"} Jan 22 09:18:09 crc kubenswrapper[4811]: E0122 09:18:09.971766 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-627nz" podUID="42323d0d-05b6-4a0d-a809-405dec7c2893" Jan 22 09:18:09 crc kubenswrapper[4811]: I0122 09:18:09.972494 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-fx6zn" event={"ID":"ce893825-4e8e-4c9b-b37e-a974d7cfda21","Type":"ContainerStarted","Data":"239b15e34a6d0093efa39c80b6676af070517b971f28ea9421a0ca3157942b97"} Jan 22 09:18:09 crc kubenswrapper[4811]: I0122 09:18:09.973691 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-rvsm7" event={"ID":"b579b636-697b-4a23-9de7-1f9a8537eb94","Type":"ContainerStarted","Data":"493cffc0654e7b4682496ab4000352510844b6c3ab4d53f8ac7476c1f3291509"} Jan 22 09:18:09 crc kubenswrapper[4811]: E0122 09:18:09.976048 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-rvsm7" podUID="b579b636-697b-4a23-9de7-1f9a8537eb94" Jan 22 09:18:09 crc kubenswrapper[4811]: I0122 09:18:09.978615 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-h7wzt" event={"ID":"b157cb38-af8a-41bf-a29a-2da5b59aa500","Type":"ContainerStarted","Data":"0c30b79d1e8ba0b7c9912992c439fa03b71c3d338fdee0cb636b9bbe75e4b4d1"} Jan 22 09:18:09 crc kubenswrapper[4811]: I0122 09:18:09.984789 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-mg5cr" 
event={"ID":"d990df50-3df1-46b6-b6df-5b84bf8eeb20","Type":"ContainerStarted","Data":"e7bcfa586f36b1f4c2b4305c2db620784bb7ec47f59439f2ff2f8081021629a7"} Jan 22 09:18:09 crc kubenswrapper[4811]: I0122 09:18:09.986954 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-99m2t" event={"ID":"9dfc4274-a6f5-4dcb-8bcf-b2e0ff337574","Type":"ContainerStarted","Data":"b6771c720d8e21007cd9d491b58963a5fab79dd1403c9bb9655d6c2c86a3c238"} Jan 22 09:18:09 crc kubenswrapper[4811]: E0122 09:18:09.986848 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-mg5cr" podUID="d990df50-3df1-46b6-b6df-5b84bf8eeb20" Jan 22 09:18:09 crc kubenswrapper[4811]: I0122 09:18:09.988339 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-rgwhg" event={"ID":"e6fe0bc0-30b4-4a2f-b36d-93d5b288ecf8","Type":"ContainerStarted","Data":"ffbe43ac488489780f4ec3ded08f8033c632e25af4fcb022b8bd79f104550d47"} Jan 22 09:18:09 crc kubenswrapper[4811]: I0122 09:18:09.990509 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-t9djx" event={"ID":"a247bb8f-a274-481d-916b-8ad80521af31","Type":"ContainerStarted","Data":"ba437d5f326113f2196056cf09a264fb4351665ae46aa56882eeaad8c1a54d6f"} Jan 22 09:18:09 crc kubenswrapper[4811]: E0122 09:18:09.992445 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-t9djx" podUID="a247bb8f-a274-481d-916b-8ad80521af31" Jan 22 09:18:09 crc kubenswrapper[4811]: E0122 09:18:09.993033 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2l9vr" podUID="c624375e-a5cf-49b8-a54a-5770a6c7e738" Jan 22 09:18:09 crc kubenswrapper[4811]: E0122 09:18:09.996547 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4xkc8" podUID="b3409f06-ef57-4717-b3e2-9b4f788fd7f0" Jan 22 09:18:10 crc kubenswrapper[4811]: I0122 09:18:10.000324 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2l9vr" event={"ID":"c624375e-a5cf-49b8-a54a-5770a6c7e738","Type":"ContainerStarted","Data":"48795d21f911c560f149809a1299025900954b9cbf156601afc1dd47b76c3092"} Jan 22 09:18:10 crc kubenswrapper[4811]: I0122 09:18:10.000351 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-kc9m5" event={"ID":"02697a04-4401-498c-9b69-ff0b57ce8f4b","Type":"ContainerStarted","Data":"0bad53178b9eb8e7ceac3e6b249b4d7af35a2dbdf965e8dafad1ca014fe92b4a"} Jan 22 09:18:10 crc kubenswrapper[4811]: I0122 09:18:10.000361 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4xkc8" event={"ID":"b3409f06-ef57-4717-b3e2-9b4f788fd7f0","Type":"ContainerStarted","Data":"f0ce50ee3d69ba552a7709e082cac21025c599136da5185698663b932a855cd8"} Jan 22 09:18:10 crc kubenswrapper[4811]: I0122 09:18:10.000370 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-vbbnq" event={"ID":"e019bc4b-f0e7-4a4f-a42c-1486010a63fd","Type":"ContainerStarted","Data":"f5cad4bcf8cd442d3cfde2de0684db283948f50c61e199c2b9b88b3c12daf426"} Jan 22 09:18:10 crc kubenswrapper[4811]: I0122 09:18:10.000397 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-tll52" event={"ID":"c284b86e-5a0e-4bd4-aa67-4e2c208ea4fa","Type":"ContainerStarted","Data":"9405d804e0aecbf41a8ff666f071de3b2a16e9e5af904fe3ee1e24d9cf5daca3"} Jan 22 09:18:10 crc kubenswrapper[4811]: E0122 09:18:10.000508 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:b57d65d2a968705b9067192a7cb33bd4a12489db87e1d05de78c076f2062cab4\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-tll52" podUID="c284b86e-5a0e-4bd4-aa67-4e2c208ea4fa" Jan 22 09:18:10 crc kubenswrapper[4811]: I0122 09:18:10.001140 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-4wtlm" event={"ID":"688057d8-0445-42c1-b073-83deb026ab4c","Type":"ContainerStarted","Data":"44080de54fe898904945b0947b4d082e3a1f427c912f044525670ff833735d8f"} Jan 22 09:18:10 crc kubenswrapper[4811]: I0122 09:18:10.004841 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-2ltqr" event={"ID":"62aa676a-95ae-40a8-9db5-b5fd24a293c2","Type":"ContainerStarted","Data":"f865939fef6d4b263efb7fa04b410f0cbd32e9bfc30c3de8a023ecaec22afd6e"} Jan 22 09:18:10 crc kubenswrapper[4811]: I0122 09:18:10.249230 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c0b74933-8fe4-4fb1-82af-eda7df5c3c06-webhook-certs\") pod \"openstack-operator-controller-manager-647bb87bbd-v227g\" (UID: \"c0b74933-8fe4-4fb1-82af-eda7df5c3c06\") " pod="openstack-operators/openstack-operator-controller-manager-647bb87bbd-v227g" Jan 22 09:18:10 crc kubenswrapper[4811]: I0122 09:18:10.249490 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c0b74933-8fe4-4fb1-82af-eda7df5c3c06-metrics-certs\") pod \"openstack-operator-controller-manager-647bb87bbd-v227g\" (UID: \"c0b74933-8fe4-4fb1-82af-eda7df5c3c06\") " pod="openstack-operators/openstack-operator-controller-manager-647bb87bbd-v227g" Jan 22 09:18:10 crc kubenswrapper[4811]: E0122 09:18:10.249636 4811 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 22 09:18:10 crc 
kubenswrapper[4811]: E0122 09:18:10.249674 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0b74933-8fe4-4fb1-82af-eda7df5c3c06-metrics-certs podName:c0b74933-8fe4-4fb1-82af-eda7df5c3c06 nodeName:}" failed. No retries permitted until 2026-01-22 09:18:12.249662599 +0000 UTC m=+736.571849723 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c0b74933-8fe4-4fb1-82af-eda7df5c3c06-metrics-certs") pod "openstack-operator-controller-manager-647bb87bbd-v227g" (UID: "c0b74933-8fe4-4fb1-82af-eda7df5c3c06") : secret "metrics-server-cert" not found Jan 22 09:18:10 crc kubenswrapper[4811]: E0122 09:18:10.249698 4811 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 22 09:18:10 crc kubenswrapper[4811]: E0122 09:18:10.249925 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0b74933-8fe4-4fb1-82af-eda7df5c3c06-webhook-certs podName:c0b74933-8fe4-4fb1-82af-eda7df5c3c06 nodeName:}" failed. No retries permitted until 2026-01-22 09:18:12.24990708 +0000 UTC m=+736.572094253 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c0b74933-8fe4-4fb1-82af-eda7df5c3c06-webhook-certs") pod "openstack-operator-controller-manager-647bb87bbd-v227g" (UID: "c0b74933-8fe4-4fb1-82af-eda7df5c3c06") : secret "webhook-server-cert" not found Jan 22 09:18:11 crc kubenswrapper[4811]: E0122 09:18:11.019985 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4xkc8" podUID="b3409f06-ef57-4717-b3e2-9b4f788fd7f0" Jan 22 09:18:11 crc kubenswrapper[4811]: E0122 09:18:11.020051 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-mg5cr" podUID="d990df50-3df1-46b6-b6df-5b84bf8eeb20" Jan 22 09:18:11 crc kubenswrapper[4811]: E0122 09:18:11.020471 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-rvsm7" podUID="b579b636-697b-4a23-9de7-1f9a8537eb94" Jan 22 09:18:11 crc kubenswrapper[4811]: E0122 09:18:11.020760 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:b57d65d2a968705b9067192a7cb33bd4a12489db87e1d05de78c076f2062cab4\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-tll52" podUID="c284b86e-5a0e-4bd4-aa67-4e2c208ea4fa" Jan 22 09:18:11 crc kubenswrapper[4811]: E0122 09:18:11.021590 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-t9djx" podUID="a247bb8f-a274-481d-916b-8ad80521af31" Jan 22 09:18:11 crc kubenswrapper[4811]: E0122 09:18:11.026247 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-627nz" podUID="42323d0d-05b6-4a0d-a809-405dec7c2893" Jan 22 09:18:11 crc kubenswrapper[4811]: E0122 09:18:11.026258 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2l9vr" podUID="c624375e-a5cf-49b8-a54a-5770a6c7e738" Jan 22 09:18:11 crc kubenswrapper[4811]: I0122 09:18:11.482591 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/81d4cd92-880c-4806-ab95-fcb009827075-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-r6z6t\" (UID: \"81d4cd92-880c-4806-ab95-fcb009827075\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-r6z6t" Jan 22 09:18:11 crc kubenswrapper[4811]: E0122 09:18:11.482745 4811 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 22 09:18:11 crc kubenswrapper[4811]: E0122 09:18:11.482801 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81d4cd92-880c-4806-ab95-fcb009827075-cert podName:81d4cd92-880c-4806-ab95-fcb009827075 nodeName:}" failed. No retries permitted until 2026-01-22 09:18:15.482781605 +0000 UTC m=+739.804968728 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/81d4cd92-880c-4806-ab95-fcb009827075-cert") pod "infra-operator-controller-manager-54ccf4f85d-r6z6t" (UID: "81d4cd92-880c-4806-ab95-fcb009827075") : secret "infra-operator-webhook-server-cert" not found Jan 22 09:18:11 crc kubenswrapper[4811]: I0122 09:18:11.887195 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7c209919-fd54-40e8-a741-7006cf8dd361-cert\") pod \"openstack-baremetal-operator-controller-manager-7fbbcdb4d6qmzp5\" (UID: \"7c209919-fd54-40e8-a741-7006cf8dd361\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7fbbcdb4d6qmzp5" Jan 22 09:18:11 crc kubenswrapper[4811]: E0122 09:18:11.887384 4811 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 09:18:11 crc kubenswrapper[4811]: E0122 09:18:11.887466 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c209919-fd54-40e8-a741-7006cf8dd361-cert podName:7c209919-fd54-40e8-a741-7006cf8dd361 nodeName:}" failed. 
No retries permitted until 2026-01-22 09:18:15.887450216 +0000 UTC m=+740.209637339 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7c209919-fd54-40e8-a741-7006cf8dd361-cert") pod "openstack-baremetal-operator-controller-manager-7fbbcdb4d6qmzp5" (UID: "7c209919-fd54-40e8-a741-7006cf8dd361") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 09:18:12 crc kubenswrapper[4811]: I0122 09:18:12.295751 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c0b74933-8fe4-4fb1-82af-eda7df5c3c06-metrics-certs\") pod \"openstack-operator-controller-manager-647bb87bbd-v227g\" (UID: \"c0b74933-8fe4-4fb1-82af-eda7df5c3c06\") " pod="openstack-operators/openstack-operator-controller-manager-647bb87bbd-v227g" Jan 22 09:18:12 crc kubenswrapper[4811]: I0122 09:18:12.295851 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c0b74933-8fe4-4fb1-82af-eda7df5c3c06-webhook-certs\") pod \"openstack-operator-controller-manager-647bb87bbd-v227g\" (UID: \"c0b74933-8fe4-4fb1-82af-eda7df5c3c06\") " pod="openstack-operators/openstack-operator-controller-manager-647bb87bbd-v227g" Jan 22 09:18:12 crc kubenswrapper[4811]: E0122 09:18:12.295855 4811 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 22 09:18:12 crc kubenswrapper[4811]: E0122 09:18:12.295901 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0b74933-8fe4-4fb1-82af-eda7df5c3c06-metrics-certs podName:c0b74933-8fe4-4fb1-82af-eda7df5c3c06 nodeName:}" failed. No retries permitted until 2026-01-22 09:18:16.295890354 +0000 UTC m=+740.618077477 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c0b74933-8fe4-4fb1-82af-eda7df5c3c06-metrics-certs") pod "openstack-operator-controller-manager-647bb87bbd-v227g" (UID: "c0b74933-8fe4-4fb1-82af-eda7df5c3c06") : secret "metrics-server-cert" not found Jan 22 09:18:12 crc kubenswrapper[4811]: E0122 09:18:12.295930 4811 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 22 09:18:12 crc kubenswrapper[4811]: E0122 09:18:12.295958 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0b74933-8fe4-4fb1-82af-eda7df5c3c06-webhook-certs podName:c0b74933-8fe4-4fb1-82af-eda7df5c3c06 nodeName:}" failed. No retries permitted until 2026-01-22 09:18:16.295948584 +0000 UTC m=+740.618135708 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c0b74933-8fe4-4fb1-82af-eda7df5c3c06-webhook-certs") pod "openstack-operator-controller-manager-647bb87bbd-v227g" (UID: "c0b74933-8fe4-4fb1-82af-eda7df5c3c06") : secret "webhook-server-cert" not found Jan 22 09:18:15 crc kubenswrapper[4811]: I0122 09:18:15.535507 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/81d4cd92-880c-4806-ab95-fcb009827075-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-r6z6t\" (UID: \"81d4cd92-880c-4806-ab95-fcb009827075\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-r6z6t" Jan 22 09:18:15 crc kubenswrapper[4811]: E0122 09:18:15.535668 4811 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 22 09:18:15 crc kubenswrapper[4811]: E0122 09:18:15.535876 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81d4cd92-880c-4806-ab95-fcb009827075-cert podName:81d4cd92-880c-4806-ab95-fcb009827075 nodeName:}" failed. No retries permitted until 2026-01-22 09:18:23.53586096 +0000 UTC m=+747.858048083 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/81d4cd92-880c-4806-ab95-fcb009827075-cert") pod "infra-operator-controller-manager-54ccf4f85d-r6z6t" (UID: "81d4cd92-880c-4806-ab95-fcb009827075") : secret "infra-operator-webhook-server-cert" not found Jan 22 09:18:15 crc kubenswrapper[4811]: I0122 09:18:15.939006 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7c209919-fd54-40e8-a741-7006cf8dd361-cert\") pod \"openstack-baremetal-operator-controller-manager-7fbbcdb4d6qmzp5\" (UID: \"7c209919-fd54-40e8-a741-7006cf8dd361\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7fbbcdb4d6qmzp5" Jan 22 09:18:15 crc kubenswrapper[4811]: E0122 09:18:15.939114 4811 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 09:18:15 crc kubenswrapper[4811]: E0122 09:18:15.939318 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c209919-fd54-40e8-a741-7006cf8dd361-cert podName:7c209919-fd54-40e8-a741-7006cf8dd361 nodeName:}" failed. No retries permitted until 2026-01-22 09:18:23.939304631 +0000 UTC m=+748.261491754 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7c209919-fd54-40e8-a741-7006cf8dd361-cert") pod "openstack-baremetal-operator-controller-manager-7fbbcdb4d6qmzp5" (UID: "7c209919-fd54-40e8-a741-7006cf8dd361") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 09:18:16 crc kubenswrapper[4811]: I0122 09:18:16.342902 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c0b74933-8fe4-4fb1-82af-eda7df5c3c06-webhook-certs\") pod \"openstack-operator-controller-manager-647bb87bbd-v227g\" (UID: \"c0b74933-8fe4-4fb1-82af-eda7df5c3c06\") " pod="openstack-operators/openstack-operator-controller-manager-647bb87bbd-v227g" Jan 22 09:18:16 crc kubenswrapper[4811]: I0122 09:18:16.342996 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c0b74933-8fe4-4fb1-82af-eda7df5c3c06-metrics-certs\") pod \"openstack-operator-controller-manager-647bb87bbd-v227g\" (UID: \"c0b74933-8fe4-4fb1-82af-eda7df5c3c06\") " pod="openstack-operators/openstack-operator-controller-manager-647bb87bbd-v227g" Jan 22 09:18:16 crc kubenswrapper[4811]: E0122 09:18:16.343057 4811 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 22 09:18:16 crc kubenswrapper[4811]: E0122 09:18:16.343112 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0b74933-8fe4-4fb1-82af-eda7df5c3c06-webhook-certs podName:c0b74933-8fe4-4fb1-82af-eda7df5c3c06 nodeName:}" failed. No retries permitted until 2026-01-22 09:18:24.343099024 +0000 UTC m=+748.665286147 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c0b74933-8fe4-4fb1-82af-eda7df5c3c06-webhook-certs") pod "openstack-operator-controller-manager-647bb87bbd-v227g" (UID: "c0b74933-8fe4-4fb1-82af-eda7df5c3c06") : secret "webhook-server-cert" not found Jan 22 09:18:16 crc kubenswrapper[4811]: E0122 09:18:16.343134 4811 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 22 09:18:16 crc kubenswrapper[4811]: E0122 09:18:16.343172 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0b74933-8fe4-4fb1-82af-eda7df5c3c06-metrics-certs podName:c0b74933-8fe4-4fb1-82af-eda7df5c3c06 nodeName:}" failed. No retries permitted until 2026-01-22 09:18:24.343160028 +0000 UTC m=+748.665347151 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c0b74933-8fe4-4fb1-82af-eda7df5c3c06-metrics-certs") pod "openstack-operator-controller-manager-647bb87bbd-v227g" (UID: "c0b74933-8fe4-4fb1-82af-eda7df5c3c06") : secret "metrics-server-cert" not found Jan 22 09:18:22 crc kubenswrapper[4811]: E0122 09:18:22.957584 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822" Jan 22 09:18:22 crc kubenswrapper[4811]: E0122 09:18:22.957932 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kjj9w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-77d5c5b54f-7p5h9_openstack-operators(62a9fc61-630e-4f4d-9788-f21e25ab4dda): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 09:18:22 crc kubenswrapper[4811]: E0122 09:18:22.959060 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7p5h9" 
podUID="62a9fc61-630e-4f4d-9788-f21e25ab4dda" Jan 22 09:18:23 crc kubenswrapper[4811]: E0122 09:18:23.080375 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7p5h9" podUID="62a9fc61-630e-4f4d-9788-f21e25ab4dda" Jan 22 09:18:23 crc kubenswrapper[4811]: I0122 09:18:23.632708 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/81d4cd92-880c-4806-ab95-fcb009827075-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-r6z6t\" (UID: \"81d4cd92-880c-4806-ab95-fcb009827075\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-r6z6t" Jan 22 09:18:23 crc kubenswrapper[4811]: E0122 09:18:23.632875 4811 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 22 09:18:23 crc kubenswrapper[4811]: E0122 09:18:23.632934 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81d4cd92-880c-4806-ab95-fcb009827075-cert podName:81d4cd92-880c-4806-ab95-fcb009827075 nodeName:}" failed. No retries permitted until 2026-01-22 09:18:39.632920877 +0000 UTC m=+763.955108000 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/81d4cd92-880c-4806-ab95-fcb009827075-cert") pod "infra-operator-controller-manager-54ccf4f85d-r6z6t" (UID: "81d4cd92-880c-4806-ab95-fcb009827075") : secret "infra-operator-webhook-server-cert" not found Jan 22 09:18:24 crc kubenswrapper[4811]: I0122 09:18:24.036818 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7c209919-fd54-40e8-a741-7006cf8dd361-cert\") pod \"openstack-baremetal-operator-controller-manager-7fbbcdb4d6qmzp5\" (UID: \"7c209919-fd54-40e8-a741-7006cf8dd361\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7fbbcdb4d6qmzp5" Jan 22 09:18:24 crc kubenswrapper[4811]: E0122 09:18:24.036971 4811 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 09:18:24 crc kubenswrapper[4811]: E0122 09:18:24.037138 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c209919-fd54-40e8-a741-7006cf8dd361-cert podName:7c209919-fd54-40e8-a741-7006cf8dd361 nodeName:}" failed. No retries permitted until 2026-01-22 09:18:40.037121365 +0000 UTC m=+764.359308488 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7c209919-fd54-40e8-a741-7006cf8dd361-cert") pod "openstack-baremetal-operator-controller-manager-7fbbcdb4d6qmzp5" (UID: "7c209919-fd54-40e8-a741-7006cf8dd361") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 09:18:24 crc kubenswrapper[4811]: E0122 09:18:24.258485 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 22 09:18:24 crc kubenswrapper[4811]: E0122 09:18:24.258685 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nhh4t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-99m2t_openstack-operators(9dfc4274-a6f5-4dcb-8bcf-b2e0ff337574): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 09:18:24 crc kubenswrapper[4811]: E0122 09:18:24.261828 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-99m2t" podUID="9dfc4274-a6f5-4dcb-8bcf-b2e0ff337574" Jan 22 09:18:24 crc kubenswrapper[4811]: I0122 09:18:24.343809 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c0b74933-8fe4-4fb1-82af-eda7df5c3c06-webhook-certs\") pod \"openstack-operator-controller-manager-647bb87bbd-v227g\" (UID: \"c0b74933-8fe4-4fb1-82af-eda7df5c3c06\") " pod="openstack-operators/openstack-operator-controller-manager-647bb87bbd-v227g" Jan 22 09:18:24 crc kubenswrapper[4811]: I0122 09:18:24.343898 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c0b74933-8fe4-4fb1-82af-eda7df5c3c06-metrics-certs\") pod \"openstack-operator-controller-manager-647bb87bbd-v227g\" (UID: \"c0b74933-8fe4-4fb1-82af-eda7df5c3c06\") " pod="openstack-operators/openstack-operator-controller-manager-647bb87bbd-v227g" Jan 22 09:18:24 crc kubenswrapper[4811]: E0122 09:18:24.343955 4811 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 22 09:18:24 crc kubenswrapper[4811]: E0122 09:18:24.344026 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0b74933-8fe4-4fb1-82af-eda7df5c3c06-webhook-certs podName:c0b74933-8fe4-4fb1-82af-eda7df5c3c06 nodeName:}" failed. No retries permitted until 2026-01-22 09:18:40.344012963 +0000 UTC m=+764.666200086 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c0b74933-8fe4-4fb1-82af-eda7df5c3c06-webhook-certs") pod "openstack-operator-controller-manager-647bb87bbd-v227g" (UID: "c0b74933-8fe4-4fb1-82af-eda7df5c3c06") : secret "webhook-server-cert" not found Jan 22 09:18:24 crc kubenswrapper[4811]: E0122 09:18:24.344040 4811 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 22 09:18:24 crc kubenswrapper[4811]: E0122 09:18:24.344068 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0b74933-8fe4-4fb1-82af-eda7df5c3c06-metrics-certs podName:c0b74933-8fe4-4fb1-82af-eda7df5c3c06 nodeName:}" failed. No retries permitted until 2026-01-22 09:18:40.344059892 +0000 UTC m=+764.666247015 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c0b74933-8fe4-4fb1-82af-eda7df5c3c06-metrics-certs") pod "openstack-operator-controller-manager-647bb87bbd-v227g" (UID: "c0b74933-8fe4-4fb1-82af-eda7df5c3c06") : secret "metrics-server-cert" not found Jan 22 09:18:25 crc kubenswrapper[4811]: I0122 09:18:25.093590 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-26vqb" event={"ID":"b0f07719-5203-4d79-82b4-995b8af81a00","Type":"ContainerStarted","Data":"c9f4217b50c96c53522547744bcf43f2cb92f8d49a6544ce60fc8842dac8d532"} Jan 22 09:18:25 crc kubenswrapper[4811]: I0122 09:18:25.094015 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-26vqb" Jan 22 09:18:25 crc kubenswrapper[4811]: I0122 09:18:25.102453 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-kc9m5" event={"ID":"02697a04-4401-498c-9b69-ff0b57ce8f4b","Type":"ContainerStarted","Data":"3cbfc8085d1252c6cbbf20f0512d8cbf8215e86a936b8953aa652579088d8ced"} Jan 22 09:18:25 crc kubenswrapper[4811]: I0122 09:18:25.102949 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-kc9m5" Jan 22 09:18:25 crc kubenswrapper[4811]: I0122 09:18:25.108312 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-vbbnq" event={"ID":"e019bc4b-f0e7-4a4f-a42c-1486010a63fd","Type":"ContainerStarted","Data":"c7d64d5ec23daea122d7debe3b607c0729829b95aff29767a62263f2ae4de27d"} Jan 22 09:18:25 crc kubenswrapper[4811]: I0122 09:18:25.108428 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-vbbnq" Jan 22 09:18:25 crc kubenswrapper[4811]: I0122 09:18:25.109820 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-4wtlm" event={"ID":"688057d8-0445-42c1-b073-83deb026ab4c","Type":"ContainerStarted","Data":"942ae5b9659420f93f8ba7418c502012e234f39f5315ba5f9e30b9e57922316c"} Jan 22 09:18:25 crc kubenswrapper[4811]: I0122 09:18:25.109969 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-4wtlm" Jan 22 09:18:25 crc kubenswrapper[4811]: I0122 09:18:25.116261 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-h7wzt" event={"ID":"b157cb38-af8a-41bf-a29a-2da5b59aa500","Type":"ContainerStarted","Data":"e648e7f82059f54ad55bf66686ca1b1f94eb9da1d18d48a95a6eaa39aeaf2049"} Jan 22 09:18:25 crc kubenswrapper[4811]: I0122 09:18:25.116651 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-h7wzt" Jan 22 09:18:25 crc kubenswrapper[4811]: I0122 09:18:25.120465 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-kxs2j" event={"ID":"15eb97d5-2508-4c32-8b7e-65f1015767cf","Type":"ContainerStarted","Data":"575c6fcf59391cb20c6f95d5f4cd93600606a2b2b0795303c7ca818211f93385"} Jan 22 09:18:25 crc kubenswrapper[4811]: I0122 09:18:25.120936 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-kxs2j" Jan 22 09:18:25 crc kubenswrapper[4811]: I0122 09:18:25.122107 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-26vqb" podStartSLOduration=2.76929587 podStartE2EDuration="18.122098659s" podCreationTimestamp="2026-01-22 09:18:07 +0000 UTC" firstStartedPulling="2026-01-22 09:18:08.89039728 +0000 UTC m=+733.212584402" lastFinishedPulling="2026-01-22 09:18:24.243200068 +0000 UTC m=+748.565387191" observedRunningTime="2026-01-22 09:18:25.119664452 +0000 UTC m=+749.441851575" watchObservedRunningTime="2026-01-22 09:18:25.122098659 +0000 UTC m=+749.444285783" Jan 22 09:18:25 crc kubenswrapper[4811]: I0122 09:18:25.134965 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-rgwhg" event={"ID":"e6fe0bc0-30b4-4a2f-b36d-93d5b288ecf8","Type":"ContainerStarted","Data":"b4927dcb372dec103ac55d77942e28385c29793d125fc90f95951d1e2a874af9"} Jan 22 09:18:25 crc kubenswrapper[4811]: I0122 09:18:25.135014 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-rgwhg" Jan 22 09:18:25 crc kubenswrapper[4811]: I0122 09:18:25.141772 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-xp5jv" event={"ID":"e343c2da-412a-4226-b711-81f83fdbb04b","Type":"ContainerStarted","Data":"1edfc7c6fa9cd8f2925a977ffa91f91e599dc92a38622030f4ff4b985fd1ad1b"} Jan 22 09:18:25 crc kubenswrapper[4811]: I0122 09:18:25.142175 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-xp5jv" Jan 22 09:18:25 crc kubenswrapper[4811]: I0122 09:18:25.144293 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-fx6zn" event={"ID":"ce893825-4e8e-4c9b-b37e-a974d7cfda21","Type":"ContainerStarted","Data":"0cdf9da8dc9de49856090d393b0ac1bc044e63f22e70214515d336f8f0377c7f"} Jan 22 09:18:25 crc kubenswrapper[4811]: I0122 09:18:25.144532 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-fx6zn" Jan 22 09:18:25 crc kubenswrapper[4811]: I0122 09:18:25.145411 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-kc9m5" podStartSLOduration=3.184519576 podStartE2EDuration="18.145403847s" podCreationTimestamp="2026-01-22 09:18:07 +0000 UTC" firstStartedPulling="2026-01-22 09:18:09.285795858 +0000 UTC m=+733.607982980" lastFinishedPulling="2026-01-22 09:18:24.246680127 +0000 UTC m=+748.568867251" observedRunningTime="2026-01-22 09:18:25.142443506 +0000 UTC m=+749.464630630" watchObservedRunningTime="2026-01-22 09:18:25.145403847 +0000 UTC m=+749.467590970" Jan 22 09:18:25 crc kubenswrapper[4811]: I0122 09:18:25.153055 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-pklcs" event={"ID":"09ad3a19-244b-4685-8c96-0bee227b6547","Type":"ContainerStarted","Data":"0fb3c63e9e10be39f9fda8a5a3332beefae6000c6705a2f5844cb695f761b94f"} Jan 22 09:18:25 crc kubenswrapper[4811]: I0122 09:18:25.153090 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-pklcs" Jan 22 09:18:25 crc kubenswrapper[4811]: I0122 09:18:25.170338 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-2ltqr" event={"ID":"62aa676a-95ae-40a8-9db5-b5fd24a293c2","Type":"ContainerStarted","Data":"7d4f9b68512282f94f5540712a6545b91d5289137ea336f2af3dd1ebab84fd73"} Jan 22 09:18:25 crc kubenswrapper[4811]: I0122 09:18:25.170415 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-2ltqr" Jan 22 09:18:25 crc kubenswrapper[4811]: E0122 09:18:25.171380 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-99m2t" podUID="9dfc4274-a6f5-4dcb-8bcf-b2e0ff337574" Jan 22 09:18:25 crc kubenswrapper[4811]: I0122 09:18:25.198994 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-vbbnq" podStartSLOduration=3.287865285 podStartE2EDuration="18.198984356s" podCreationTimestamp="2026-01-22 09:18:07 +0000 UTC" firstStartedPulling="2026-01-22 09:18:09.335225042 +0000 UTC m=+733.657412166" lastFinishedPulling="2026-01-22 09:18:24.246344114 +0000 UTC m=+748.568531237" observedRunningTime="2026-01-22 09:18:25.170375934 +0000 UTC m=+749.492563057" watchObservedRunningTime="2026-01-22 09:18:25.198984356 +0000 UTC m=+749.521171480" Jan 22 09:18:25 crc kubenswrapper[4811]: I0122 09:18:25.239299 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-4wtlm" podStartSLOduration=2.948702391 podStartE2EDuration="18.239285937s" podCreationTimestamp="2026-01-22 09:18:07 +0000 UTC" firstStartedPulling="2026-01-22 09:18:08.956665493 +0000 UTC m=+733.278852617" lastFinishedPulling="2026-01-22 09:18:24.247249039 +0000 UTC m=+748.569436163" observedRunningTime="2026-01-22 09:18:25.236423873 +0000 UTC m=+749.558611006" watchObservedRunningTime="2026-01-22 09:18:25.239285937 +0000 UTC m=+749.561473061" Jan 22 09:18:25 crc kubenswrapper[4811]: I0122 09:18:25.241344 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-h7wzt" podStartSLOduration=3.283764415 podStartE2EDuration="18.241336602s" podCreationTimestamp="2026-01-22 09:18:07 +0000 UTC" firstStartedPulling="2026-01-22 09:18:09.290642202 +0000 UTC m=+733.612829325" lastFinishedPulling="2026-01-22 09:18:24.24821439 +0000 UTC m=+748.570401512" observedRunningTime="2026-01-22 09:18:25.201340397 +0000 UTC m=+749.523527520" watchObservedRunningTime="2026-01-22 09:18:25.241336602 +0000 UTC m=+749.563523726" Jan 22 09:18:25 crc kubenswrapper[4811]: I0122 09:18:25.259062 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-xp5jv" podStartSLOduration=2.352696812 podStartE2EDuration="17.259052324s" podCreationTimestamp="2026-01-22 09:18:08 +0000 UTC" firstStartedPulling="2026-01-22 09:18:09.339560214 +0000 UTC m=+733.661747337" lastFinishedPulling="2026-01-22 09:18:24.245915736 +0000 UTC 
m=+748.568102849" observedRunningTime="2026-01-22 09:18:25.254870812 +0000 UTC m=+749.577057935" watchObservedRunningTime="2026-01-22 09:18:25.259052324 +0000 UTC m=+749.581239446" Jan 22 09:18:25 crc kubenswrapper[4811]: I0122 09:18:25.286281 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-pklcs" podStartSLOduration=2.950235341 podStartE2EDuration="18.286268472s" podCreationTimestamp="2026-01-22 09:18:07 +0000 UTC" firstStartedPulling="2026-01-22 09:18:08.909847158 +0000 UTC m=+733.232034282" lastFinishedPulling="2026-01-22 09:18:24.24588029 +0000 UTC m=+748.568067413" observedRunningTime="2026-01-22 09:18:25.283123844 +0000 UTC m=+749.605310967" watchObservedRunningTime="2026-01-22 09:18:25.286268472 +0000 UTC m=+749.608455595" Jan 22 09:18:25 crc kubenswrapper[4811]: I0122 09:18:25.321009 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-2ltqr" podStartSLOduration=3.158924543 podStartE2EDuration="18.320995887s" podCreationTimestamp="2026-01-22 09:18:07 +0000 UTC" firstStartedPulling="2026-01-22 09:18:09.085148101 +0000 UTC m=+733.407335224" lastFinishedPulling="2026-01-22 09:18:24.247219444 +0000 UTC m=+748.569406568" observedRunningTime="2026-01-22 09:18:25.316939692 +0000 UTC m=+749.639126834" watchObservedRunningTime="2026-01-22 09:18:25.320995887 +0000 UTC m=+749.643183010" Jan 22 09:18:25 crc kubenswrapper[4811]: I0122 09:18:25.347268 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-kxs2j" podStartSLOduration=2.602424305 podStartE2EDuration="17.34725433s" podCreationTimestamp="2026-01-22 09:18:08 +0000 UTC" firstStartedPulling="2026-01-22 09:18:09.502401471 +0000 UTC m=+733.824588595" lastFinishedPulling="2026-01-22 09:18:24.247231507 +0000 UTC m=+748.569418620" observedRunningTime="2026-01-22 09:18:25.341971822 +0000 UTC m=+749.664158946" watchObservedRunningTime="2026-01-22 09:18:25.34725433 +0000 UTC m=+749.669441452" Jan 22 09:18:25 crc kubenswrapper[4811]: I0122 09:18:25.402608 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-fx6zn" podStartSLOduration=3.491715085 podStartE2EDuration="18.40259679s" podCreationTimestamp="2026-01-22 09:18:07 +0000 UTC" firstStartedPulling="2026-01-22 09:18:09.333869857 +0000 UTC m=+733.656056981" lastFinishedPulling="2026-01-22 09:18:24.244751563 +0000 UTC m=+748.566938686" observedRunningTime="2026-01-22 09:18:25.400230791 +0000 UTC m=+749.722417933" watchObservedRunningTime="2026-01-22 09:18:25.40259679 +0000 UTC m=+749.724783913" Jan 22 09:18:25 crc kubenswrapper[4811]: I0122 09:18:25.421543 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-rgwhg" podStartSLOduration=3.487531963 podStartE2EDuration="18.421515148s" podCreationTimestamp="2026-01-22 09:18:07 +0000 UTC" firstStartedPulling="2026-01-22 09:18:08.924858162 +0000 UTC m=+733.247045286" lastFinishedPulling="2026-01-22 09:18:23.858841348 +0000 UTC m=+748.181028471" observedRunningTime="2026-01-22 09:18:25.418767249 +0000 UTC m=+749.740954372" watchObservedRunningTime="2026-01-22 09:18:25.421515148 +0000 UTC m=+749.743702272" Jan 22 09:18:33 crc kubenswrapper[4811]: I0122 09:18:33.217612 4811 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2l9vr" event={"ID":"c624375e-a5cf-49b8-a54a-5770a6c7e738","Type":"ContainerStarted","Data":"53b172b30a6a9b037820d5161b8c9bfabafea4b0337adb11f3d2c293f3921658"} Jan 22 09:18:33 crc kubenswrapper[4811]: I0122 09:18:33.219612 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-627nz" event={"ID":"42323d0d-05b6-4a0d-a809-405dec7c2893","Type":"ContainerStarted","Data":"3dc12f1ba88d8934c0fd989672dae0c7c2711c8d3865d57d37563f04666f649e"} Jan 22 09:18:33 crc kubenswrapper[4811]: I0122 09:18:33.219883 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-627nz" Jan 22 09:18:33 crc kubenswrapper[4811]: I0122 09:18:33.220981 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-t9djx" event={"ID":"a247bb8f-a274-481d-916b-8ad80521af31","Type":"ContainerStarted","Data":"945946a7f03219b1ab8ee8d06c4cfe33c96fb201ea5ee11e13ad03190961a6b0"} Jan 22 09:18:33 crc kubenswrapper[4811]: I0122 09:18:33.221165 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-t9djx" Jan 22 09:18:33 crc kubenswrapper[4811]: I0122 09:18:33.222506 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-tll52" event={"ID":"c284b86e-5a0e-4bd4-aa67-4e2c208ea4fa","Type":"ContainerStarted","Data":"8eb6163d1552812534e7a7b4d8d9b86b85650d1106438a6500df6d42c4938058"} Jan 22 09:18:33 crc kubenswrapper[4811]: I0122 09:18:33.222671 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-tll52" Jan 22 09:18:33 crc kubenswrapper[4811]: I0122 09:18:33.223646 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-mg5cr" event={"ID":"d990df50-3df1-46b6-b6df-5b84bf8eeb20","Type":"ContainerStarted","Data":"237becdabccca3301335086d41149c7b0f3bc820416c2c66f33d24c5f010d96f"} Jan 22 09:18:33 crc kubenswrapper[4811]: I0122 09:18:33.223802 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-mg5cr" Jan 22 09:18:33 crc kubenswrapper[4811]: I0122 09:18:33.224747 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-rvsm7" event={"ID":"b579b636-697b-4a23-9de7-1f9a8537eb94","Type":"ContainerStarted","Data":"8bc3ae140789019779daddad9c343734e0c6c5629a3e24379a3fb8b3e2490ab3"} Jan 22 09:18:33 crc kubenswrapper[4811]: I0122 09:18:33.224907 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-rvsm7" Jan 22 09:18:33 crc kubenswrapper[4811]: I0122 09:18:33.226310 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4xkc8" event={"ID":"b3409f06-ef57-4717-b3e2-9b4f788fd7f0","Type":"ContainerStarted","Data":"cfcd4616122112fa77900f8dbedf22028f018d806db1f5ba54c4b66f9b0aa80e"} Jan 22 09:18:33 crc kubenswrapper[4811]: I0122 09:18:33.226444 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4xkc8" Jan 22 09:18:33 crc kubenswrapper[4811]: I0122 09:18:33.237504 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2l9vr" podStartSLOduration=2.064556557 podStartE2EDuration="25.237494388s" podCreationTimestamp="2026-01-22 09:18:08 +0000 UTC" firstStartedPulling="2026-01-22 09:18:09.520845656 +0000 UTC m=+733.843032779" lastFinishedPulling="2026-01-22 09:18:32.693783487 +0000 UTC m=+757.015970610" observedRunningTime="2026-01-22 09:18:33.233270035 +0000 UTC m=+757.555457158" watchObservedRunningTime="2026-01-22 09:18:33.237494388 +0000 UTC m=+757.559681510" Jan 22 09:18:33 crc kubenswrapper[4811]: I0122 09:18:33.256295 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-tll52" podStartSLOduration=3.033849357 podStartE2EDuration="26.256285114s" podCreationTimestamp="2026-01-22 09:18:07 +0000 UTC" firstStartedPulling="2026-01-22 09:18:09.457955939 +0000 UTC m=+733.780143062" lastFinishedPulling="2026-01-22 09:18:32.680391695 +0000 UTC m=+757.002578819" observedRunningTime="2026-01-22 09:18:33.254672966 +0000 UTC m=+757.576860089" watchObservedRunningTime="2026-01-22 09:18:33.256285114 +0000 UTC m=+757.578472238" Jan 22 09:18:33 crc kubenswrapper[4811]: I0122 09:18:33.282301 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-mg5cr" podStartSLOduration=2.097357931 podStartE2EDuration="25.282287916s" podCreationTimestamp="2026-01-22 09:18:08 +0000 UTC" firstStartedPulling="2026-01-22 09:18:09.511646647 +0000 UTC m=+733.833833770" lastFinishedPulling="2026-01-22 09:18:32.696576631 +0000 UTC m=+757.018763755" observedRunningTime="2026-01-22 09:18:33.279214503 +0000 UTC m=+757.601401625" watchObservedRunningTime="2026-01-22 09:18:33.282287916 +0000 UTC m=+757.604475039" Jan 22 09:18:33 crc kubenswrapper[4811]: I0122 09:18:33.296311 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-t9djx" podStartSLOduration=2.958258032 podStartE2EDuration="26.296299976s" podCreationTimestamp="2026-01-22 09:18:07 +0000 UTC" firstStartedPulling="2026-01-22 09:18:09.342798027 +0000 UTC m=+733.664985150" lastFinishedPulling="2026-01-22 09:18:32.68083997 +0000 UTC m=+757.003027094" observedRunningTime="2026-01-22 09:18:33.294876514 +0000 UTC m=+757.617063637" watchObservedRunningTime="2026-01-22 09:18:33.296299976 +0000 UTC m=+757.618487099" Jan 22 09:18:33 crc kubenswrapper[4811]: I0122 09:18:33.317651 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-rvsm7" podStartSLOduration=2.088100813 podStartE2EDuration="25.317639838s" podCreationTimestamp="2026-01-22 09:18:08 +0000 UTC" firstStartedPulling="2026-01-22 09:18:09.477596107 +0000 UTC m=+733.799783230" lastFinishedPulling="2026-01-22 09:18:32.707135132 +0000 UTC m=+757.029322255" observedRunningTime="2026-01-22 09:18:33.312907288 +0000 UTC m=+757.635094411" watchObservedRunningTime="2026-01-22 09:18:33.317639838 +0000 UTC m=+757.639826961" Jan 22 09:18:33 crc kubenswrapper[4811]: I0122 09:18:33.326974 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-627nz" podStartSLOduration=2.139581685 podStartE2EDuration="25.326958252s" podCreationTimestamp="2026-01-22 09:18:08 +0000 UTC" firstStartedPulling="2026-01-22 09:18:09.505448554 +0000 UTC m=+733.827635667" lastFinishedPulling="2026-01-22 09:18:32.692825111 +0000 UTC m=+757.015012234" observedRunningTime="2026-01-22 09:18:33.325584913 +0000 UTC m=+757.647772036" watchObservedRunningTime="2026-01-22 09:18:33.326958252 +0000 UTC m=+757.649145374" Jan 22 09:18:33 crc kubenswrapper[4811]: I0122 09:18:33.346728 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4xkc8" podStartSLOduration=2.188648068 podStartE2EDuration="25.346716422s" podCreationTimestamp="2026-01-22 09:18:08 +0000 UTC" firstStartedPulling="2026-01-22 09:18:09.536992209 +0000 UTC m=+733.859179333" lastFinishedPulling="2026-01-22 09:18:32.695060564 +0000 UTC m=+757.017247687" observedRunningTime="2026-01-22 09:18:33.339660875 +0000 UTC m=+757.661847997" watchObservedRunningTime="2026-01-22 09:18:33.346716422 +0000 UTC m=+757.668903545" Jan 22 09:18:35 crc kubenswrapper[4811]: I0122 09:18:35.501583 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:18:35 crc kubenswrapper[4811]: I0122 09:18:35.501864 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:18:35 crc kubenswrapper[4811]: I0122 09:18:35.501898 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" Jan 22 09:18:35 crc kubenswrapper[4811]: I0122 09:18:35.502233 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"73255475db73f5da91cb9bd8424c8edb822b1f0ea5ba4103c87f2ef8a2771756"} pod="openshift-machine-config-operator/machine-config-daemon-txvcq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 09:18:35 crc kubenswrapper[4811]: I0122 09:18:35.502278 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" containerID="cri-o://73255475db73f5da91cb9bd8424c8edb822b1f0ea5ba4103c87f2ef8a2771756" gracePeriod=600 Jan 22 09:18:36 crc kubenswrapper[4811]: I0122 09:18:36.241387 4811 generic.go:334] "Generic (PLEG): container finished" podID="84068a6b-e189-419b-87f5-f31428f6eafe" containerID="73255475db73f5da91cb9bd8424c8edb822b1f0ea5ba4103c87f2ef8a2771756" exitCode=0 Jan 22 09:18:36 crc kubenswrapper[4811]: I0122 09:18:36.241416 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" event={"ID":"84068a6b-e189-419b-87f5-f31428f6eafe","Type":"ContainerDied","Data":"73255475db73f5da91cb9bd8424c8edb822b1f0ea5ba4103c87f2ef8a2771756"} Jan 
22 09:18:36 crc kubenswrapper[4811]: I0122 09:18:36.241608 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" event={"ID":"84068a6b-e189-419b-87f5-f31428f6eafe","Type":"ContainerStarted","Data":"db7394b1a0d63dd1b71a9c2dafe49c27f24a0ba7e76972ad14dd0e5bca8208b9"} Jan 22 09:18:36 crc kubenswrapper[4811]: I0122 09:18:36.241644 4811 scope.go:117] "RemoveContainer" containerID="7a6f566969cf05ffa4068902405d136144c4f92f8d1b0d3256e6fd01cf51ac2e" Jan 22 09:18:37 crc kubenswrapper[4811]: I0122 09:18:37.234436 4811 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 22 09:18:37 crc kubenswrapper[4811]: I0122 09:18:37.250706 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-99m2t" event={"ID":"9dfc4274-a6f5-4dcb-8bcf-b2e0ff337574","Type":"ContainerStarted","Data":"67014376b0556db61361c4edb81bff6c7a712f76109db5137ecf40b252da393b"} Jan 22 09:18:37 crc kubenswrapper[4811]: I0122 09:18:37.250865 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-99m2t" Jan 22 09:18:37 crc kubenswrapper[4811]: I0122 09:18:37.278376 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-99m2t" podStartSLOduration=3.066444885 podStartE2EDuration="30.278362104s" podCreationTimestamp="2026-01-22 09:18:07 +0000 UTC" firstStartedPulling="2026-01-22 09:18:09.288401508 +0000 UTC m=+733.610588631" lastFinishedPulling="2026-01-22 09:18:36.500318728 +0000 UTC m=+760.822505850" observedRunningTime="2026-01-22 09:18:37.274795382 +0000 UTC m=+761.596982505" watchObservedRunningTime="2026-01-22 09:18:37.278362104 +0000 UTC m=+761.600549218" Jan 22 09:18:37 crc kubenswrapper[4811]: I0122 09:18:37.839425 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-pklcs" Jan 22 09:18:37 crc kubenswrapper[4811]: I0122 09:18:37.854265 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-rgwhg" Jan 22 09:18:37 crc kubenswrapper[4811]: I0122 09:18:37.866971 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-2ltqr" Jan 22 09:18:37 crc kubenswrapper[4811]: I0122 09:18:37.900446 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-26vqb" Jan 22 09:18:37 crc kubenswrapper[4811]: I0122 09:18:37.976119 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-vbbnq" Jan 22 09:18:38 crc kubenswrapper[4811]: I0122 09:18:38.027951 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-fx6zn" Jan 22 09:18:38 crc kubenswrapper[4811]: I0122 09:18:38.121127 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-4wtlm" Jan 22 09:18:38 crc kubenswrapper[4811]: I0122 09:18:38.132312 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-kc9m5" Jan 22 09:18:38 crc kubenswrapper[4811]: I0122 09:18:38.224846 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-tll52" Jan 22 09:18:38 crc kubenswrapper[4811]: I0122 09:18:38.243610 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-h7wzt" Jan 22 09:18:38 crc kubenswrapper[4811]: I0122 09:18:38.288881 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-t9djx" Jan 22 09:18:38 crc kubenswrapper[4811]: I0122 09:18:38.383086 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-xp5jv" Jan 22 09:18:38 crc kubenswrapper[4811]: I0122 09:18:38.612859 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4xkc8" Jan 22 09:18:38 crc kubenswrapper[4811]: I0122 09:18:38.622391 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-rvsm7" Jan 22 09:18:38 crc kubenswrapper[4811]: I0122 09:18:38.647321 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-627nz" Jan 22 09:18:38 crc kubenswrapper[4811]: I0122 09:18:38.659292 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-kxs2j" Jan 22 09:18:38 crc kubenswrapper[4811]: I0122 09:18:38.668068 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-mg5cr" Jan 22 09:18:39 crc kubenswrapper[4811]: I0122 09:18:39.274969 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7p5h9" event={"ID":"62a9fc61-630e-4f4d-9788-f21e25ab4dda","Type":"ContainerStarted","Data":"2ae4d918695c604b351bc52e751d7275c1221d801ea32f35d0019013e6e671c1"} Jan 22 09:18:39 crc kubenswrapper[4811]: I0122 09:18:39.275163 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7p5h9" Jan 22 09:18:39 crc kubenswrapper[4811]: I0122 09:18:39.288197 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7p5h9" podStartSLOduration=2.684292895 podStartE2EDuration="32.288184382s" podCreationTimestamp="2026-01-22 09:18:07 +0000 UTC" firstStartedPulling="2026-01-22 09:18:08.919226417 +0000 UTC m=+733.241413540" lastFinishedPulling="2026-01-22 09:18:38.523117903 +0000 UTC m=+762.845305027" observedRunningTime="2026-01-22 09:18:39.283746136 +0000 UTC m=+763.605933260" watchObservedRunningTime="2026-01-22 09:18:39.288184382 +0000 UTC m=+763.610371506" Jan 22 09:18:39 crc kubenswrapper[4811]: I0122 09:18:39.634141 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/81d4cd92-880c-4806-ab95-fcb009827075-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-r6z6t\" (UID: 
\"81d4cd92-880c-4806-ab95-fcb009827075\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-r6z6t" Jan 22 09:18:39 crc kubenswrapper[4811]: I0122 09:18:39.639012 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/81d4cd92-880c-4806-ab95-fcb009827075-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-r6z6t\" (UID: \"81d4cd92-880c-4806-ab95-fcb009827075\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-r6z6t" Jan 22 09:18:39 crc kubenswrapper[4811]: I0122 09:18:39.812844 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-r6z6t" Jan 22 09:18:40 crc kubenswrapper[4811]: I0122 09:18:40.038986 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7c209919-fd54-40e8-a741-7006cf8dd361-cert\") pod \"openstack-baremetal-operator-controller-manager-7fbbcdb4d6qmzp5\" (UID: \"7c209919-fd54-40e8-a741-7006cf8dd361\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7fbbcdb4d6qmzp5" Jan 22 09:18:40 crc kubenswrapper[4811]: I0122 09:18:40.043105 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7c209919-fd54-40e8-a741-7006cf8dd361-cert\") pod \"openstack-baremetal-operator-controller-manager-7fbbcdb4d6qmzp5\" (UID: \"7c209919-fd54-40e8-a741-7006cf8dd361\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7fbbcdb4d6qmzp5" Jan 22 09:18:40 crc kubenswrapper[4811]: I0122 09:18:40.165308 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7fbbcdb4d6qmzp5" Jan 22 09:18:40 crc kubenswrapper[4811]: I0122 09:18:40.173749 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-54ccf4f85d-r6z6t"] Jan 22 09:18:40 crc kubenswrapper[4811]: W0122 09:18:40.179114 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81d4cd92_880c_4806_ab95_fcb009827075.slice/crio-95f66a258b9fb9e1cee049645967f677150be13153dfa2d3bfaf542b6b90f1db WatchSource:0}: Error finding container 95f66a258b9fb9e1cee049645967f677150be13153dfa2d3bfaf542b6b90f1db: Status 404 returned error can't find the container with id 95f66a258b9fb9e1cee049645967f677150be13153dfa2d3bfaf542b6b90f1db Jan 22 09:18:40 crc kubenswrapper[4811]: I0122 09:18:40.283071 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-r6z6t" event={"ID":"81d4cd92-880c-4806-ab95-fcb009827075","Type":"ContainerStarted","Data":"95f66a258b9fb9e1cee049645967f677150be13153dfa2d3bfaf542b6b90f1db"} Jan 22 09:18:40 crc kubenswrapper[4811]: I0122 09:18:40.321554 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7fbbcdb4d6qmzp5"] Jan 22 09:18:40 crc kubenswrapper[4811]: W0122 09:18:40.327581 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c209919_fd54_40e8_a741_7006cf8dd361.slice/crio-038508d3179613b47167c52d2277155c42beef78aaf25d2edaf2e2d1b040ad8a WatchSource:0}: Error finding container 
038508d3179613b47167c52d2277155c42beef78aaf25d2edaf2e2d1b040ad8a: Status 404 returned error can't find the container with id 038508d3179613b47167c52d2277155c42beef78aaf25d2edaf2e2d1b040ad8a Jan 22 09:18:40 crc kubenswrapper[4811]: I0122 09:18:40.443060 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c0b74933-8fe4-4fb1-82af-eda7df5c3c06-webhook-certs\") pod \"openstack-operator-controller-manager-647bb87bbd-v227g\" (UID: \"c0b74933-8fe4-4fb1-82af-eda7df5c3c06\") " pod="openstack-operators/openstack-operator-controller-manager-647bb87bbd-v227g" Jan 22 09:18:40 crc kubenswrapper[4811]: I0122 09:18:40.443151 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c0b74933-8fe4-4fb1-82af-eda7df5c3c06-metrics-certs\") pod \"openstack-operator-controller-manager-647bb87bbd-v227g\" (UID: \"c0b74933-8fe4-4fb1-82af-eda7df5c3c06\") " pod="openstack-operators/openstack-operator-controller-manager-647bb87bbd-v227g" Jan 22 09:18:40 crc kubenswrapper[4811]: I0122 09:18:40.445766 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c0b74933-8fe4-4fb1-82af-eda7df5c3c06-metrics-certs\") pod \"openstack-operator-controller-manager-647bb87bbd-v227g\" (UID: \"c0b74933-8fe4-4fb1-82af-eda7df5c3c06\") " pod="openstack-operators/openstack-operator-controller-manager-647bb87bbd-v227g" Jan 22 09:18:40 crc kubenswrapper[4811]: I0122 09:18:40.445900 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c0b74933-8fe4-4fb1-82af-eda7df5c3c06-webhook-certs\") pod \"openstack-operator-controller-manager-647bb87bbd-v227g\" (UID: \"c0b74933-8fe4-4fb1-82af-eda7df5c3c06\") " pod="openstack-operators/openstack-operator-controller-manager-647bb87bbd-v227g" Jan 22 09:18:40 crc kubenswrapper[4811]: I0122 09:18:40.598406 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-647bb87bbd-v227g" Jan 22 09:18:40 crc kubenswrapper[4811]: I0122 09:18:40.951492 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-647bb87bbd-v227g"] Jan 22 09:18:40 crc kubenswrapper[4811]: W0122 09:18:40.955848 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0b74933_8fe4_4fb1_82af_eda7df5c3c06.slice/crio-2deb9b233a7b08b93e14804d0a2570b297afc46cc7daa7213ad5499d75b501e7 WatchSource:0}: Error finding container 2deb9b233a7b08b93e14804d0a2570b297afc46cc7daa7213ad5499d75b501e7: Status 404 returned error can't find the container with id 2deb9b233a7b08b93e14804d0a2570b297afc46cc7daa7213ad5499d75b501e7 Jan 22 09:18:41 crc kubenswrapper[4811]: I0122 09:18:41.288974 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-647bb87bbd-v227g" event={"ID":"c0b74933-8fe4-4fb1-82af-eda7df5c3c06","Type":"ContainerStarted","Data":"7b45936a5caa8ffad74657ebe92d8cee13a3ebf8f46a1d013e9461f58046a7d1"} Jan 22 09:18:41 crc kubenswrapper[4811]: I0122 09:18:41.289243 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-647bb87bbd-v227g" event={"ID":"c0b74933-8fe4-4fb1-82af-eda7df5c3c06","Type":"ContainerStarted","Data":"2deb9b233a7b08b93e14804d0a2570b297afc46cc7daa7213ad5499d75b501e7"} Jan 22 09:18:41 crc kubenswrapper[4811]: I0122 09:18:41.290200 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-647bb87bbd-v227g" Jan 22 09:18:41 crc kubenswrapper[4811]: I0122 09:18:41.290430 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7fbbcdb4d6qmzp5" event={"ID":"7c209919-fd54-40e8-a741-7006cf8dd361","Type":"ContainerStarted","Data":"038508d3179613b47167c52d2277155c42beef78aaf25d2edaf2e2d1b040ad8a"} Jan 22 09:18:41 crc kubenswrapper[4811]: I0122 09:18:41.317573 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-647bb87bbd-v227g" podStartSLOduration=33.317558562 podStartE2EDuration="33.317558562s" podCreationTimestamp="2026-01-22 09:18:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:18:41.312614393 +0000 UTC m=+765.634801516" watchObservedRunningTime="2026-01-22 09:18:41.317558562 +0000 UTC m=+765.639745685" Jan 22 09:18:43 crc kubenswrapper[4811]: I0122 09:18:43.301029 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7fbbcdb4d6qmzp5" event={"ID":"7c209919-fd54-40e8-a741-7006cf8dd361","Type":"ContainerStarted","Data":"86112b9cad1419dbe124ced85963d8ac79ae29a2b964fbd321bcc948b77dcc12"} Jan 22 09:18:43 crc kubenswrapper[4811]: I0122 09:18:43.301241 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7fbbcdb4d6qmzp5" Jan 22 09:18:43 crc kubenswrapper[4811]: I0122 09:18:43.322223 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7fbbcdb4d6qmzp5" 
podStartSLOduration=34.243389775 podStartE2EDuration="36.322210283s" podCreationTimestamp="2026-01-22 09:18:07 +0000 UTC" firstStartedPulling="2026-01-22 09:18:40.329680762 +0000 UTC m=+764.651867875" lastFinishedPulling="2026-01-22 09:18:42.408501259 +0000 UTC m=+766.730688383" observedRunningTime="2026-01-22 09:18:43.320312797 +0000 UTC m=+767.642499919" watchObservedRunningTime="2026-01-22 09:18:43.322210283 +0000 UTC m=+767.644397406" Jan 22 09:18:47 crc kubenswrapper[4811]: I0122 09:18:47.980572 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7p5h9" Jan 22 09:18:48 crc kubenswrapper[4811]: I0122 09:18:48.114779 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-99m2t" Jan 22 09:18:50 crc kubenswrapper[4811]: I0122 09:18:50.170677 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7fbbcdb4d6qmzp5" Jan 22 09:18:50 crc kubenswrapper[4811]: I0122 09:18:50.602888 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-647bb87bbd-v227g" Jan 22 09:18:50 crc kubenswrapper[4811]: E0122 09:18:50.901561 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob sha256:bde0ffd8a744787dafb5635491b476d5ce7f50c60d7f271d7f2010653e3e7f11: Get \"https://cdn01.quay.io/quayio-production-s3/sha256/bd/bde0ffd8a744787dafb5635491b476d5ce7f50c60d7f271d7f2010653e3e7f11?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIATAAF2YHTGR23ZTE6%2F20260122%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20260122T091841Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=70b8f6e2de38898af6de882585aa052fd36361441c8bd3b98a9ec659e44a40c1&region=us-east-1&namespace=openstack-k8s-operators&username=openshift-release-dev+ocm_access_1b89217552bc42d1be3fb06a1aed001a&repo_name=infra-operator&akamai_signature=exp=1769074421~hmac=e6cd062fba38ff55862195f7690d8b4bdf90a5519a658f09028d6190b4dd4b16\": context canceled" image="quay.io/openstack-k8s-operators/infra-operator@sha256:2eac1b9dadaddf4734f35e3dd1996dca960e97d2f304cbd48254b900a840a84a" Jan 22 09:18:50 crc kubenswrapper[4811]: E0122 09:18:50.901763 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:2eac1b9dadaddf4734f35e3dd1996dca960e97d2f304cbd48254b900a840a84a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {}
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hxjb7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-54ccf4f85d-r6z6t_openstack-operators(81d4cd92-880c-4806-ab95-fcb009827075): ErrImagePull: rpc error: code = Canceled desc = reading blob sha256:bde0ffd8a744787dafb5635491b476d5ce7f50c60d7f271d7f2010653e3e7f11: Get \"https://cdn01.quay.io/quayio-production-s3/sha256/bd/bde0ffd8a744787dafb5635491b476d5ce7f50c60d7f271d7f2010653e3e7f11?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIATAAF2YHTGR23ZTE6%2F20260122%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20260122T091841Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=70b8f6e2de38898af6de882585aa052fd36361441c8bd3b98a9ec659e44a40c1&region=us-east-1&namespace=openstack-k8s-operators&username=openshift-release-dev+ocm_access_1b89217552bc42d1be3fb06a1aed001a&repo_name=infra-operator&akamai_signature=exp=1769074421~hmac=e6cd062fba38ff55862195f7690d8b4bdf90a5519a658f09028d6190b4dd4b16\": context canceled" logger="UnhandledError" Jan 22 09:18:50 crc kubenswrapper[4811]: E0122 09:18:50.902913 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:bde0ffd8a744787dafb5635491b476d5ce7f50c60d7f271d7f2010653e3e7f11: Get \\\"https://cdn01.quay.io/quayio-production-s3/sha256/bd/bde0ffd8a744787dafb5635491b476d5ce7f50c60d7f271d7f2010653e3e7f11?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIATAAF2YHTGR23ZTE6%2F20260122%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20260122T091841Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=70b8f6e2de38898af6de882585aa052fd36361441c8bd3b98a9ec659e44a40c1&region=us-east-1&namespace=openstack-k8s-operators&username=openshift-release-dev+ocm_access_1b89217552bc42d1be3fb06a1aed001a&repo_name=infra-operator&akamai_signature=exp=1769074421~hmac=e6cd062fba38ff55862195f7690d8b4bdf90a5519a658f09028d6190b4dd4b16\\\": context canceled\"" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-r6z6t" podUID="81d4cd92-880c-4806-ab95-fcb009827075"
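[Editor's note] The &Container{...} text in the "Unhandled Error" entry above is the kubelet printing the failed container's spec as a raw Go struct literal. For readability, here is the same spec transcribed by hand into k8s.io/api/core/v1 literals. This is a sketch for orientation, not generated output: every field value is copied from the dump, and the memory quantities are the human-readable renderings of the logged byte counts (2147483648 = 2Gi, 536870912 = 512Mi). It assumes the k8s.io/api and k8s.io/apimachinery modules are on the module path.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// manager mirrors the &Container{...} struct dump logged above.
var manager = corev1.Container{
	Name:    "manager",
	Image:   "quay.io/openstack-k8s-operators/infra-operator@sha256:2eac1b9dadaddf4734f35e3dd1996dca960e97d2f304cbd48254b900a840a84a",
	Command: []string{"/manager"},
	Args:    []string{"--leader-elect", "--health-probe-bind-address=:8081", "--metrics-bind-address=127.0.0.1:8080"},
	Env: []corev1.EnvVar{
		{Name: "LEASE_DURATION", Value: "30"},
		{Name: "RENEW_DEADLINE", Value: "20"},
		{Name: "RETRY_PERIOD", Value: "5"},
		{Name: "ENABLE_WEBHOOKS", Value: "true"},
		{Name: "METRICS_CERTS", Value: "false"},
	},
	Resources: corev1.ResourceRequirements{
		Limits: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("600m"), // {{600 -3}} DecimalSI
			corev1.ResourceMemory: resource.MustParse("2Gi"),  // 2147483648 bytes
		},
		Requests: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("10m"),
			corev1.ResourceMemory: resource.MustParse("512Mi"), // 536870912 bytes
		},
	},
	LivenessProbe: &corev1.Probe{
		ProbeHandler:        corev1.ProbeHandler{HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8081)}},
		InitialDelaySeconds: 15, TimeoutSeconds: 1, PeriodSeconds: 20, SuccessThreshold: 1, FailureThreshold: 3,
	},
	ReadinessProbe: &corev1.Probe{
		ProbeHandler:        corev1.ProbeHandler{HTTPGet: &corev1.HTTPGetAction{Path: "/readyz", Port: intstr.FromInt(8081)}},
		InitialDelaySeconds: 5, TimeoutSeconds: 1, PeriodSeconds: 10, SuccessThreshold: 1, FailureThreshold: 3,
	},
	ImagePullPolicy: corev1.PullIfNotPresent,
}

func main() { fmt.Println(manager.Name, manager.Image) }
```

The readiness probe here (GET /readyz on :8081 every 10s after a 5s delay) is what drives the SyncLoop (probe) entries for this pod later in the log.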
pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-r6z6t" podUID="81d4cd92-880c-4806-ab95-fcb009827075" Jan 22 09:18:51 crc kubenswrapper[4811]: E0122 09:18:51.337409 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:2eac1b9dadaddf4734f35e3dd1996dca960e97d2f304cbd48254b900a840a84a\\\"\"" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-r6z6t" podUID="81d4cd92-880c-4806-ab95-fcb009827075" Jan 22 09:19:06 crc kubenswrapper[4811]: I0122 09:19:06.410840 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-r6z6t" event={"ID":"81d4cd92-880c-4806-ab95-fcb009827075","Type":"ContainerStarted","Data":"5226a319b7d0e0218510bf4734f2cff13459ab384f1e9a2a0b8261e6c25d897b"} Jan 22 09:19:06 crc kubenswrapper[4811]: I0122 09:19:06.411548 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-r6z6t" Jan 22 09:19:06 crc kubenswrapper[4811]: I0122 09:19:06.425180 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-r6z6t" podStartSLOduration=33.943387954 podStartE2EDuration="59.425164163s" podCreationTimestamp="2026-01-22 09:18:07 +0000 UTC" firstStartedPulling="2026-01-22 09:18:40.180871282 +0000 UTC m=+764.503058405" lastFinishedPulling="2026-01-22 09:19:05.662647491 +0000 UTC m=+789.984834614" observedRunningTime="2026-01-22 09:19:06.423937241 +0000 UTC m=+790.746124364" watchObservedRunningTime="2026-01-22 09:19:06.425164163 +0000 UTC m=+790.747351286" Jan 22 09:19:19 crc kubenswrapper[4811]: I0122 09:19:19.817406 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-r6z6t" Jan 22 09:19:35 crc kubenswrapper[4811]: I0122 09:19:35.322605 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-n5xpq"] Jan 22 09:19:35 crc kubenswrapper[4811]: I0122 09:19:35.323800 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-n5xpq" Jan 22 09:19:35 crc kubenswrapper[4811]: I0122 09:19:35.325634 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 22 09:19:35 crc kubenswrapper[4811]: I0122 09:19:35.325754 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 22 09:19:35 crc kubenswrapper[4811]: I0122 09:19:35.329698 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 22 09:19:35 crc kubenswrapper[4811]: I0122 09:19:35.331987 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-nqgwj" Jan 22 09:19:35 crc kubenswrapper[4811]: I0122 09:19:35.338697 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-n5xpq"] Jan 22 09:19:35 crc kubenswrapper[4811]: I0122 09:19:35.389586 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7966b817-cc01-43ca-8024-0e35d83779d1-config\") pod \"dnsmasq-dns-84bb9d8bd9-n5xpq\" (UID: \"7966b817-cc01-43ca-8024-0e35d83779d1\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-n5xpq" Jan 22 09:19:35 crc kubenswrapper[4811]: I0122 09:19:35.389663 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfkjr\" (UniqueName: \"kubernetes.io/projected/7966b817-cc01-43ca-8024-0e35d83779d1-kube-api-access-tfkjr\") pod \"dnsmasq-dns-84bb9d8bd9-n5xpq\" (UID: \"7966b817-cc01-43ca-8024-0e35d83779d1\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-n5xpq" Jan 22 09:19:35 crc kubenswrapper[4811]: I0122 09:19:35.395465 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-9dnnl"] Jan 22 09:19:35 crc kubenswrapper[4811]: I0122 09:19:35.396353 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-9dnnl" Jan 22 09:19:35 crc kubenswrapper[4811]: I0122 09:19:35.402804 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 22 09:19:35 crc kubenswrapper[4811]: I0122 09:19:35.411569 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-9dnnl"] Jan 22 09:19:35 crc kubenswrapper[4811]: I0122 09:19:35.490617 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfkjr\" (UniqueName: \"kubernetes.io/projected/7966b817-cc01-43ca-8024-0e35d83779d1-kube-api-access-tfkjr\") pod \"dnsmasq-dns-84bb9d8bd9-n5xpq\" (UID: \"7966b817-cc01-43ca-8024-0e35d83779d1\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-n5xpq" Jan 22 09:19:35 crc kubenswrapper[4811]: I0122 09:19:35.490692 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b583653-a583-41a2-a836-0c0783e7be54-dns-svc\") pod \"dnsmasq-dns-5f854695bc-9dnnl\" (UID: \"5b583653-a583-41a2-a836-0c0783e7be54\") " pod="openstack/dnsmasq-dns-5f854695bc-9dnnl" Jan 22 09:19:35 crc kubenswrapper[4811]: I0122 09:19:35.490764 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpmnd\" (UniqueName: \"kubernetes.io/projected/5b583653-a583-41a2-a836-0c0783e7be54-kube-api-access-kpmnd\") pod \"dnsmasq-dns-5f854695bc-9dnnl\" (UID: \"5b583653-a583-41a2-a836-0c0783e7be54\") " pod="openstack/dnsmasq-dns-5f854695bc-9dnnl" Jan 22 09:19:35 crc kubenswrapper[4811]: I0122 09:19:35.490802 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b583653-a583-41a2-a836-0c0783e7be54-config\") pod \"dnsmasq-dns-5f854695bc-9dnnl\" (UID: \"5b583653-a583-41a2-a836-0c0783e7be54\") " pod="openstack/dnsmasq-dns-5f854695bc-9dnnl" Jan 22 09:19:35 crc kubenswrapper[4811]: I0122 09:19:35.490839 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7966b817-cc01-43ca-8024-0e35d83779d1-config\") pod \"dnsmasq-dns-84bb9d8bd9-n5xpq\" (UID: \"7966b817-cc01-43ca-8024-0e35d83779d1\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-n5xpq" Jan 22 09:19:35 crc kubenswrapper[4811]: I0122 09:19:35.491542 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7966b817-cc01-43ca-8024-0e35d83779d1-config\") pod \"dnsmasq-dns-84bb9d8bd9-n5xpq\" (UID: \"7966b817-cc01-43ca-8024-0e35d83779d1\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-n5xpq" Jan 22 09:19:35 crc kubenswrapper[4811]: I0122 09:19:35.507861 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfkjr\" (UniqueName: \"kubernetes.io/projected/7966b817-cc01-43ca-8024-0e35d83779d1-kube-api-access-tfkjr\") pod \"dnsmasq-dns-84bb9d8bd9-n5xpq\" (UID: \"7966b817-cc01-43ca-8024-0e35d83779d1\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-n5xpq" Jan 22 09:19:35 crc kubenswrapper[4811]: I0122 09:19:35.591556 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b583653-a583-41a2-a836-0c0783e7be54-dns-svc\") pod \"dnsmasq-dns-5f854695bc-9dnnl\" (UID: \"5b583653-a583-41a2-a836-0c0783e7be54\") " pod="openstack/dnsmasq-dns-5f854695bc-9dnnl" Jan 22 09:19:35 crc kubenswrapper[4811]: I0122 
09:19:35.591897 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpmnd\" (UniqueName: \"kubernetes.io/projected/5b583653-a583-41a2-a836-0c0783e7be54-kube-api-access-kpmnd\") pod \"dnsmasq-dns-5f854695bc-9dnnl\" (UID: \"5b583653-a583-41a2-a836-0c0783e7be54\") " pod="openstack/dnsmasq-dns-5f854695bc-9dnnl" Jan 22 09:19:35 crc kubenswrapper[4811]: I0122 09:19:35.592006 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b583653-a583-41a2-a836-0c0783e7be54-config\") pod \"dnsmasq-dns-5f854695bc-9dnnl\" (UID: \"5b583653-a583-41a2-a836-0c0783e7be54\") " pod="openstack/dnsmasq-dns-5f854695bc-9dnnl" Jan 22 09:19:35 crc kubenswrapper[4811]: I0122 09:19:35.592298 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b583653-a583-41a2-a836-0c0783e7be54-dns-svc\") pod \"dnsmasq-dns-5f854695bc-9dnnl\" (UID: \"5b583653-a583-41a2-a836-0c0783e7be54\") " pod="openstack/dnsmasq-dns-5f854695bc-9dnnl" Jan 22 09:19:35 crc kubenswrapper[4811]: I0122 09:19:35.592649 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b583653-a583-41a2-a836-0c0783e7be54-config\") pod \"dnsmasq-dns-5f854695bc-9dnnl\" (UID: \"5b583653-a583-41a2-a836-0c0783e7be54\") " pod="openstack/dnsmasq-dns-5f854695bc-9dnnl" Jan 22 09:19:35 crc kubenswrapper[4811]: I0122 09:19:35.604642 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpmnd\" (UniqueName: \"kubernetes.io/projected/5b583653-a583-41a2-a836-0c0783e7be54-kube-api-access-kpmnd\") pod \"dnsmasq-dns-5f854695bc-9dnnl\" (UID: \"5b583653-a583-41a2-a836-0c0783e7be54\") " pod="openstack/dnsmasq-dns-5f854695bc-9dnnl" Jan 22 09:19:35 crc kubenswrapper[4811]: I0122 09:19:35.637138 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-n5xpq" Jan 22 09:19:35 crc kubenswrapper[4811]: I0122 09:19:35.711475 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-9dnnl"
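[Editor's note] Each volume above goes through the same reconciler three-step: operationExecutor.VerifyControllerAttachedVolume, then operationExecutor.MountVolume started, then MountVolume.SetUp succeeded. The UniqueName encodes the volume plugin and the pod-UID-prefixed volume name (e.g. kubernetes.io/configmap/5b583653-...-dns-svc). Below is a sketch of what ConfigMap-backed volumes of that shape look like in a pod spec; the real spec is generated by the operator, and mapping the "config" volume to the "dns" ConfigMap is an inference from the reflector cache entries above, not something the mount log states.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// ConfigMap-backed volumes shaped like the "config" and "dns-svc"
	// volumes the reconciler mounts above. Each such volume produces one
	// VerifyControllerAttachedVolume -> MountVolume started ->
	// MountVolume.SetUp succeeded triple in the kubelet log.
	volumes := []corev1.Volume{
		{
			Name: "config", // assumed to reference the "dns" ConfigMap seen in the reflector entries
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "dns"},
				},
			},
		},
		{
			Name: "dns-svc",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "dns-svc"},
				},
			},
		},
	}
	for _, v := range volumes {
		fmt.Printf("volume %q -> ConfigMap %q\n", v.Name, v.ConfigMap.Name)
	}
}
```

The kube-api-access-* volume mounted alongside them is the projected service-account token volume that the API server injects into every pod.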
Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-9dnnl" Jan 22 09:19:36 crc kubenswrapper[4811]: I0122 09:19:36.032336 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-n5xpq"] Jan 22 09:19:36 crc kubenswrapper[4811]: I0122 09:19:36.034454 4811 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 09:19:36 crc kubenswrapper[4811]: I0122 09:19:36.096831 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-9dnnl"] Jan 22 09:19:36 crc kubenswrapper[4811]: W0122 09:19:36.103146 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b583653_a583_41a2_a836_0c0783e7be54.slice/crio-06e8a17cecaec375945d784c45633298feec31eebb114d079eb4ee3e74dfb4a4 WatchSource:0}: Error finding container 06e8a17cecaec375945d784c45633298feec31eebb114d079eb4ee3e74dfb4a4: Status 404 returned error can't find the container with id 06e8a17cecaec375945d784c45633298feec31eebb114d079eb4ee3e74dfb4a4 Jan 22 09:19:36 crc kubenswrapper[4811]: I0122 09:19:36.576698 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-9dnnl" event={"ID":"5b583653-a583-41a2-a836-0c0783e7be54","Type":"ContainerStarted","Data":"06e8a17cecaec375945d784c45633298feec31eebb114d079eb4ee3e74dfb4a4"} Jan 22 09:19:36 crc kubenswrapper[4811]: I0122 09:19:36.577777 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-n5xpq" event={"ID":"7966b817-cc01-43ca-8024-0e35d83779d1","Type":"ContainerStarted","Data":"7ec577e6807a4e9f00bbcb8846c8ee5041e68439673ab1da6b9dbdc3ceb32bf4"} Jan 22 09:19:38 crc kubenswrapper[4811]: I0122 09:19:38.235300 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-9dnnl"] Jan 22 09:19:38 crc kubenswrapper[4811]: I0122 09:19:38.261032 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-b96wb"] Jan 22 09:19:38 crc kubenswrapper[4811]: I0122 09:19:38.261999 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-b96wb" Jan 22 09:19:38 crc kubenswrapper[4811]: I0122 09:19:38.290668 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-b96wb"] Jan 22 09:19:38 crc kubenswrapper[4811]: I0122 09:19:38.444461 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggqfx\" (UniqueName: \"kubernetes.io/projected/55f4ca47-d6d3-4ec2-9df3-2898ba04d543-kube-api-access-ggqfx\") pod \"dnsmasq-dns-744ffd65bc-b96wb\" (UID: \"55f4ca47-d6d3-4ec2-9df3-2898ba04d543\") " pod="openstack/dnsmasq-dns-744ffd65bc-b96wb" Jan 22 09:19:38 crc kubenswrapper[4811]: I0122 09:19:38.444499 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55f4ca47-d6d3-4ec2-9df3-2898ba04d543-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-b96wb\" (UID: \"55f4ca47-d6d3-4ec2-9df3-2898ba04d543\") " pod="openstack/dnsmasq-dns-744ffd65bc-b96wb" Jan 22 09:19:38 crc kubenswrapper[4811]: I0122 09:19:38.444563 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55f4ca47-d6d3-4ec2-9df3-2898ba04d543-config\") pod \"dnsmasq-dns-744ffd65bc-b96wb\" (UID: \"55f4ca47-d6d3-4ec2-9df3-2898ba04d543\") " pod="openstack/dnsmasq-dns-744ffd65bc-b96wb" Jan 22 09:19:38 crc kubenswrapper[4811]: I0122 09:19:38.545524 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55f4ca47-d6d3-4ec2-9df3-2898ba04d543-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-b96wb\" (UID: \"55f4ca47-d6d3-4ec2-9df3-2898ba04d543\") " pod="openstack/dnsmasq-dns-744ffd65bc-b96wb" Jan 22 09:19:38 crc kubenswrapper[4811]: I0122 09:19:38.546297 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55f4ca47-d6d3-4ec2-9df3-2898ba04d543-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-b96wb\" (UID: \"55f4ca47-d6d3-4ec2-9df3-2898ba04d543\") " pod="openstack/dnsmasq-dns-744ffd65bc-b96wb" Jan 22 09:19:38 crc kubenswrapper[4811]: I0122 09:19:38.546365 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggqfx\" (UniqueName: \"kubernetes.io/projected/55f4ca47-d6d3-4ec2-9df3-2898ba04d543-kube-api-access-ggqfx\") pod \"dnsmasq-dns-744ffd65bc-b96wb\" (UID: \"55f4ca47-d6d3-4ec2-9df3-2898ba04d543\") " pod="openstack/dnsmasq-dns-744ffd65bc-b96wb" Jan 22 09:19:38 crc kubenswrapper[4811]: I0122 09:19:38.546734 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55f4ca47-d6d3-4ec2-9df3-2898ba04d543-config\") pod \"dnsmasq-dns-744ffd65bc-b96wb\" (UID: \"55f4ca47-d6d3-4ec2-9df3-2898ba04d543\") " pod="openstack/dnsmasq-dns-744ffd65bc-b96wb" Jan 22 09:19:38 crc kubenswrapper[4811]: I0122 09:19:38.547268 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55f4ca47-d6d3-4ec2-9df3-2898ba04d543-config\") pod \"dnsmasq-dns-744ffd65bc-b96wb\" (UID: \"55f4ca47-d6d3-4ec2-9df3-2898ba04d543\") " pod="openstack/dnsmasq-dns-744ffd65bc-b96wb" Jan 22 09:19:38 crc kubenswrapper[4811]: I0122 09:19:38.565813 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-n5xpq"] Jan 22 09:19:38 crc kubenswrapper[4811]: I0122 09:19:38.589007 
4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-h2gf4"] Jan 22 09:19:38 crc kubenswrapper[4811]: I0122 09:19:38.589380 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggqfx\" (UniqueName: \"kubernetes.io/projected/55f4ca47-d6d3-4ec2-9df3-2898ba04d543-kube-api-access-ggqfx\") pod \"dnsmasq-dns-744ffd65bc-b96wb\" (UID: \"55f4ca47-d6d3-4ec2-9df3-2898ba04d543\") " pod="openstack/dnsmasq-dns-744ffd65bc-b96wb" Jan 22 09:19:38 crc kubenswrapper[4811]: I0122 09:19:38.589943 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-h2gf4" Jan 22 09:19:38 crc kubenswrapper[4811]: I0122 09:19:38.591308 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-b96wb" Jan 22 09:19:38 crc kubenswrapper[4811]: I0122 09:19:38.651201 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-h2gf4"] Jan 22 09:19:38 crc kubenswrapper[4811]: I0122 09:19:38.749834 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzf4t\" (UniqueName: \"kubernetes.io/projected/56f1c973-5c84-45ef-a32c-82b736796f2c-kube-api-access-tzf4t\") pod \"dnsmasq-dns-95f5f6995-h2gf4\" (UID: \"56f1c973-5c84-45ef-a32c-82b736796f2c\") " pod="openstack/dnsmasq-dns-95f5f6995-h2gf4" Jan 22 09:19:38 crc kubenswrapper[4811]: I0122 09:19:38.749992 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56f1c973-5c84-45ef-a32c-82b736796f2c-dns-svc\") pod \"dnsmasq-dns-95f5f6995-h2gf4\" (UID: \"56f1c973-5c84-45ef-a32c-82b736796f2c\") " pod="openstack/dnsmasq-dns-95f5f6995-h2gf4" Jan 22 09:19:38 crc kubenswrapper[4811]: I0122 09:19:38.750028 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56f1c973-5c84-45ef-a32c-82b736796f2c-config\") pod \"dnsmasq-dns-95f5f6995-h2gf4\" (UID: \"56f1c973-5c84-45ef-a32c-82b736796f2c\") " pod="openstack/dnsmasq-dns-95f5f6995-h2gf4" Jan 22 09:19:38 crc kubenswrapper[4811]: I0122 09:19:38.851578 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56f1c973-5c84-45ef-a32c-82b736796f2c-dns-svc\") pod \"dnsmasq-dns-95f5f6995-h2gf4\" (UID: \"56f1c973-5c84-45ef-a32c-82b736796f2c\") " pod="openstack/dnsmasq-dns-95f5f6995-h2gf4" Jan 22 09:19:38 crc kubenswrapper[4811]: I0122 09:19:38.852738 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56f1c973-5c84-45ef-a32c-82b736796f2c-dns-svc\") pod \"dnsmasq-dns-95f5f6995-h2gf4\" (UID: \"56f1c973-5c84-45ef-a32c-82b736796f2c\") " pod="openstack/dnsmasq-dns-95f5f6995-h2gf4" Jan 22 09:19:38 crc kubenswrapper[4811]: I0122 09:19:38.852768 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56f1c973-5c84-45ef-a32c-82b736796f2c-config\") pod \"dnsmasq-dns-95f5f6995-h2gf4\" (UID: \"56f1c973-5c84-45ef-a32c-82b736796f2c\") " pod="openstack/dnsmasq-dns-95f5f6995-h2gf4" Jan 22 09:19:38 crc kubenswrapper[4811]: I0122 09:19:38.852930 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzf4t\" (UniqueName: 
\"kubernetes.io/projected/56f1c973-5c84-45ef-a32c-82b736796f2c-kube-api-access-tzf4t\") pod \"dnsmasq-dns-95f5f6995-h2gf4\" (UID: \"56f1c973-5c84-45ef-a32c-82b736796f2c\") " pod="openstack/dnsmasq-dns-95f5f6995-h2gf4" Jan 22 09:19:38 crc kubenswrapper[4811]: I0122 09:19:38.853690 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56f1c973-5c84-45ef-a32c-82b736796f2c-config\") pod \"dnsmasq-dns-95f5f6995-h2gf4\" (UID: \"56f1c973-5c84-45ef-a32c-82b736796f2c\") " pod="openstack/dnsmasq-dns-95f5f6995-h2gf4" Jan 22 09:19:38 crc kubenswrapper[4811]: I0122 09:19:38.874479 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzf4t\" (UniqueName: \"kubernetes.io/projected/56f1c973-5c84-45ef-a32c-82b736796f2c-kube-api-access-tzf4t\") pod \"dnsmasq-dns-95f5f6995-h2gf4\" (UID: \"56f1c973-5c84-45ef-a32c-82b736796f2c\") " pod="openstack/dnsmasq-dns-95f5f6995-h2gf4" Jan 22 09:19:38 crc kubenswrapper[4811]: I0122 09:19:38.910197 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-h2gf4" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.216572 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-b96wb"] Jan 22 09:19:39 crc kubenswrapper[4811]: W0122 09:19:39.225907 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55f4ca47_d6d3_4ec2_9df3_2898ba04d543.slice/crio-da6df1aaa86a44ee16cf125d31d2231bcc843fad5b3624632f418ebf1b05221b WatchSource:0}: Error finding container da6df1aaa86a44ee16cf125d31d2231bcc843fad5b3624632f418ebf1b05221b: Status 404 returned error can't find the container with id da6df1aaa86a44ee16cf125d31d2231bcc843fad5b3624632f418ebf1b05221b Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.344557 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-h2gf4"] Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.411456 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.412433 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.417390 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.417473 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.417539 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.417671 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-2z5wj" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.417701 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.417862 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.417959 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.423327 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.575436 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9e61948b-1761-46b8-9ab3-e776224f335a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9e61948b-1761-46b8-9ab3-e776224f335a\") " pod="openstack/rabbitmq-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.575485 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9e61948b-1761-46b8-9ab3-e776224f335a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9e61948b-1761-46b8-9ab3-e776224f335a\") " pod="openstack/rabbitmq-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.575504 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9e61948b-1761-46b8-9ab3-e776224f335a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9e61948b-1761-46b8-9ab3-e776224f335a\") " pod="openstack/rabbitmq-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.575541 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e61948b-1761-46b8-9ab3-e776224f335a-config-data\") pod \"rabbitmq-server-0\" (UID: \"9e61948b-1761-46b8-9ab3-e776224f335a\") " pod="openstack/rabbitmq-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.575564 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9e61948b-1761-46b8-9ab3-e776224f335a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9e61948b-1761-46b8-9ab3-e776224f335a\") " pod="openstack/rabbitmq-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.575584 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/9e61948b-1761-46b8-9ab3-e776224f335a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9e61948b-1761-46b8-9ab3-e776224f335a\") " pod="openstack/rabbitmq-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.575596 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27qdb\" (UniqueName: \"kubernetes.io/projected/9e61948b-1761-46b8-9ab3-e776224f335a-kube-api-access-27qdb\") pod \"rabbitmq-server-0\" (UID: \"9e61948b-1761-46b8-9ab3-e776224f335a\") " pod="openstack/rabbitmq-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.575682 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"9e61948b-1761-46b8-9ab3-e776224f335a\") " pod="openstack/rabbitmq-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.575710 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9e61948b-1761-46b8-9ab3-e776224f335a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9e61948b-1761-46b8-9ab3-e776224f335a\") " pod="openstack/rabbitmq-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.575766 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9e61948b-1761-46b8-9ab3-e776224f335a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9e61948b-1761-46b8-9ab3-e776224f335a\") " pod="openstack/rabbitmq-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.575835 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9e61948b-1761-46b8-9ab3-e776224f335a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9e61948b-1761-46b8-9ab3-e776224f335a\") " pod="openstack/rabbitmq-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.627549 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-h2gf4" event={"ID":"56f1c973-5c84-45ef-a32c-82b736796f2c","Type":"ContainerStarted","Data":"33cc12ffeba6168150472993db930414f96ce478945eae6f949e8cb405eb46f3"} Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.629036 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-b96wb" event={"ID":"55f4ca47-d6d3-4ec2-9df3-2898ba04d543","Type":"ContainerStarted","Data":"da6df1aaa86a44ee16cf125d31d2231bcc843fad5b3624632f418ebf1b05221b"} Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.677222 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"9e61948b-1761-46b8-9ab3-e776224f335a\") " pod="openstack/rabbitmq-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.677274 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9e61948b-1761-46b8-9ab3-e776224f335a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9e61948b-1761-46b8-9ab3-e776224f335a\") " pod="openstack/rabbitmq-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.677334 4811 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9e61948b-1761-46b8-9ab3-e776224f335a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9e61948b-1761-46b8-9ab3-e776224f335a\") " pod="openstack/rabbitmq-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.677372 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9e61948b-1761-46b8-9ab3-e776224f335a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9e61948b-1761-46b8-9ab3-e776224f335a\") " pod="openstack/rabbitmq-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.677389 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9e61948b-1761-46b8-9ab3-e776224f335a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9e61948b-1761-46b8-9ab3-e776224f335a\") " pod="openstack/rabbitmq-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.677405 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9e61948b-1761-46b8-9ab3-e776224f335a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9e61948b-1761-46b8-9ab3-e776224f335a\") " pod="openstack/rabbitmq-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.677419 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9e61948b-1761-46b8-9ab3-e776224f335a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9e61948b-1761-46b8-9ab3-e776224f335a\") " pod="openstack/rabbitmq-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.677443 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e61948b-1761-46b8-9ab3-e776224f335a-config-data\") pod \"rabbitmq-server-0\" (UID: \"9e61948b-1761-46b8-9ab3-e776224f335a\") " pod="openstack/rabbitmq-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.677465 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9e61948b-1761-46b8-9ab3-e776224f335a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9e61948b-1761-46b8-9ab3-e776224f335a\") " pod="openstack/rabbitmq-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.677480 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27qdb\" (UniqueName: \"kubernetes.io/projected/9e61948b-1761-46b8-9ab3-e776224f335a-kube-api-access-27qdb\") pod \"rabbitmq-server-0\" (UID: \"9e61948b-1761-46b8-9ab3-e776224f335a\") " pod="openstack/rabbitmq-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.677496 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9e61948b-1761-46b8-9ab3-e776224f335a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9e61948b-1761-46b8-9ab3-e776224f335a\") " pod="openstack/rabbitmq-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.678800 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e61948b-1761-46b8-9ab3-e776224f335a-config-data\") pod \"rabbitmq-server-0\" (UID: 
\"9e61948b-1761-46b8-9ab3-e776224f335a\") " pod="openstack/rabbitmq-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.679366 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9e61948b-1761-46b8-9ab3-e776224f335a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9e61948b-1761-46b8-9ab3-e776224f335a\") " pod="openstack/rabbitmq-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.679643 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9e61948b-1761-46b8-9ab3-e776224f335a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9e61948b-1761-46b8-9ab3-e776224f335a\") " pod="openstack/rabbitmq-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.679946 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9e61948b-1761-46b8-9ab3-e776224f335a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9e61948b-1761-46b8-9ab3-e776224f335a\") " pod="openstack/rabbitmq-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.680121 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9e61948b-1761-46b8-9ab3-e776224f335a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9e61948b-1761-46b8-9ab3-e776224f335a\") " pod="openstack/rabbitmq-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.680154 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"9e61948b-1761-46b8-9ab3-e776224f335a\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.685274 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9e61948b-1761-46b8-9ab3-e776224f335a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9e61948b-1761-46b8-9ab3-e776224f335a\") " pod="openstack/rabbitmq-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.687224 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9e61948b-1761-46b8-9ab3-e776224f335a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9e61948b-1761-46b8-9ab3-e776224f335a\") " pod="openstack/rabbitmq-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.687705 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9e61948b-1761-46b8-9ab3-e776224f335a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9e61948b-1761-46b8-9ab3-e776224f335a\") " pod="openstack/rabbitmq-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.699280 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27qdb\" (UniqueName: \"kubernetes.io/projected/9e61948b-1761-46b8-9ab3-e776224f335a-kube-api-access-27qdb\") pod \"rabbitmq-server-0\" (UID: \"9e61948b-1761-46b8-9ab3-e776224f335a\") " pod="openstack/rabbitmq-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.701255 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/9e61948b-1761-46b8-9ab3-e776224f335a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9e61948b-1761-46b8-9ab3-e776224f335a\") " pod="openstack/rabbitmq-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.709528 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"9e61948b-1761-46b8-9ab3-e776224f335a\") " pod="openstack/rabbitmq-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.726351 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.733021 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.737975 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.742432 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.742596 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-rsthp" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.742764 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.742927 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.743080 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.743747 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.743877 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.748363 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.885229 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0241fc5c-fa26-44a1-9db3-006b438b9123-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0241fc5c-fa26-44a1-9db3-006b438b9123\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.885432 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0241fc5c-fa26-44a1-9db3-006b438b9123-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0241fc5c-fa26-44a1-9db3-006b438b9123\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.885453 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0241fc5c-fa26-44a1-9db3-006b438b9123-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0241fc5c-fa26-44a1-9db3-006b438b9123\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.885481 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0241fc5c-fa26-44a1-9db3-006b438b9123-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0241fc5c-fa26-44a1-9db3-006b438b9123\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.885515 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0241fc5c-fa26-44a1-9db3-006b438b9123-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0241fc5c-fa26-44a1-9db3-006b438b9123\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.885595 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q7tr\" (UniqueName: \"kubernetes.io/projected/0241fc5c-fa26-44a1-9db3-006b438b9123-kube-api-access-8q7tr\") pod \"rabbitmq-cell1-server-0\" (UID: \"0241fc5c-fa26-44a1-9db3-006b438b9123\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.885611 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0241fc5c-fa26-44a1-9db3-006b438b9123-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0241fc5c-fa26-44a1-9db3-006b438b9123\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.885674 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0241fc5c-fa26-44a1-9db3-006b438b9123-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0241fc5c-fa26-44a1-9db3-006b438b9123\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.885694 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"0241fc5c-fa26-44a1-9db3-006b438b9123\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.885711 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0241fc5c-fa26-44a1-9db3-006b438b9123-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0241fc5c-fa26-44a1-9db3-006b438b9123\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.885815 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0241fc5c-fa26-44a1-9db3-006b438b9123-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0241fc5c-fa26-44a1-9db3-006b438b9123\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.987556 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q7tr\" (UniqueName: \"kubernetes.io/projected/0241fc5c-fa26-44a1-9db3-006b438b9123-kube-api-access-8q7tr\") pod \"rabbitmq-cell1-server-0\" (UID: \"0241fc5c-fa26-44a1-9db3-006b438b9123\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.987674 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0241fc5c-fa26-44a1-9db3-006b438b9123-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0241fc5c-fa26-44a1-9db3-006b438b9123\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.987737 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0241fc5c-fa26-44a1-9db3-006b438b9123-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0241fc5c-fa26-44a1-9db3-006b438b9123\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.987775 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0241fc5c-fa26-44a1-9db3-006b438b9123\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.987806 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0241fc5c-fa26-44a1-9db3-006b438b9123-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0241fc5c-fa26-44a1-9db3-006b438b9123\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.987912 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0241fc5c-fa26-44a1-9db3-006b438b9123-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0241fc5c-fa26-44a1-9db3-006b438b9123\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.987951 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0241fc5c-fa26-44a1-9db3-006b438b9123-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0241fc5c-fa26-44a1-9db3-006b438b9123\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:19:39 crc 
kubenswrapper[4811]: I0122 09:19:39.987968 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0241fc5c-fa26-44a1-9db3-006b438b9123-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0241fc5c-fa26-44a1-9db3-006b438b9123\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.987983 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0241fc5c-fa26-44a1-9db3-006b438b9123-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0241fc5c-fa26-44a1-9db3-006b438b9123\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.988008 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0241fc5c-fa26-44a1-9db3-006b438b9123-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0241fc5c-fa26-44a1-9db3-006b438b9123\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:19:39 crc kubenswrapper[4811]: I0122 09:19:39.988040 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0241fc5c-fa26-44a1-9db3-006b438b9123-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0241fc5c-fa26-44a1-9db3-006b438b9123\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:19:40 crc kubenswrapper[4811]: I0122 09:19:39.988918 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0241fc5c-fa26-44a1-9db3-006b438b9123-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0241fc5c-fa26-44a1-9db3-006b438b9123\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:19:40 crc kubenswrapper[4811]: I0122 09:19:39.989298 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0241fc5c-fa26-44a1-9db3-006b438b9123\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:19:40 crc kubenswrapper[4811]: I0122 09:19:39.989842 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0241fc5c-fa26-44a1-9db3-006b438b9123-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0241fc5c-fa26-44a1-9db3-006b438b9123\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:19:40 crc kubenswrapper[4811]: I0122 09:19:39.990117 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0241fc5c-fa26-44a1-9db3-006b438b9123-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0241fc5c-fa26-44a1-9db3-006b438b9123\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:19:40 crc kubenswrapper[4811]: I0122 09:19:39.990441 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0241fc5c-fa26-44a1-9db3-006b438b9123-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0241fc5c-fa26-44a1-9db3-006b438b9123\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:19:40 crc kubenswrapper[4811]: I0122 09:19:39.992474 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/0241fc5c-fa26-44a1-9db3-006b438b9123-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0241fc5c-fa26-44a1-9db3-006b438b9123\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:19:40 crc kubenswrapper[4811]: I0122 09:19:40.002523 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0241fc5c-fa26-44a1-9db3-006b438b9123-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0241fc5c-fa26-44a1-9db3-006b438b9123\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:19:40 crc kubenswrapper[4811]: I0122 09:19:40.008832 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0241fc5c-fa26-44a1-9db3-006b438b9123-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0241fc5c-fa26-44a1-9db3-006b438b9123\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:19:40 crc kubenswrapper[4811]: I0122 09:19:40.009217 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q7tr\" (UniqueName: \"kubernetes.io/projected/0241fc5c-fa26-44a1-9db3-006b438b9123-kube-api-access-8q7tr\") pod \"rabbitmq-cell1-server-0\" (UID: \"0241fc5c-fa26-44a1-9db3-006b438b9123\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:19:40 crc kubenswrapper[4811]: I0122 09:19:40.013113 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0241fc5c-fa26-44a1-9db3-006b438b9123-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0241fc5c-fa26-44a1-9db3-006b438b9123\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:19:40 crc kubenswrapper[4811]: I0122 09:19:40.018848 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0241fc5c-fa26-44a1-9db3-006b438b9123-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0241fc5c-fa26-44a1-9db3-006b438b9123\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:19:40 crc kubenswrapper[4811]: I0122 09:19:40.029783 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0241fc5c-fa26-44a1-9db3-006b438b9123\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:19:40 crc kubenswrapper[4811]: I0122 09:19:40.096186 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:19:40 crc kubenswrapper[4811]: I0122 09:19:40.263968 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 09:19:40 crc kubenswrapper[4811]: I0122 09:19:40.514879 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 09:19:40 crc kubenswrapper[4811]: I0122 09:19:40.636519 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0241fc5c-fa26-44a1-9db3-006b438b9123","Type":"ContainerStarted","Data":"2796e813866443f7a51b4bb8a2cf7c91cb1efd4134ad227f9111830fdedf185c"} Jan 22 09:19:40 crc kubenswrapper[4811]: I0122 09:19:40.642970 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9e61948b-1761-46b8-9ab3-e776224f335a","Type":"ContainerStarted","Data":"384cd764743c61387d16fa47233c3c5901d73950512ce9d783dff5cbef81eecb"} Jan 22 09:19:40 crc kubenswrapper[4811]: I0122 09:19:40.694504 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 22 09:19:40 crc kubenswrapper[4811]: I0122 09:19:40.695859 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 22 09:19:40 crc kubenswrapper[4811]: I0122 09:19:40.697409 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 22 09:19:40 crc kubenswrapper[4811]: I0122 09:19:40.698995 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 22 09:19:40 crc kubenswrapper[4811]: I0122 09:19:40.699078 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-9wd97" Jan 22 09:19:40 crc kubenswrapper[4811]: I0122 09:19:40.699292 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 22 09:19:40 crc kubenswrapper[4811]: I0122 09:19:40.699596 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 22 09:19:40 crc kubenswrapper[4811]: I0122 09:19:40.706898 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 22 09:19:40 crc kubenswrapper[4811]: I0122 09:19:40.821405 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/02bd0635-dfd1-4e78-8fbf-57366ce83cdb-config-data-generated\") pod \"openstack-galera-0\" (UID: \"02bd0635-dfd1-4e78-8fbf-57366ce83cdb\") " pod="openstack/openstack-galera-0" Jan 22 09:19:40 crc kubenswrapper[4811]: I0122 09:19:40.821456 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/02bd0635-dfd1-4e78-8fbf-57366ce83cdb-kolla-config\") pod \"openstack-galera-0\" (UID: \"02bd0635-dfd1-4e78-8fbf-57366ce83cdb\") " pod="openstack/openstack-galera-0" Jan 22 09:19:40 crc kubenswrapper[4811]: I0122 09:19:40.821479 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zb6b\" (UniqueName: \"kubernetes.io/projected/02bd0635-dfd1-4e78-8fbf-57366ce83cdb-kube-api-access-8zb6b\") pod \"openstack-galera-0\" (UID: \"02bd0635-dfd1-4e78-8fbf-57366ce83cdb\") " pod="openstack/openstack-galera-0" Jan 22 09:19:40 crc kubenswrapper[4811]: I0122 
09:19:40.821498 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/02bd0635-dfd1-4e78-8fbf-57366ce83cdb-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"02bd0635-dfd1-4e78-8fbf-57366ce83cdb\") " pod="openstack/openstack-galera-0" Jan 22 09:19:40 crc kubenswrapper[4811]: I0122 09:19:40.821533 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"02bd0635-dfd1-4e78-8fbf-57366ce83cdb\") " pod="openstack/openstack-galera-0" Jan 22 09:19:40 crc kubenswrapper[4811]: I0122 09:19:40.821937 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02bd0635-dfd1-4e78-8fbf-57366ce83cdb-operator-scripts\") pod \"openstack-galera-0\" (UID: \"02bd0635-dfd1-4e78-8fbf-57366ce83cdb\") " pod="openstack/openstack-galera-0" Jan 22 09:19:40 crc kubenswrapper[4811]: I0122 09:19:40.821998 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02bd0635-dfd1-4e78-8fbf-57366ce83cdb-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"02bd0635-dfd1-4e78-8fbf-57366ce83cdb\") " pod="openstack/openstack-galera-0" Jan 22 09:19:40 crc kubenswrapper[4811]: I0122 09:19:40.822195 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/02bd0635-dfd1-4e78-8fbf-57366ce83cdb-config-data-default\") pod \"openstack-galera-0\" (UID: \"02bd0635-dfd1-4e78-8fbf-57366ce83cdb\") " pod="openstack/openstack-galera-0" Jan 22 09:19:40 crc kubenswrapper[4811]: I0122 09:19:40.923429 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"02bd0635-dfd1-4e78-8fbf-57366ce83cdb\") " pod="openstack/openstack-galera-0" Jan 22 09:19:40 crc kubenswrapper[4811]: I0122 09:19:40.923487 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02bd0635-dfd1-4e78-8fbf-57366ce83cdb-operator-scripts\") pod \"openstack-galera-0\" (UID: \"02bd0635-dfd1-4e78-8fbf-57366ce83cdb\") " pod="openstack/openstack-galera-0" Jan 22 09:19:40 crc kubenswrapper[4811]: I0122 09:19:40.923516 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02bd0635-dfd1-4e78-8fbf-57366ce83cdb-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"02bd0635-dfd1-4e78-8fbf-57366ce83cdb\") " pod="openstack/openstack-galera-0" Jan 22 09:19:40 crc kubenswrapper[4811]: I0122 09:19:40.923637 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/02bd0635-dfd1-4e78-8fbf-57366ce83cdb-config-data-default\") pod \"openstack-galera-0\" (UID: \"02bd0635-dfd1-4e78-8fbf-57366ce83cdb\") " pod="openstack/openstack-galera-0" Jan 22 09:19:40 crc kubenswrapper[4811]: I0122 09:19:40.923682 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/02bd0635-dfd1-4e78-8fbf-57366ce83cdb-config-data-generated\") pod \"openstack-galera-0\" (UID: \"02bd0635-dfd1-4e78-8fbf-57366ce83cdb\") " pod="openstack/openstack-galera-0" Jan 22 09:19:40 crc kubenswrapper[4811]: I0122 09:19:40.923709 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/02bd0635-dfd1-4e78-8fbf-57366ce83cdb-kolla-config\") pod \"openstack-galera-0\" (UID: \"02bd0635-dfd1-4e78-8fbf-57366ce83cdb\") " pod="openstack/openstack-galera-0" Jan 22 09:19:40 crc kubenswrapper[4811]: I0122 09:19:40.923728 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zb6b\" (UniqueName: \"kubernetes.io/projected/02bd0635-dfd1-4e78-8fbf-57366ce83cdb-kube-api-access-8zb6b\") pod \"openstack-galera-0\" (UID: \"02bd0635-dfd1-4e78-8fbf-57366ce83cdb\") " pod="openstack/openstack-galera-0" Jan 22 09:19:40 crc kubenswrapper[4811]: I0122 09:19:40.923744 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/02bd0635-dfd1-4e78-8fbf-57366ce83cdb-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"02bd0635-dfd1-4e78-8fbf-57366ce83cdb\") " pod="openstack/openstack-galera-0" Jan 22 09:19:40 crc kubenswrapper[4811]: I0122 09:19:40.927667 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/02bd0635-dfd1-4e78-8fbf-57366ce83cdb-kolla-config\") pod \"openstack-galera-0\" (UID: \"02bd0635-dfd1-4e78-8fbf-57366ce83cdb\") " pod="openstack/openstack-galera-0" Jan 22 09:19:40 crc kubenswrapper[4811]: I0122 09:19:40.927720 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/02bd0635-dfd1-4e78-8fbf-57366ce83cdb-config-data-default\") pod \"openstack-galera-0\" (UID: \"02bd0635-dfd1-4e78-8fbf-57366ce83cdb\") " pod="openstack/openstack-galera-0" Jan 22 09:19:40 crc kubenswrapper[4811]: I0122 09:19:40.927807 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/02bd0635-dfd1-4e78-8fbf-57366ce83cdb-config-data-generated\") pod \"openstack-galera-0\" (UID: \"02bd0635-dfd1-4e78-8fbf-57366ce83cdb\") " pod="openstack/openstack-galera-0" Jan 22 09:19:40 crc kubenswrapper[4811]: I0122 09:19:40.927932 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"02bd0635-dfd1-4e78-8fbf-57366ce83cdb\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-galera-0" Jan 22 09:19:40 crc kubenswrapper[4811]: I0122 09:19:40.931603 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02bd0635-dfd1-4e78-8fbf-57366ce83cdb-operator-scripts\") pod \"openstack-galera-0\" (UID: \"02bd0635-dfd1-4e78-8fbf-57366ce83cdb\") " pod="openstack/openstack-galera-0" Jan 22 09:19:40 crc kubenswrapper[4811]: I0122 09:19:40.942211 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/02bd0635-dfd1-4e78-8fbf-57366ce83cdb-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"02bd0635-dfd1-4e78-8fbf-57366ce83cdb\") " pod="openstack/openstack-galera-0" Jan 22 09:19:40 crc 
kubenswrapper[4811]: I0122 09:19:40.942746 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02bd0635-dfd1-4e78-8fbf-57366ce83cdb-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"02bd0635-dfd1-4e78-8fbf-57366ce83cdb\") " pod="openstack/openstack-galera-0" Jan 22 09:19:40 crc kubenswrapper[4811]: I0122 09:19:40.946482 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zb6b\" (UniqueName: \"kubernetes.io/projected/02bd0635-dfd1-4e78-8fbf-57366ce83cdb-kube-api-access-8zb6b\") pod \"openstack-galera-0\" (UID: \"02bd0635-dfd1-4e78-8fbf-57366ce83cdb\") " pod="openstack/openstack-galera-0" Jan 22 09:19:40 crc kubenswrapper[4811]: I0122 09:19:40.960279 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"02bd0635-dfd1-4e78-8fbf-57366ce83cdb\") " pod="openstack/openstack-galera-0" Jan 22 09:19:41 crc kubenswrapper[4811]: I0122 09:19:41.035395 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 22 09:19:41 crc kubenswrapper[4811]: I0122 09:19:41.605126 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.043112 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.049398 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.051077 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-2v2kn" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.052846 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.053202 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.054372 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.072539 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.151815 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/db2ecb97-db87-43bf-8ffb-7cbd7460ba19-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"db2ecb97-db87-43bf-8ffb-7cbd7460ba19\") " pod="openstack/openstack-cell1-galera-0" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.151912 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db2ecb97-db87-43bf-8ffb-7cbd7460ba19-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"db2ecb97-db87-43bf-8ffb-7cbd7460ba19\") " pod="openstack/openstack-cell1-galera-0" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.151930 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-xg2bd\" (UniqueName: \"kubernetes.io/projected/db2ecb97-db87-43bf-8ffb-7cbd7460ba19-kube-api-access-xg2bd\") pod \"openstack-cell1-galera-0\" (UID: \"db2ecb97-db87-43bf-8ffb-7cbd7460ba19\") " pod="openstack/openstack-cell1-galera-0" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.152735 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/db2ecb97-db87-43bf-8ffb-7cbd7460ba19-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"db2ecb97-db87-43bf-8ffb-7cbd7460ba19\") " pod="openstack/openstack-cell1-galera-0" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.153370 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/db2ecb97-db87-43bf-8ffb-7cbd7460ba19-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"db2ecb97-db87-43bf-8ffb-7cbd7460ba19\") " pod="openstack/openstack-cell1-galera-0" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.153453 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"db2ecb97-db87-43bf-8ffb-7cbd7460ba19\") " pod="openstack/openstack-cell1-galera-0" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.153538 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db2ecb97-db87-43bf-8ffb-7cbd7460ba19-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"db2ecb97-db87-43bf-8ffb-7cbd7460ba19\") " pod="openstack/openstack-cell1-galera-0" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.153586 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/db2ecb97-db87-43bf-8ffb-7cbd7460ba19-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"db2ecb97-db87-43bf-8ffb-7cbd7460ba19\") " pod="openstack/openstack-cell1-galera-0" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.255802 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/db2ecb97-db87-43bf-8ffb-7cbd7460ba19-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"db2ecb97-db87-43bf-8ffb-7cbd7460ba19\") " pod="openstack/openstack-cell1-galera-0" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.256128 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/db2ecb97-db87-43bf-8ffb-7cbd7460ba19-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"db2ecb97-db87-43bf-8ffb-7cbd7460ba19\") " pod="openstack/openstack-cell1-galera-0" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.256200 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db2ecb97-db87-43bf-8ffb-7cbd7460ba19-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"db2ecb97-db87-43bf-8ffb-7cbd7460ba19\") " pod="openstack/openstack-cell1-galera-0" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.256216 4811 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-xg2bd\" (UniqueName: \"kubernetes.io/projected/db2ecb97-db87-43bf-8ffb-7cbd7460ba19-kube-api-access-xg2bd\") pod \"openstack-cell1-galera-0\" (UID: \"db2ecb97-db87-43bf-8ffb-7cbd7460ba19\") " pod="openstack/openstack-cell1-galera-0" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.256233 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/db2ecb97-db87-43bf-8ffb-7cbd7460ba19-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"db2ecb97-db87-43bf-8ffb-7cbd7460ba19\") " pod="openstack/openstack-cell1-galera-0" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.256324 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/db2ecb97-db87-43bf-8ffb-7cbd7460ba19-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"db2ecb97-db87-43bf-8ffb-7cbd7460ba19\") " pod="openstack/openstack-cell1-galera-0" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.256430 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"db2ecb97-db87-43bf-8ffb-7cbd7460ba19\") " pod="openstack/openstack-cell1-galera-0" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.256483 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db2ecb97-db87-43bf-8ffb-7cbd7460ba19-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"db2ecb97-db87-43bf-8ffb-7cbd7460ba19\") " pod="openstack/openstack-cell1-galera-0" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.259109 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db2ecb97-db87-43bf-8ffb-7cbd7460ba19-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"db2ecb97-db87-43bf-8ffb-7cbd7460ba19\") " pod="openstack/openstack-cell1-galera-0" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.259524 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/db2ecb97-db87-43bf-8ffb-7cbd7460ba19-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"db2ecb97-db87-43bf-8ffb-7cbd7460ba19\") " pod="openstack/openstack-cell1-galera-0" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.261154 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/db2ecb97-db87-43bf-8ffb-7cbd7460ba19-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"db2ecb97-db87-43bf-8ffb-7cbd7460ba19\") " pod="openstack/openstack-cell1-galera-0" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.261730 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"db2ecb97-db87-43bf-8ffb-7cbd7460ba19\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-cell1-galera-0" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.262152 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/db2ecb97-db87-43bf-8ffb-7cbd7460ba19-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"db2ecb97-db87-43bf-8ffb-7cbd7460ba19\") " pod="openstack/openstack-cell1-galera-0" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.287901 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db2ecb97-db87-43bf-8ffb-7cbd7460ba19-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"db2ecb97-db87-43bf-8ffb-7cbd7460ba19\") " pod="openstack/openstack-cell1-galera-0" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.288033 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/db2ecb97-db87-43bf-8ffb-7cbd7460ba19-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"db2ecb97-db87-43bf-8ffb-7cbd7460ba19\") " pod="openstack/openstack-cell1-galera-0" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.295720 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"db2ecb97-db87-43bf-8ffb-7cbd7460ba19\") " pod="openstack/openstack-cell1-galera-0" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.335946 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg2bd\" (UniqueName: \"kubernetes.io/projected/db2ecb97-db87-43bf-8ffb-7cbd7460ba19-kube-api-access-xg2bd\") pod \"openstack-cell1-galera-0\" (UID: \"db2ecb97-db87-43bf-8ffb-7cbd7460ba19\") " pod="openstack/openstack-cell1-galera-0" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.389017 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.434780 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.435531 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.441907 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-dlrhf" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.441959 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.442228 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.447248 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.560148 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/864b7037-2c34-48b6-b75d-38110d9816dc-kolla-config\") pod \"memcached-0\" (UID: \"864b7037-2c34-48b6-b75d-38110d9816dc\") " pod="openstack/memcached-0" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.560181 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/864b7037-2c34-48b6-b75d-38110d9816dc-config-data\") pod \"memcached-0\" (UID: \"864b7037-2c34-48b6-b75d-38110d9816dc\") " pod="openstack/memcached-0" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.560222 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/864b7037-2c34-48b6-b75d-38110d9816dc-combined-ca-bundle\") pod \"memcached-0\" (UID: \"864b7037-2c34-48b6-b75d-38110d9816dc\") " pod="openstack/memcached-0" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.560262 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/864b7037-2c34-48b6-b75d-38110d9816dc-memcached-tls-certs\") pod \"memcached-0\" (UID: \"864b7037-2c34-48b6-b75d-38110d9816dc\") " pod="openstack/memcached-0" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.560291 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bbw6\" (UniqueName: \"kubernetes.io/projected/864b7037-2c34-48b6-b75d-38110d9816dc-kube-api-access-5bbw6\") pod \"memcached-0\" (UID: \"864b7037-2c34-48b6-b75d-38110d9816dc\") " pod="openstack/memcached-0" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.661488 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/864b7037-2c34-48b6-b75d-38110d9816dc-memcached-tls-certs\") pod \"memcached-0\" (UID: \"864b7037-2c34-48b6-b75d-38110d9816dc\") " pod="openstack/memcached-0" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.661539 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bbw6\" (UniqueName: \"kubernetes.io/projected/864b7037-2c34-48b6-b75d-38110d9816dc-kube-api-access-5bbw6\") pod \"memcached-0\" (UID: \"864b7037-2c34-48b6-b75d-38110d9816dc\") " pod="openstack/memcached-0" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.661662 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/864b7037-2c34-48b6-b75d-38110d9816dc-kolla-config\") pod \"memcached-0\" (UID: \"864b7037-2c34-48b6-b75d-38110d9816dc\") " pod="openstack/memcached-0" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.661678 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/864b7037-2c34-48b6-b75d-38110d9816dc-config-data\") pod \"memcached-0\" (UID: \"864b7037-2c34-48b6-b75d-38110d9816dc\") " pod="openstack/memcached-0" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.661717 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/864b7037-2c34-48b6-b75d-38110d9816dc-combined-ca-bundle\") pod \"memcached-0\" (UID: \"864b7037-2c34-48b6-b75d-38110d9816dc\") " pod="openstack/memcached-0" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.662407 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/864b7037-2c34-48b6-b75d-38110d9816dc-kolla-config\") pod \"memcached-0\" (UID: \"864b7037-2c34-48b6-b75d-38110d9816dc\") " pod="openstack/memcached-0" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.662885 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/864b7037-2c34-48b6-b75d-38110d9816dc-config-data\") pod \"memcached-0\" (UID: \"864b7037-2c34-48b6-b75d-38110d9816dc\") " pod="openstack/memcached-0" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.683769 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/864b7037-2c34-48b6-b75d-38110d9816dc-memcached-tls-certs\") pod \"memcached-0\" (UID: \"864b7037-2c34-48b6-b75d-38110d9816dc\") " pod="openstack/memcached-0" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.684315 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/864b7037-2c34-48b6-b75d-38110d9816dc-combined-ca-bundle\") pod \"memcached-0\" (UID: \"864b7037-2c34-48b6-b75d-38110d9816dc\") " pod="openstack/memcached-0" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.724754 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bbw6\" (UniqueName: \"kubernetes.io/projected/864b7037-2c34-48b6-b75d-38110d9816dc-kube-api-access-5bbw6\") pod \"memcached-0\" (UID: \"864b7037-2c34-48b6-b75d-38110d9816dc\") " pod="openstack/memcached-0" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.762438 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 22 09:19:42 crc kubenswrapper[4811]: I0122 09:19:42.767049 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"02bd0635-dfd1-4e78-8fbf-57366ce83cdb","Type":"ContainerStarted","Data":"763dcf5e1118ee30c1821a433397da314c9746e93af812d290a25e6694f780e2"} Jan 22 09:19:43 crc kubenswrapper[4811]: I0122 09:19:43.418767 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 22 09:19:43 crc kubenswrapper[4811]: W0122 09:19:43.445090 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod864b7037_2c34_48b6_b75d_38110d9816dc.slice/crio-1796e027702b3f9f973ff035a3ed89a0f9a346bc4b33b6376c2b82b8fe496f52 WatchSource:0}: Error finding container 1796e027702b3f9f973ff035a3ed89a0f9a346bc4b33b6376c2b82b8fe496f52: Status 404 returned error can't find the container with id 1796e027702b3f9f973ff035a3ed89a0f9a346bc4b33b6376c2b82b8fe496f52 Jan 22 09:19:43 crc kubenswrapper[4811]: I0122 09:19:43.522261 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 22 09:19:43 crc kubenswrapper[4811]: W0122 09:19:43.529077 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb2ecb97_db87_43bf_8ffb_7cbd7460ba19.slice/crio-6f0907c809b61e8559c57657aada4b06332f704f73f9ef403659b49a915ce9f4 WatchSource:0}: Error finding container 6f0907c809b61e8559c57657aada4b06332f704f73f9ef403659b49a915ce9f4: Status 404 returned error can't find the container with id 6f0907c809b61e8559c57657aada4b06332f704f73f9ef403659b49a915ce9f4 Jan 22 09:19:43 crc kubenswrapper[4811]: I0122 09:19:43.782870 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"db2ecb97-db87-43bf-8ffb-7cbd7460ba19","Type":"ContainerStarted","Data":"6f0907c809b61e8559c57657aada4b06332f704f73f9ef403659b49a915ce9f4"} Jan 22 09:19:43 crc kubenswrapper[4811]: I0122 09:19:43.787117 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"864b7037-2c34-48b6-b75d-38110d9816dc","Type":"ContainerStarted","Data":"1796e027702b3f9f973ff035a3ed89a0f9a346bc4b33b6376c2b82b8fe496f52"} Jan 22 09:19:44 crc kubenswrapper[4811]: I0122 09:19:44.079231 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 09:19:44 crc kubenswrapper[4811]: I0122 09:19:44.080039 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 22 09:19:44 crc kubenswrapper[4811]: I0122 09:19:44.088750 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-2bbdw" Jan 22 09:19:44 crc kubenswrapper[4811]: I0122 09:19:44.112855 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 09:19:44 crc kubenswrapper[4811]: I0122 09:19:44.201500 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz9pf\" (UniqueName: \"kubernetes.io/projected/e8d5030f-13fb-403c-9d6d-e9d87f27800f-kube-api-access-xz9pf\") pod \"kube-state-metrics-0\" (UID: \"e8d5030f-13fb-403c-9d6d-e9d87f27800f\") " pod="openstack/kube-state-metrics-0" Jan 22 09:19:44 crc kubenswrapper[4811]: I0122 09:19:44.302399 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xz9pf\" (UniqueName: \"kubernetes.io/projected/e8d5030f-13fb-403c-9d6d-e9d87f27800f-kube-api-access-xz9pf\") pod \"kube-state-metrics-0\" (UID: \"e8d5030f-13fb-403c-9d6d-e9d87f27800f\") " pod="openstack/kube-state-metrics-0" Jan 22 09:19:44 crc kubenswrapper[4811]: I0122 09:19:44.337670 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz9pf\" (UniqueName: \"kubernetes.io/projected/e8d5030f-13fb-403c-9d6d-e9d87f27800f-kube-api-access-xz9pf\") pod \"kube-state-metrics-0\" (UID: \"e8d5030f-13fb-403c-9d6d-e9d87f27800f\") " pod="openstack/kube-state-metrics-0" Jan 22 09:19:44 crc kubenswrapper[4811]: I0122 09:19:44.402148 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.233869 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-nm2bb"] Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.235226 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nm2bb" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.237299 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-rdhc2" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.237378 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.238324 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.247404 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-kdqzr"] Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.248503 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-kdqzr" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.264118 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-kdqzr"] Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.279750 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nm2bb"] Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.303276 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.304415 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.305679 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.305935 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.306078 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.307142 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.307323 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-sxgnz" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.314299 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.367218 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cbf54ce8-3114-43c1-a1ce-6a13dd41297a-scripts\") pod \"ovn-controller-ovs-kdqzr\" (UID: \"cbf54ce8-3114-43c1-a1ce-6a13dd41297a\") " pod="openstack/ovn-controller-ovs-kdqzr" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.367272 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phwml\" (UniqueName: \"kubernetes.io/projected/29754ede-0901-4bbd-aa87-49a8e93050b9-kube-api-access-phwml\") pod \"ovn-controller-nm2bb\" (UID: \"29754ede-0901-4bbd-aa87-49a8e93050b9\") " pod="openstack/ovn-controller-nm2bb" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.367305 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cbf54ce8-3114-43c1-a1ce-6a13dd41297a-var-run\") pod \"ovn-controller-ovs-kdqzr\" (UID: \"cbf54ce8-3114-43c1-a1ce-6a13dd41297a\") " pod="openstack/ovn-controller-ovs-kdqzr" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.367322 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/29754ede-0901-4bbd-aa87-49a8e93050b9-var-run\") pod \"ovn-controller-nm2bb\" (UID: \"29754ede-0901-4bbd-aa87-49a8e93050b9\") " pod="openstack/ovn-controller-nm2bb" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.367342 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njn8c\" (UniqueName: \"kubernetes.io/projected/cbf54ce8-3114-43c1-a1ce-6a13dd41297a-kube-api-access-njn8c\") pod \"ovn-controller-ovs-kdqzr\" (UID: \"cbf54ce8-3114-43c1-a1ce-6a13dd41297a\") " pod="openstack/ovn-controller-ovs-kdqzr" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.367389 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29754ede-0901-4bbd-aa87-49a8e93050b9-combined-ca-bundle\") pod \"ovn-controller-nm2bb\" (UID: \"29754ede-0901-4bbd-aa87-49a8e93050b9\") " pod="openstack/ovn-controller-nm2bb" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.367482 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29754ede-0901-4bbd-aa87-49a8e93050b9-scripts\") pod \"ovn-controller-nm2bb\" (UID: \"29754ede-0901-4bbd-aa87-49a8e93050b9\") " pod="openstack/ovn-controller-nm2bb" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.367510 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/29754ede-0901-4bbd-aa87-49a8e93050b9-var-log-ovn\") pod \"ovn-controller-nm2bb\" (UID: \"29754ede-0901-4bbd-aa87-49a8e93050b9\") " pod="openstack/ovn-controller-nm2bb" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.367530 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/cbf54ce8-3114-43c1-a1ce-6a13dd41297a-var-lib\") pod \"ovn-controller-ovs-kdqzr\" (UID: \"cbf54ce8-3114-43c1-a1ce-6a13dd41297a\") " pod="openstack/ovn-controller-ovs-kdqzr" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.367552 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/29754ede-0901-4bbd-aa87-49a8e93050b9-ovn-controller-tls-certs\") pod \"ovn-controller-nm2bb\" (UID: \"29754ede-0901-4bbd-aa87-49a8e93050b9\") " pod="openstack/ovn-controller-nm2bb" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.367565 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/29754ede-0901-4bbd-aa87-49a8e93050b9-var-run-ovn\") pod \"ovn-controller-nm2bb\" (UID: \"29754ede-0901-4bbd-aa87-49a8e93050b9\") " pod="openstack/ovn-controller-nm2bb" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.367584 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/cbf54ce8-3114-43c1-a1ce-6a13dd41297a-var-log\") pod \"ovn-controller-ovs-kdqzr\" (UID: \"cbf54ce8-3114-43c1-a1ce-6a13dd41297a\") " pod="openstack/ovn-controller-ovs-kdqzr" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.367616 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/cbf54ce8-3114-43c1-a1ce-6a13dd41297a-etc-ovs\") pod \"ovn-controller-ovs-kdqzr\" (UID: \"cbf54ce8-3114-43c1-a1ce-6a13dd41297a\") " pod="openstack/ovn-controller-ovs-kdqzr" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.475276 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4344c0dd-b6d6-4448-b943-0e036ee2098b-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4344c0dd-b6d6-4448-b943-0e036ee2098b\") " pod="openstack/ovsdbserver-nb-0" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.475326 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phwml\" (UniqueName: \"kubernetes.io/projected/29754ede-0901-4bbd-aa87-49a8e93050b9-kube-api-access-phwml\") pod \"ovn-controller-nm2bb\" (UID: \"29754ede-0901-4bbd-aa87-49a8e93050b9\") " pod="openstack/ovn-controller-nm2bb" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.475359 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/cbf54ce8-3114-43c1-a1ce-6a13dd41297a-var-run\") pod \"ovn-controller-ovs-kdqzr\" (UID: \"cbf54ce8-3114-43c1-a1ce-6a13dd41297a\") " pod="openstack/ovn-controller-ovs-kdqzr" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.475430 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/29754ede-0901-4bbd-aa87-49a8e93050b9-var-run\") pod \"ovn-controller-nm2bb\" (UID: \"29754ede-0901-4bbd-aa87-49a8e93050b9\") " pod="openstack/ovn-controller-nm2bb" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.475827 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/29754ede-0901-4bbd-aa87-49a8e93050b9-var-run\") pod \"ovn-controller-nm2bb\" (UID: \"29754ede-0901-4bbd-aa87-49a8e93050b9\") " pod="openstack/ovn-controller-nm2bb" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.475859 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4344c0dd-b6d6-4448-b943-0e036ee2098b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4344c0dd-b6d6-4448-b943-0e036ee2098b\") " pod="openstack/ovsdbserver-nb-0" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.475900 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njn8c\" (UniqueName: \"kubernetes.io/projected/cbf54ce8-3114-43c1-a1ce-6a13dd41297a-kube-api-access-njn8c\") pod \"ovn-controller-ovs-kdqzr\" (UID: \"cbf54ce8-3114-43c1-a1ce-6a13dd41297a\") " pod="openstack/ovn-controller-ovs-kdqzr" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.475920 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4344c0dd-b6d6-4448-b943-0e036ee2098b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4344c0dd-b6d6-4448-b943-0e036ee2098b\") " pod="openstack/ovsdbserver-nb-0" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.475947 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29754ede-0901-4bbd-aa87-49a8e93050b9-combined-ca-bundle\") pod \"ovn-controller-nm2bb\" (UID: \"29754ede-0901-4bbd-aa87-49a8e93050b9\") " pod="openstack/ovn-controller-nm2bb" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.475983 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29754ede-0901-4bbd-aa87-49a8e93050b9-scripts\") pod \"ovn-controller-nm2bb\" (UID: \"29754ede-0901-4bbd-aa87-49a8e93050b9\") " pod="openstack/ovn-controller-nm2bb" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.476007 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4344c0dd-b6d6-4448-b943-0e036ee2098b-config\") pod \"ovsdbserver-nb-0\" (UID: \"4344c0dd-b6d6-4448-b943-0e036ee2098b\") " pod="openstack/ovsdbserver-nb-0" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.476038 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/29754ede-0901-4bbd-aa87-49a8e93050b9-var-log-ovn\") pod \"ovn-controller-nm2bb\" (UID: \"29754ede-0901-4bbd-aa87-49a8e93050b9\") " pod="openstack/ovn-controller-nm2bb" 
Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.476060 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/cbf54ce8-3114-43c1-a1ce-6a13dd41297a-var-lib\") pod \"ovn-controller-ovs-kdqzr\" (UID: \"cbf54ce8-3114-43c1-a1ce-6a13dd41297a\") " pod="openstack/ovn-controller-ovs-kdqzr" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.476083 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4344c0dd-b6d6-4448-b943-0e036ee2098b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"4344c0dd-b6d6-4448-b943-0e036ee2098b\") " pod="openstack/ovsdbserver-nb-0" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.476099 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/29754ede-0901-4bbd-aa87-49a8e93050b9-ovn-controller-tls-certs\") pod \"ovn-controller-nm2bb\" (UID: \"29754ede-0901-4bbd-aa87-49a8e93050b9\") " pod="openstack/ovn-controller-nm2bb" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.476116 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/29754ede-0901-4bbd-aa87-49a8e93050b9-var-run-ovn\") pod \"ovn-controller-nm2bb\" (UID: \"29754ede-0901-4bbd-aa87-49a8e93050b9\") " pod="openstack/ovn-controller-nm2bb" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.476131 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/cbf54ce8-3114-43c1-a1ce-6a13dd41297a-var-log\") pod \"ovn-controller-ovs-kdqzr\" (UID: \"cbf54ce8-3114-43c1-a1ce-6a13dd41297a\") " pod="openstack/ovn-controller-ovs-kdqzr" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.476163 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"4344c0dd-b6d6-4448-b943-0e036ee2098b\") " pod="openstack/ovsdbserver-nb-0" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.476181 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/cbf54ce8-3114-43c1-a1ce-6a13dd41297a-etc-ovs\") pod \"ovn-controller-ovs-kdqzr\" (UID: \"cbf54ce8-3114-43c1-a1ce-6a13dd41297a\") " pod="openstack/ovn-controller-ovs-kdqzr" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.476214 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4344c0dd-b6d6-4448-b943-0e036ee2098b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4344c0dd-b6d6-4448-b943-0e036ee2098b\") " pod="openstack/ovsdbserver-nb-0" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.476229 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nchrl\" (UniqueName: \"kubernetes.io/projected/4344c0dd-b6d6-4448-b943-0e036ee2098b-kube-api-access-nchrl\") pod \"ovsdbserver-nb-0\" (UID: \"4344c0dd-b6d6-4448-b943-0e036ee2098b\") " pod="openstack/ovsdbserver-nb-0" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.476260 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/cbf54ce8-3114-43c1-a1ce-6a13dd41297a-scripts\") pod \"ovn-controller-ovs-kdqzr\" (UID: \"cbf54ce8-3114-43c1-a1ce-6a13dd41297a\") " pod="openstack/ovn-controller-ovs-kdqzr" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.477489 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/cbf54ce8-3114-43c1-a1ce-6a13dd41297a-etc-ovs\") pod \"ovn-controller-ovs-kdqzr\" (UID: \"cbf54ce8-3114-43c1-a1ce-6a13dd41297a\") " pod="openstack/ovn-controller-ovs-kdqzr" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.477580 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/29754ede-0901-4bbd-aa87-49a8e93050b9-var-run-ovn\") pod \"ovn-controller-nm2bb\" (UID: \"29754ede-0901-4bbd-aa87-49a8e93050b9\") " pod="openstack/ovn-controller-nm2bb" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.477746 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/cbf54ce8-3114-43c1-a1ce-6a13dd41297a-var-log\") pod \"ovn-controller-ovs-kdqzr\" (UID: \"cbf54ce8-3114-43c1-a1ce-6a13dd41297a\") " pod="openstack/ovn-controller-ovs-kdqzr" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.478092 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/cbf54ce8-3114-43c1-a1ce-6a13dd41297a-var-lib\") pod \"ovn-controller-ovs-kdqzr\" (UID: \"cbf54ce8-3114-43c1-a1ce-6a13dd41297a\") " pod="openstack/ovn-controller-ovs-kdqzr" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.478495 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/29754ede-0901-4bbd-aa87-49a8e93050b9-var-log-ovn\") pod \"ovn-controller-nm2bb\" (UID: \"29754ede-0901-4bbd-aa87-49a8e93050b9\") " pod="openstack/ovn-controller-nm2bb" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.479559 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29754ede-0901-4bbd-aa87-49a8e93050b9-scripts\") pod \"ovn-controller-nm2bb\" (UID: \"29754ede-0901-4bbd-aa87-49a8e93050b9\") " pod="openstack/ovn-controller-nm2bb" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.479892 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cbf54ce8-3114-43c1-a1ce-6a13dd41297a-scripts\") pod \"ovn-controller-ovs-kdqzr\" (UID: \"cbf54ce8-3114-43c1-a1ce-6a13dd41297a\") " pod="openstack/ovn-controller-ovs-kdqzr" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.480821 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cbf54ce8-3114-43c1-a1ce-6a13dd41297a-var-run\") pod \"ovn-controller-ovs-kdqzr\" (UID: \"cbf54ce8-3114-43c1-a1ce-6a13dd41297a\") " pod="openstack/ovn-controller-ovs-kdqzr" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.488942 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29754ede-0901-4bbd-aa87-49a8e93050b9-combined-ca-bundle\") pod \"ovn-controller-nm2bb\" (UID: \"29754ede-0901-4bbd-aa87-49a8e93050b9\") " pod="openstack/ovn-controller-nm2bb" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.489538 4811 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/29754ede-0901-4bbd-aa87-49a8e93050b9-ovn-controller-tls-certs\") pod \"ovn-controller-nm2bb\" (UID: \"29754ede-0901-4bbd-aa87-49a8e93050b9\") " pod="openstack/ovn-controller-nm2bb" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.496271 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njn8c\" (UniqueName: \"kubernetes.io/projected/cbf54ce8-3114-43c1-a1ce-6a13dd41297a-kube-api-access-njn8c\") pod \"ovn-controller-ovs-kdqzr\" (UID: \"cbf54ce8-3114-43c1-a1ce-6a13dd41297a\") " pod="openstack/ovn-controller-ovs-kdqzr" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.496289 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phwml\" (UniqueName: \"kubernetes.io/projected/29754ede-0901-4bbd-aa87-49a8e93050b9-kube-api-access-phwml\") pod \"ovn-controller-nm2bb\" (UID: \"29754ede-0901-4bbd-aa87-49a8e93050b9\") " pod="openstack/ovn-controller-nm2bb" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.551232 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nm2bb" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.564258 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-kdqzr" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.577718 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4344c0dd-b6d6-4448-b943-0e036ee2098b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4344c0dd-b6d6-4448-b943-0e036ee2098b\") " pod="openstack/ovsdbserver-nb-0" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.577763 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4344c0dd-b6d6-4448-b943-0e036ee2098b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4344c0dd-b6d6-4448-b943-0e036ee2098b\") " pod="openstack/ovsdbserver-nb-0" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.577839 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4344c0dd-b6d6-4448-b943-0e036ee2098b-config\") pod \"ovsdbserver-nb-0\" (UID: \"4344c0dd-b6d6-4448-b943-0e036ee2098b\") " pod="openstack/ovsdbserver-nb-0" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.577893 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4344c0dd-b6d6-4448-b943-0e036ee2098b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"4344c0dd-b6d6-4448-b943-0e036ee2098b\") " pod="openstack/ovsdbserver-nb-0" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.577958 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"4344c0dd-b6d6-4448-b943-0e036ee2098b\") " pod="openstack/ovsdbserver-nb-0" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.577993 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4344c0dd-b6d6-4448-b943-0e036ee2098b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: 
\"4344c0dd-b6d6-4448-b943-0e036ee2098b\") " pod="openstack/ovsdbserver-nb-0" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.578008 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nchrl\" (UniqueName: \"kubernetes.io/projected/4344c0dd-b6d6-4448-b943-0e036ee2098b-kube-api-access-nchrl\") pod \"ovsdbserver-nb-0\" (UID: \"4344c0dd-b6d6-4448-b943-0e036ee2098b\") " pod="openstack/ovsdbserver-nb-0" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.578059 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4344c0dd-b6d6-4448-b943-0e036ee2098b-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4344c0dd-b6d6-4448-b943-0e036ee2098b\") " pod="openstack/ovsdbserver-nb-0" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.582021 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4344c0dd-b6d6-4448-b943-0e036ee2098b-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4344c0dd-b6d6-4448-b943-0e036ee2098b\") " pod="openstack/ovsdbserver-nb-0" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.583115 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"4344c0dd-b6d6-4448-b943-0e036ee2098b\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-nb-0" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.583225 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4344c0dd-b6d6-4448-b943-0e036ee2098b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4344c0dd-b6d6-4448-b943-0e036ee2098b\") " pod="openstack/ovsdbserver-nb-0" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.583492 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4344c0dd-b6d6-4448-b943-0e036ee2098b-config\") pod \"ovsdbserver-nb-0\" (UID: \"4344c0dd-b6d6-4448-b943-0e036ee2098b\") " pod="openstack/ovsdbserver-nb-0" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.588517 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4344c0dd-b6d6-4448-b943-0e036ee2098b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"4344c0dd-b6d6-4448-b943-0e036ee2098b\") " pod="openstack/ovsdbserver-nb-0" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.592378 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4344c0dd-b6d6-4448-b943-0e036ee2098b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4344c0dd-b6d6-4448-b943-0e036ee2098b\") " pod="openstack/ovsdbserver-nb-0" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.599688 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4344c0dd-b6d6-4448-b943-0e036ee2098b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4344c0dd-b6d6-4448-b943-0e036ee2098b\") " pod="openstack/ovsdbserver-nb-0" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.600417 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nchrl\" (UniqueName: 
\"kubernetes.io/projected/4344c0dd-b6d6-4448-b943-0e036ee2098b-kube-api-access-nchrl\") pod \"ovsdbserver-nb-0\" (UID: \"4344c0dd-b6d6-4448-b943-0e036ee2098b\") " pod="openstack/ovsdbserver-nb-0" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.604413 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"4344c0dd-b6d6-4448-b943-0e036ee2098b\") " pod="openstack/ovsdbserver-nb-0" Jan 22 09:19:48 crc kubenswrapper[4811]: I0122 09:19:48.622323 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 22 09:19:50 crc kubenswrapper[4811]: I0122 09:19:50.498721 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 22 09:19:50 crc kubenswrapper[4811]: I0122 09:19:50.501179 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 22 09:19:50 crc kubenswrapper[4811]: I0122 09:19:50.503050 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 22 09:19:50 crc kubenswrapper[4811]: I0122 09:19:50.504280 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 22 09:19:50 crc kubenswrapper[4811]: I0122 09:19:50.504488 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 22 09:19:50 crc kubenswrapper[4811]: I0122 09:19:50.504656 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-phfkt" Jan 22 09:19:50 crc kubenswrapper[4811]: I0122 09:19:50.507446 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 22 09:19:50 crc kubenswrapper[4811]: I0122 09:19:50.618296 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b\") " pod="openstack/ovsdbserver-sb-0" Jan 22 09:19:50 crc kubenswrapper[4811]: I0122 09:19:50.618348 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b\") " pod="openstack/ovsdbserver-sb-0" Jan 22 09:19:50 crc kubenswrapper[4811]: I0122 09:19:50.618394 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b\") " pod="openstack/ovsdbserver-sb-0" Jan 22 09:19:50 crc kubenswrapper[4811]: I0122 09:19:50.618412 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnm77\" (UniqueName: \"kubernetes.io/projected/d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b-kube-api-access-gnm77\") pod \"ovsdbserver-sb-0\" (UID: \"d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b\") " pod="openstack/ovsdbserver-sb-0" Jan 22 09:19:50 crc kubenswrapper[4811]: I0122 09:19:50.618488 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b\") " pod="openstack/ovsdbserver-sb-0" Jan 22 09:19:50 crc kubenswrapper[4811]: I0122 09:19:50.618517 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b-config\") pod \"ovsdbserver-sb-0\" (UID: \"d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b\") " pod="openstack/ovsdbserver-sb-0" Jan 22 09:19:50 crc kubenswrapper[4811]: I0122 09:19:50.618669 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b\") " pod="openstack/ovsdbserver-sb-0" Jan 22 09:19:50 crc kubenswrapper[4811]: I0122 09:19:50.618820 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b\") " pod="openstack/ovsdbserver-sb-0" Jan 22 09:19:50 crc kubenswrapper[4811]: I0122 09:19:50.727497 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b\") " pod="openstack/ovsdbserver-sb-0" Jan 22 09:19:50 crc kubenswrapper[4811]: I0122 09:19:50.727552 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b-config\") pod \"ovsdbserver-sb-0\" (UID: \"d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b\") " pod="openstack/ovsdbserver-sb-0" Jan 22 09:19:50 crc kubenswrapper[4811]: I0122 09:19:50.727596 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b\") " pod="openstack/ovsdbserver-sb-0" Jan 22 09:19:50 crc kubenswrapper[4811]: I0122 09:19:50.727664 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b\") " pod="openstack/ovsdbserver-sb-0" Jan 22 09:19:50 crc kubenswrapper[4811]: I0122 09:19:50.727819 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b\") " pod="openstack/ovsdbserver-sb-0" Jan 22 09:19:50 crc kubenswrapper[4811]: I0122 09:19:50.727852 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: 
\"d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b\") " pod="openstack/ovsdbserver-sb-0" Jan 22 09:19:50 crc kubenswrapper[4811]: I0122 09:19:50.727895 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b\") " pod="openstack/ovsdbserver-sb-0" Jan 22 09:19:50 crc kubenswrapper[4811]: I0122 09:19:50.727911 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnm77\" (UniqueName: \"kubernetes.io/projected/d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b-kube-api-access-gnm77\") pod \"ovsdbserver-sb-0\" (UID: \"d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b\") " pod="openstack/ovsdbserver-sb-0" Jan 22 09:19:50 crc kubenswrapper[4811]: I0122 09:19:50.729983 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-sb-0" Jan 22 09:19:50 crc kubenswrapper[4811]: I0122 09:19:50.731019 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b\") " pod="openstack/ovsdbserver-sb-0" Jan 22 09:19:50 crc kubenswrapper[4811]: I0122 09:19:50.731863 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b-config\") pod \"ovsdbserver-sb-0\" (UID: \"d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b\") " pod="openstack/ovsdbserver-sb-0" Jan 22 09:19:50 crc kubenswrapper[4811]: I0122 09:19:50.732257 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b\") " pod="openstack/ovsdbserver-sb-0" Jan 22 09:19:50 crc kubenswrapper[4811]: I0122 09:19:50.742747 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnm77\" (UniqueName: \"kubernetes.io/projected/d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b-kube-api-access-gnm77\") pod \"ovsdbserver-sb-0\" (UID: \"d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b\") " pod="openstack/ovsdbserver-sb-0" Jan 22 09:19:50 crc kubenswrapper[4811]: I0122 09:19:50.744309 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b\") " pod="openstack/ovsdbserver-sb-0" Jan 22 09:19:50 crc kubenswrapper[4811]: I0122 09:19:50.753336 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b\") " pod="openstack/ovsdbserver-sb-0" Jan 22 09:19:50 crc kubenswrapper[4811]: I0122 09:19:50.755820 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b\") " pod="openstack/ovsdbserver-sb-0" Jan 22 09:19:50 crc kubenswrapper[4811]: I0122 09:19:50.759741 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b\") " pod="openstack/ovsdbserver-sb-0" Jan 22 09:19:50 crc kubenswrapper[4811]: I0122 09:19:50.824269 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 22 09:20:01 crc kubenswrapper[4811]: E0122 09:20:01.150693 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 22 09:20:01 crc kubenswrapper[4811]: E0122 09:20:01.151099 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kpmnd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5f854695bc-9dnnl_openstack(5b583653-a583-41a2-a836-0c0783e7be54): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 09:20:01 crc kubenswrapper[4811]: E0122 
09:20:01.159669 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5f854695bc-9dnnl" podUID="5b583653-a583-41a2-a836-0c0783e7be54" Jan 22 09:20:01 crc kubenswrapper[4811]: E0122 09:20:01.166848 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 22 09:20:01 crc kubenswrapper[4811]: E0122 09:20:01.167021 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ggqfx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-744ffd65bc-b96wb_openstack(55f4ca47-d6d3-4ec2-9df3-2898ba04d543): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 09:20:01 crc kubenswrapper[4811]: E0122 09:20:01.168681 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-744ffd65bc-b96wb" podUID="55f4ca47-d6d3-4ec2-9df3-2898ba04d543" Jan 22 09:20:01 crc kubenswrapper[4811]: E0122 09:20:01.283700 4811 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 22 09:20:01 crc kubenswrapper[4811]: E0122 09:20:01.283809 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tzf4t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-95f5f6995-h2gf4_openstack(56f1c973-5c84-45ef-a32c-82b736796f2c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 09:20:01 crc kubenswrapper[4811]: E0122 09:20:01.286817 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-95f5f6995-h2gf4" podUID="56f1c973-5c84-45ef-a32c-82b736796f2c" Jan 22 09:20:01 crc kubenswrapper[4811]: E0122 09:20:01.288722 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 22 09:20:01 crc kubenswrapper[4811]: E0122 09:20:01.288850 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tfkjr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-84bb9d8bd9-n5xpq_openstack(7966b817-cc01-43ca-8024-0e35d83779d1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 09:20:01 crc kubenswrapper[4811]: E0122 09:20:01.291317 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-84bb9d8bd9-n5xpq" podUID="7966b817-cc01-43ca-8024-0e35d83779d1" Jan 22 09:20:01 crc kubenswrapper[4811]: I0122 09:20:01.767809 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 09:20:01 crc kubenswrapper[4811]: W0122 09:20:01.842786 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8d5030f_13fb_403c_9d6d_e9d87f27800f.slice/crio-5dbde7a33fa95447df89da812098341e0df99f6008d808f3350ad11ac16369ce WatchSource:0}: Error finding container 5dbde7a33fa95447df89da812098341e0df99f6008d808f3350ad11ac16369ce: Status 404 returned error can't find the container with id 5dbde7a33fa95447df89da812098341e0df99f6008d808f3350ad11ac16369ce Jan 22 09:20:01 crc kubenswrapper[4811]: I0122 09:20:01.905280 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nm2bb"] Jan 22 09:20:01 crc kubenswrapper[4811]: W0122 09:20:01.907364 4811 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29754ede_0901_4bbd_aa87_49a8e93050b9.slice/crio-dc615595f70af8bdeca7e0feaddb7d1d9fca52a33a5d34c2bb912ca3f8c73d2b WatchSource:0}: Error finding container dc615595f70af8bdeca7e0feaddb7d1d9fca52a33a5d34c2bb912ca3f8c73d2b: Status 404 returned error can't find the container with id dc615595f70af8bdeca7e0feaddb7d1d9fca52a33a5d34c2bb912ca3f8c73d2b Jan 22 09:20:01 crc kubenswrapper[4811]: I0122 09:20:01.928090 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e8d5030f-13fb-403c-9d6d-e9d87f27800f","Type":"ContainerStarted","Data":"5dbde7a33fa95447df89da812098341e0df99f6008d808f3350ad11ac16369ce"} Jan 22 09:20:01 crc kubenswrapper[4811]: I0122 09:20:01.929872 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"db2ecb97-db87-43bf-8ffb-7cbd7460ba19","Type":"ContainerStarted","Data":"ac83705ed9e0404e003e750a253a9f124161158390f11ddf1956c9b7ecea4041"} Jan 22 09:20:01 crc kubenswrapper[4811]: I0122 09:20:01.931514 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"02bd0635-dfd1-4e78-8fbf-57366ce83cdb","Type":"ContainerStarted","Data":"26b84548db6096f40845bb62a50b6a02f7ab61cc17701a992511db943f028a63"} Jan 22 09:20:01 crc kubenswrapper[4811]: I0122 09:20:01.933381 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"864b7037-2c34-48b6-b75d-38110d9816dc","Type":"ContainerStarted","Data":"8160a0f9d1e7bb7acdac79fca0bbba0196d56529b8cf427f79bfeb183f6641e6"} Jan 22 09:20:01 crc kubenswrapper[4811]: I0122 09:20:01.933494 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 22 09:20:01 crc kubenswrapper[4811]: I0122 09:20:01.934314 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nm2bb" event={"ID":"29754ede-0901-4bbd-aa87-49a8e93050b9","Type":"ContainerStarted","Data":"dc615595f70af8bdeca7e0feaddb7d1d9fca52a33a5d34c2bb912ca3f8c73d2b"} Jan 22 09:20:01 crc kubenswrapper[4811]: E0122 09:20:01.935709 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33\\\"\"" pod="openstack/dnsmasq-dns-744ffd65bc-b96wb" podUID="55f4ca47-d6d3-4ec2-9df3-2898ba04d543" Jan 22 09:20:01 crc kubenswrapper[4811]: E0122 09:20:01.936914 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33\\\"\"" pod="openstack/dnsmasq-dns-95f5f6995-h2gf4" podUID="56f1c973-5c84-45ef-a32c-82b736796f2c" Jan 22 09:20:02 crc kubenswrapper[4811]: I0122 09:20:02.046653 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.314168053 podStartE2EDuration="20.046614797s" podCreationTimestamp="2026-01-22 09:19:42 +0000 UTC" firstStartedPulling="2026-01-22 09:19:43.452207389 +0000 UTC m=+827.774394512" lastFinishedPulling="2026-01-22 09:20:01.184654133 +0000 UTC m=+845.506841256" observedRunningTime="2026-01-22 09:20:02.032484004 +0000 UTC m=+846.354671127" watchObservedRunningTime="2026-01-22 
09:20:02.046614797 +0000 UTC m=+846.368801920" Jan 22 09:20:02 crc kubenswrapper[4811]: I0122 09:20:02.100512 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 22 09:20:02 crc kubenswrapper[4811]: I0122 09:20:02.222381 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-kdqzr"] Jan 22 09:20:02 crc kubenswrapper[4811]: I0122 09:20:02.400328 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-n5xpq" Jan 22 09:20:02 crc kubenswrapper[4811]: I0122 09:20:02.404685 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-9dnnl" Jan 22 09:20:02 crc kubenswrapper[4811]: I0122 09:20:02.530148 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpmnd\" (UniqueName: \"kubernetes.io/projected/5b583653-a583-41a2-a836-0c0783e7be54-kube-api-access-kpmnd\") pod \"5b583653-a583-41a2-a836-0c0783e7be54\" (UID: \"5b583653-a583-41a2-a836-0c0783e7be54\") " Jan 22 09:20:02 crc kubenswrapper[4811]: I0122 09:20:02.530223 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b583653-a583-41a2-a836-0c0783e7be54-config\") pod \"5b583653-a583-41a2-a836-0c0783e7be54\" (UID: \"5b583653-a583-41a2-a836-0c0783e7be54\") " Jan 22 09:20:02 crc kubenswrapper[4811]: I0122 09:20:02.530245 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b583653-a583-41a2-a836-0c0783e7be54-dns-svc\") pod \"5b583653-a583-41a2-a836-0c0783e7be54\" (UID: \"5b583653-a583-41a2-a836-0c0783e7be54\") " Jan 22 09:20:02 crc kubenswrapper[4811]: I0122 09:20:02.530307 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfkjr\" (UniqueName: \"kubernetes.io/projected/7966b817-cc01-43ca-8024-0e35d83779d1-kube-api-access-tfkjr\") pod \"7966b817-cc01-43ca-8024-0e35d83779d1\" (UID: \"7966b817-cc01-43ca-8024-0e35d83779d1\") " Jan 22 09:20:02 crc kubenswrapper[4811]: I0122 09:20:02.530352 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7966b817-cc01-43ca-8024-0e35d83779d1-config\") pod \"7966b817-cc01-43ca-8024-0e35d83779d1\" (UID: \"7966b817-cc01-43ca-8024-0e35d83779d1\") " Jan 22 09:20:02 crc kubenswrapper[4811]: I0122 09:20:02.531033 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7966b817-cc01-43ca-8024-0e35d83779d1-config" (OuterVolumeSpecName: "config") pod "7966b817-cc01-43ca-8024-0e35d83779d1" (UID: "7966b817-cc01-43ca-8024-0e35d83779d1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:20:02 crc kubenswrapper[4811]: I0122 09:20:02.531862 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b583653-a583-41a2-a836-0c0783e7be54-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5b583653-a583-41a2-a836-0c0783e7be54" (UID: "5b583653-a583-41a2-a836-0c0783e7be54"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:20:02 crc kubenswrapper[4811]: I0122 09:20:02.532170 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b583653-a583-41a2-a836-0c0783e7be54-config" (OuterVolumeSpecName: "config") pod "5b583653-a583-41a2-a836-0c0783e7be54" (UID: "5b583653-a583-41a2-a836-0c0783e7be54"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:20:02 crc kubenswrapper[4811]: I0122 09:20:02.535070 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b583653-a583-41a2-a836-0c0783e7be54-kube-api-access-kpmnd" (OuterVolumeSpecName: "kube-api-access-kpmnd") pod "5b583653-a583-41a2-a836-0c0783e7be54" (UID: "5b583653-a583-41a2-a836-0c0783e7be54"). InnerVolumeSpecName "kube-api-access-kpmnd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:20:02 crc kubenswrapper[4811]: I0122 09:20:02.535279 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7966b817-cc01-43ca-8024-0e35d83779d1-kube-api-access-tfkjr" (OuterVolumeSpecName: "kube-api-access-tfkjr") pod "7966b817-cc01-43ca-8024-0e35d83779d1" (UID: "7966b817-cc01-43ca-8024-0e35d83779d1"). InnerVolumeSpecName "kube-api-access-tfkjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:20:02 crc kubenswrapper[4811]: I0122 09:20:02.632800 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b583653-a583-41a2-a836-0c0783e7be54-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:02 crc kubenswrapper[4811]: I0122 09:20:02.632838 4811 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b583653-a583-41a2-a836-0c0783e7be54-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:02 crc kubenswrapper[4811]: I0122 09:20:02.632848 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfkjr\" (UniqueName: \"kubernetes.io/projected/7966b817-cc01-43ca-8024-0e35d83779d1-kube-api-access-tfkjr\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:02 crc kubenswrapper[4811]: I0122 09:20:02.632873 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7966b817-cc01-43ca-8024-0e35d83779d1-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:02 crc kubenswrapper[4811]: I0122 09:20:02.632884 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpmnd\" (UniqueName: \"kubernetes.io/projected/5b583653-a583-41a2-a836-0c0783e7be54-kube-api-access-kpmnd\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:02 crc kubenswrapper[4811]: I0122 09:20:02.932471 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 22 09:20:02 crc kubenswrapper[4811]: I0122 09:20:02.940672 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kdqzr" event={"ID":"cbf54ce8-3114-43c1-a1ce-6a13dd41297a","Type":"ContainerStarted","Data":"1b95562371a040f694c773c049dd30636d4b1b7cdf2cc6a08692b21934f66a1f"} Jan 22 09:20:02 crc kubenswrapper[4811]: I0122 09:20:02.941847 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b","Type":"ContainerStarted","Data":"9a031f81ff5223a71ed0f610217119e20fca42e1aa5dd3d5874523c065073caa"} Jan 22 09:20:02 crc kubenswrapper[4811]: I0122 09:20:02.944558 
4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-9dnnl" event={"ID":"5b583653-a583-41a2-a836-0c0783e7be54","Type":"ContainerDied","Data":"06e8a17cecaec375945d784c45633298feec31eebb114d079eb4ee3e74dfb4a4"} Jan 22 09:20:02 crc kubenswrapper[4811]: I0122 09:20:02.944640 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-9dnnl" Jan 22 09:20:02 crc kubenswrapper[4811]: I0122 09:20:02.948214 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0241fc5c-fa26-44a1-9db3-006b438b9123","Type":"ContainerStarted","Data":"7a0909ca18cf916fd8bc5743542561fad1746c91028910b1139cffb297ae2099"} Jan 22 09:20:02 crc kubenswrapper[4811]: I0122 09:20:02.950140 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-n5xpq" Jan 22 09:20:02 crc kubenswrapper[4811]: I0122 09:20:02.950146 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-n5xpq" event={"ID":"7966b817-cc01-43ca-8024-0e35d83779d1","Type":"ContainerDied","Data":"7ec577e6807a4e9f00bbcb8846c8ee5041e68439673ab1da6b9dbdc3ceb32bf4"} Jan 22 09:20:02 crc kubenswrapper[4811]: I0122 09:20:02.951898 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9e61948b-1761-46b8-9ab3-e776224f335a","Type":"ContainerStarted","Data":"1e0f56dd8bb31f9c39584ee804cbf93abc9a85a0b2b1fa91f78ccc97ea695f74"} Jan 22 09:20:03 crc kubenswrapper[4811]: I0122 09:20:03.041011 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-n5xpq"] Jan 22 09:20:03 crc kubenswrapper[4811]: I0122 09:20:03.047805 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-n5xpq"] Jan 22 09:20:03 crc kubenswrapper[4811]: I0122 09:20:03.056062 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-9dnnl"] Jan 22 09:20:03 crc kubenswrapper[4811]: I0122 09:20:03.060352 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-9dnnl"] Jan 22 09:20:03 crc kubenswrapper[4811]: I0122 09:20:03.958731 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4344c0dd-b6d6-4448-b943-0e036ee2098b","Type":"ContainerStarted","Data":"44833c40ee4cc774cfcc6dcc345032250b9f009e9742e6478f9330f24b0b8b00"} Jan 22 09:20:03 crc kubenswrapper[4811]: I0122 09:20:03.960751 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e8d5030f-13fb-403c-9d6d-e9d87f27800f","Type":"ContainerStarted","Data":"224cfb4359b41b0f0bd2110a732e39b7aed6ee95383fdf01c01a1c6ef1567804"} Jan 22 09:20:03 crc kubenswrapper[4811]: I0122 09:20:03.961012 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 22 09:20:03 crc kubenswrapper[4811]: I0122 09:20:03.972540 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=18.138917001 podStartE2EDuration="19.972526256s" podCreationTimestamp="2026-01-22 09:19:44 +0000 UTC" firstStartedPulling="2026-01-22 09:20:01.844195246 +0000 UTC m=+846.166382369" lastFinishedPulling="2026-01-22 09:20:03.6778045 +0000 UTC m=+847.999991624" observedRunningTime="2026-01-22 09:20:03.969439928 +0000 UTC m=+848.291627051" 
watchObservedRunningTime="2026-01-22 09:20:03.972526256 +0000 UTC m=+848.294713379" Jan 22 09:20:04 crc kubenswrapper[4811]: I0122 09:20:04.009333 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b583653-a583-41a2-a836-0c0783e7be54" path="/var/lib/kubelet/pods/5b583653-a583-41a2-a836-0c0783e7be54/volumes" Jan 22 09:20:04 crc kubenswrapper[4811]: I0122 09:20:04.009716 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7966b817-cc01-43ca-8024-0e35d83779d1" path="/var/lib/kubelet/pods/7966b817-cc01-43ca-8024-0e35d83779d1/volumes" Jan 22 09:20:04 crc kubenswrapper[4811]: E0122 09:20:04.800535 4811 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb2ecb97_db87_43bf_8ffb_7cbd7460ba19.slice/crio-conmon-ac83705ed9e0404e003e750a253a9f124161158390f11ddf1956c9b7ecea4041.scope\": RecentStats: unable to find data in memory cache]" Jan 22 09:20:04 crc kubenswrapper[4811]: I0122 09:20:04.970156 4811 generic.go:334] "Generic (PLEG): container finished" podID="02bd0635-dfd1-4e78-8fbf-57366ce83cdb" containerID="26b84548db6096f40845bb62a50b6a02f7ab61cc17701a992511db943f028a63" exitCode=0 Jan 22 09:20:04 crc kubenswrapper[4811]: I0122 09:20:04.970246 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"02bd0635-dfd1-4e78-8fbf-57366ce83cdb","Type":"ContainerDied","Data":"26b84548db6096f40845bb62a50b6a02f7ab61cc17701a992511db943f028a63"} Jan 22 09:20:04 crc kubenswrapper[4811]: I0122 09:20:04.972689 4811 generic.go:334] "Generic (PLEG): container finished" podID="db2ecb97-db87-43bf-8ffb-7cbd7460ba19" containerID="ac83705ed9e0404e003e750a253a9f124161158390f11ddf1956c9b7ecea4041" exitCode=0 Jan 22 09:20:04 crc kubenswrapper[4811]: I0122 09:20:04.972912 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"db2ecb97-db87-43bf-8ffb-7cbd7460ba19","Type":"ContainerDied","Data":"ac83705ed9e0404e003e750a253a9f124161158390f11ddf1956c9b7ecea4041"} Jan 22 09:20:07 crc kubenswrapper[4811]: I0122 09:20:07.000799 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"db2ecb97-db87-43bf-8ffb-7cbd7460ba19","Type":"ContainerStarted","Data":"30fa2527fa4b878b9935e574706599290bcb7c489cea6afc6f03406d7582c6c4"} Jan 22 09:20:07 crc kubenswrapper[4811]: I0122 09:20:07.003113 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4344c0dd-b6d6-4448-b943-0e036ee2098b","Type":"ContainerStarted","Data":"5c7b2e959ef497ced91bddceb4f4210f6a34a1bb56ef868262405ddf850cd41f"} Jan 22 09:20:07 crc kubenswrapper[4811]: I0122 09:20:07.005447 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"02bd0635-dfd1-4e78-8fbf-57366ce83cdb","Type":"ContainerStarted","Data":"8ffb26c0989033c6125af5def6dd64f396625f1c230839802bde616f61cca7df"} Jan 22 09:20:07 crc kubenswrapper[4811]: I0122 09:20:07.007703 4811 generic.go:334] "Generic (PLEG): container finished" podID="cbf54ce8-3114-43c1-a1ce-6a13dd41297a" containerID="e71ceb5d4cd4c9dfeb78410b7fc173484d87cc1cc04e3227608f9d5b39b8aeec" exitCode=0 Jan 22 09:20:07 crc kubenswrapper[4811]: I0122 09:20:07.007754 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kdqzr" 
event={"ID":"cbf54ce8-3114-43c1-a1ce-6a13dd41297a","Type":"ContainerDied","Data":"e71ceb5d4cd4c9dfeb78410b7fc173484d87cc1cc04e3227608f9d5b39b8aeec"} Jan 22 09:20:07 crc kubenswrapper[4811]: I0122 09:20:07.008943 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b","Type":"ContainerStarted","Data":"006d48c876f6df7cd540aad6791e3a7f658db887c17a7d4c3992c2af946c2238"} Jan 22 09:20:07 crc kubenswrapper[4811]: I0122 09:20:07.010300 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nm2bb" event={"ID":"29754ede-0901-4bbd-aa87-49a8e93050b9","Type":"ContainerStarted","Data":"d2a47097233120433ef3b1c53700cbca3f2f3cd299daba65367665b46c429674"} Jan 22 09:20:07 crc kubenswrapper[4811]: I0122 09:20:07.010445 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-nm2bb" Jan 22 09:20:07 crc kubenswrapper[4811]: I0122 09:20:07.036230 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=8.222747722 podStartE2EDuration="26.036215352s" podCreationTimestamp="2026-01-22 09:19:41 +0000 UTC" firstStartedPulling="2026-01-22 09:19:43.532444956 +0000 UTC m=+827.854632079" lastFinishedPulling="2026-01-22 09:20:01.345912586 +0000 UTC m=+845.668099709" observedRunningTime="2026-01-22 09:20:07.019931108 +0000 UTC m=+851.342118231" watchObservedRunningTime="2026-01-22 09:20:07.036215352 +0000 UTC m=+851.358402475" Jan 22 09:20:07 crc kubenswrapper[4811]: I0122 09:20:07.054494 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=8.522260635 podStartE2EDuration="28.054482234s" podCreationTimestamp="2026-01-22 09:19:39 +0000 UTC" firstStartedPulling="2026-01-22 09:19:41.657499218 +0000 UTC m=+825.979686341" lastFinishedPulling="2026-01-22 09:20:01.189720817 +0000 UTC m=+845.511907940" observedRunningTime="2026-01-22 09:20:07.04822673 +0000 UTC m=+851.370413852" watchObservedRunningTime="2026-01-22 09:20:07.054482234 +0000 UTC m=+851.376669357" Jan 22 09:20:07 crc kubenswrapper[4811]: I0122 09:20:07.077010 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-nm2bb" podStartSLOduration=15.140686056 podStartE2EDuration="19.076993569s" podCreationTimestamp="2026-01-22 09:19:48 +0000 UTC" firstStartedPulling="2026-01-22 09:20:01.90921467 +0000 UTC m=+846.231401794" lastFinishedPulling="2026-01-22 09:20:05.845522184 +0000 UTC m=+850.167709307" observedRunningTime="2026-01-22 09:20:07.073764922 +0000 UTC m=+851.395952045" watchObservedRunningTime="2026-01-22 09:20:07.076993569 +0000 UTC m=+851.399180692" Jan 22 09:20:07 crc kubenswrapper[4811]: I0122 09:20:07.763951 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 22 09:20:08 crc kubenswrapper[4811]: I0122 09:20:08.017438 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kdqzr" event={"ID":"cbf54ce8-3114-43c1-a1ce-6a13dd41297a","Type":"ContainerStarted","Data":"79c226bf08bbdec4532062aca3fcab5072272ef7e753a685ed04175fcbc496f3"} Jan 22 09:20:08 crc kubenswrapper[4811]: I0122 09:20:08.017482 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kdqzr" 
event={"ID":"cbf54ce8-3114-43c1-a1ce-6a13dd41297a","Type":"ContainerStarted","Data":"54448faa600603017d5b4c17c66882a06603ab8e5fb8cc499c7236b1d93ede25"} Jan 22 09:20:08 crc kubenswrapper[4811]: I0122 09:20:08.017850 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-kdqzr" Jan 22 09:20:08 crc kubenswrapper[4811]: I0122 09:20:08.033033 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-kdqzr" podStartSLOduration=16.434555001 podStartE2EDuration="20.033007701s" podCreationTimestamp="2026-01-22 09:19:48 +0000 UTC" firstStartedPulling="2026-01-22 09:20:02.23221716 +0000 UTC m=+846.554404283" lastFinishedPulling="2026-01-22 09:20:05.830669859 +0000 UTC m=+850.152856983" observedRunningTime="2026-01-22 09:20:08.030583841 +0000 UTC m=+852.352770963" watchObservedRunningTime="2026-01-22 09:20:08.033007701 +0000 UTC m=+852.355194824" Jan 22 09:20:08 crc kubenswrapper[4811]: I0122 09:20:08.564856 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-kdqzr" Jan 22 09:20:11 crc kubenswrapper[4811]: I0122 09:20:11.036521 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 22 09:20:11 crc kubenswrapper[4811]: I0122 09:20:11.036910 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 22 09:20:11 crc kubenswrapper[4811]: I0122 09:20:11.039476 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4344c0dd-b6d6-4448-b943-0e036ee2098b","Type":"ContainerStarted","Data":"0bac45b35b6469939f475fa54981ab7a5483cade94207ac6de5ff95bc7f6dbc7"} Jan 22 09:20:11 crc kubenswrapper[4811]: I0122 09:20:11.041437 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b","Type":"ContainerStarted","Data":"7af5197726312e738851a0e2e22f0fe78d9008da090382975e5d4a014ed9cf8b"} Jan 22 09:20:11 crc kubenswrapper[4811]: I0122 09:20:11.058001 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=17.166282864 podStartE2EDuration="24.057971148s" podCreationTimestamp="2026-01-22 09:19:47 +0000 UTC" firstStartedPulling="2026-01-22 09:20:03.005530991 +0000 UTC m=+847.327718114" lastFinishedPulling="2026-01-22 09:20:09.897219275 +0000 UTC m=+854.219406398" observedRunningTime="2026-01-22 09:20:11.057082432 +0000 UTC m=+855.379269556" watchObservedRunningTime="2026-01-22 09:20:11.057971148 +0000 UTC m=+855.380158271" Jan 22 09:20:11 crc kubenswrapper[4811]: I0122 09:20:11.072336 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=14.287823562 podStartE2EDuration="22.07232356s" podCreationTimestamp="2026-01-22 09:19:49 +0000 UTC" firstStartedPulling="2026-01-22 09:20:02.108958015 +0000 UTC m=+846.431145138" lastFinishedPulling="2026-01-22 09:20:09.893458013 +0000 UTC m=+854.215645136" observedRunningTime="2026-01-22 09:20:11.069546755 +0000 UTC m=+855.391733877" watchObservedRunningTime="2026-01-22 09:20:11.07232356 +0000 UTC m=+855.394510683" Jan 22 09:20:11 crc kubenswrapper[4811]: I0122 09:20:11.100487 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 22 09:20:11 crc kubenswrapper[4811]: I0122 09:20:11.824884 4811 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 22 09:20:11 crc kubenswrapper[4811]: I0122 09:20:11.879545 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.046196 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.070450 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.104032 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.274567 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-b96wb"] Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.287431 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-794868bd45-25t27"] Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.303971 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-794868bd45-25t27" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.305792 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.306440 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-kz6sk"] Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.307414 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-kz6sk" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.310670 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-794868bd45-25t27"] Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.315817 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.316037 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-kz6sk"] Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.389858 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.390206 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.481833 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-bmklk"] Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.482588 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-bmklk" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.487961 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dddbae5c-b9c5-44f9-9236-28c880f47a53-ovsdbserver-sb\") pod \"dnsmasq-dns-794868bd45-25t27\" (UID: \"dddbae5c-b9c5-44f9-9236-28c880f47a53\") " pod="openstack/dnsmasq-dns-794868bd45-25t27" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.488013 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/374f91f7-413d-4830-afc1-0d75c2946fc3-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-kz6sk\" (UID: \"374f91f7-413d-4830-afc1-0d75c2946fc3\") " pod="openstack/ovn-controller-metrics-kz6sk" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.488049 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/374f91f7-413d-4830-afc1-0d75c2946fc3-ovs-rundir\") pod \"ovn-controller-metrics-kz6sk\" (UID: \"374f91f7-413d-4830-afc1-0d75c2946fc3\") " pod="openstack/ovn-controller-metrics-kz6sk" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.488080 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpvgt\" (UniqueName: \"kubernetes.io/projected/374f91f7-413d-4830-afc1-0d75c2946fc3-kube-api-access-hpvgt\") pod \"ovn-controller-metrics-kz6sk\" (UID: \"374f91f7-413d-4830-afc1-0d75c2946fc3\") " pod="openstack/ovn-controller-metrics-kz6sk" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.488107 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/374f91f7-413d-4830-afc1-0d75c2946fc3-combined-ca-bundle\") pod \"ovn-controller-metrics-kz6sk\" (UID: \"374f91f7-413d-4830-afc1-0d75c2946fc3\") " pod="openstack/ovn-controller-metrics-kz6sk" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.488156 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dddbae5c-b9c5-44f9-9236-28c880f47a53-config\") pod \"dnsmasq-dns-794868bd45-25t27\" (UID: \"dddbae5c-b9c5-44f9-9236-28c880f47a53\") " pod="openstack/dnsmasq-dns-794868bd45-25t27" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.488171 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fx8w\" (UniqueName: \"kubernetes.io/projected/dddbae5c-b9c5-44f9-9236-28c880f47a53-kube-api-access-7fx8w\") pod \"dnsmasq-dns-794868bd45-25t27\" (UID: \"dddbae5c-b9c5-44f9-9236-28c880f47a53\") " pod="openstack/dnsmasq-dns-794868bd45-25t27" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.488206 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/374f91f7-413d-4830-afc1-0d75c2946fc3-config\") pod \"ovn-controller-metrics-kz6sk\" (UID: \"374f91f7-413d-4830-afc1-0d75c2946fc3\") " pod="openstack/ovn-controller-metrics-kz6sk" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.488235 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/dddbae5c-b9c5-44f9-9236-28c880f47a53-dns-svc\") pod \"dnsmasq-dns-794868bd45-25t27\" (UID: \"dddbae5c-b9c5-44f9-9236-28c880f47a53\") " pod="openstack/dnsmasq-dns-794868bd45-25t27" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.488286 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/374f91f7-413d-4830-afc1-0d75c2946fc3-ovn-rundir\") pod \"ovn-controller-metrics-kz6sk\" (UID: \"374f91f7-413d-4830-afc1-0d75c2946fc3\") " pod="openstack/ovn-controller-metrics-kz6sk" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.498784 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-bmklk"] Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.532717 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-9e0d-account-create-update-8vx5t"] Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.533589 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-9e0d-account-create-update-8vx5t" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.535512 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.539378 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-9e0d-account-create-update-8vx5t"] Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.594250 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/374f91f7-413d-4830-afc1-0d75c2946fc3-ovn-rundir\") pod \"ovn-controller-metrics-kz6sk\" (UID: \"374f91f7-413d-4830-afc1-0d75c2946fc3\") " pod="openstack/ovn-controller-metrics-kz6sk" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.595301 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/374f91f7-413d-4830-afc1-0d75c2946fc3-ovn-rundir\") pod \"ovn-controller-metrics-kz6sk\" (UID: \"374f91f7-413d-4830-afc1-0d75c2946fc3\") " pod="openstack/ovn-controller-metrics-kz6sk" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.595356 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00f9806b-10cf-424e-bf6e-9e2e3e66833a-operator-scripts\") pod \"keystone-db-create-bmklk\" (UID: \"00f9806b-10cf-424e-bf6e-9e2e3e66833a\") " pod="openstack/keystone-db-create-bmklk" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.595417 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dddbae5c-b9c5-44f9-9236-28c880f47a53-ovsdbserver-sb\") pod \"dnsmasq-dns-794868bd45-25t27\" (UID: \"dddbae5c-b9c5-44f9-9236-28c880f47a53\") " pod="openstack/dnsmasq-dns-794868bd45-25t27" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.595460 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/374f91f7-413d-4830-afc1-0d75c2946fc3-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-kz6sk\" (UID: \"374f91f7-413d-4830-afc1-0d75c2946fc3\") " pod="openstack/ovn-controller-metrics-kz6sk" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.595479 4811 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/374f91f7-413d-4830-afc1-0d75c2946fc3-ovs-rundir\") pod \"ovn-controller-metrics-kz6sk\" (UID: \"374f91f7-413d-4830-afc1-0d75c2946fc3\") " pod="openstack/ovn-controller-metrics-kz6sk" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.595507 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpvgt\" (UniqueName: \"kubernetes.io/projected/374f91f7-413d-4830-afc1-0d75c2946fc3-kube-api-access-hpvgt\") pod \"ovn-controller-metrics-kz6sk\" (UID: \"374f91f7-413d-4830-afc1-0d75c2946fc3\") " pod="openstack/ovn-controller-metrics-kz6sk" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.595530 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/374f91f7-413d-4830-afc1-0d75c2946fc3-combined-ca-bundle\") pod \"ovn-controller-metrics-kz6sk\" (UID: \"374f91f7-413d-4830-afc1-0d75c2946fc3\") " pod="openstack/ovn-controller-metrics-kz6sk" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.595574 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dddbae5c-b9c5-44f9-9236-28c880f47a53-config\") pod \"dnsmasq-dns-794868bd45-25t27\" (UID: \"dddbae5c-b9c5-44f9-9236-28c880f47a53\") " pod="openstack/dnsmasq-dns-794868bd45-25t27" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.595597 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fx8w\" (UniqueName: \"kubernetes.io/projected/dddbae5c-b9c5-44f9-9236-28c880f47a53-kube-api-access-7fx8w\") pod \"dnsmasq-dns-794868bd45-25t27\" (UID: \"dddbae5c-b9c5-44f9-9236-28c880f47a53\") " pod="openstack/dnsmasq-dns-794868bd45-25t27" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.595615 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpbm6\" (UniqueName: \"kubernetes.io/projected/00f9806b-10cf-424e-bf6e-9e2e3e66833a-kube-api-access-bpbm6\") pod \"keystone-db-create-bmklk\" (UID: \"00f9806b-10cf-424e-bf6e-9e2e3e66833a\") " pod="openstack/keystone-db-create-bmklk" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.595665 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/374f91f7-413d-4830-afc1-0d75c2946fc3-config\") pod \"ovn-controller-metrics-kz6sk\" (UID: \"374f91f7-413d-4830-afc1-0d75c2946fc3\") " pod="openstack/ovn-controller-metrics-kz6sk" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.595694 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dddbae5c-b9c5-44f9-9236-28c880f47a53-dns-svc\") pod \"dnsmasq-dns-794868bd45-25t27\" (UID: \"dddbae5c-b9c5-44f9-9236-28c880f47a53\") " pod="openstack/dnsmasq-dns-794868bd45-25t27" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.597808 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dddbae5c-b9c5-44f9-9236-28c880f47a53-config\") pod \"dnsmasq-dns-794868bd45-25t27\" (UID: \"dddbae5c-b9c5-44f9-9236-28c880f47a53\") " pod="openstack/dnsmasq-dns-794868bd45-25t27" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.597816 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/374f91f7-413d-4830-afc1-0d75c2946fc3-config\") pod \"ovn-controller-metrics-kz6sk\" (UID: \"374f91f7-413d-4830-afc1-0d75c2946fc3\") " pod="openstack/ovn-controller-metrics-kz6sk" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.598079 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/374f91f7-413d-4830-afc1-0d75c2946fc3-ovs-rundir\") pod \"ovn-controller-metrics-kz6sk\" (UID: \"374f91f7-413d-4830-afc1-0d75c2946fc3\") " pod="openstack/ovn-controller-metrics-kz6sk" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.598196 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dddbae5c-b9c5-44f9-9236-28c880f47a53-ovsdbserver-sb\") pod \"dnsmasq-dns-794868bd45-25t27\" (UID: \"dddbae5c-b9c5-44f9-9236-28c880f47a53\") " pod="openstack/dnsmasq-dns-794868bd45-25t27" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.598455 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dddbae5c-b9c5-44f9-9236-28c880f47a53-dns-svc\") pod \"dnsmasq-dns-794868bd45-25t27\" (UID: \"dddbae5c-b9c5-44f9-9236-28c880f47a53\") " pod="openstack/dnsmasq-dns-794868bd45-25t27" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.604464 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/374f91f7-413d-4830-afc1-0d75c2946fc3-combined-ca-bundle\") pod \"ovn-controller-metrics-kz6sk\" (UID: \"374f91f7-413d-4830-afc1-0d75c2946fc3\") " pod="openstack/ovn-controller-metrics-kz6sk" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.604939 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/374f91f7-413d-4830-afc1-0d75c2946fc3-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-kz6sk\" (UID: \"374f91f7-413d-4830-afc1-0d75c2946fc3\") " pod="openstack/ovn-controller-metrics-kz6sk" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.618347 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fx8w\" (UniqueName: \"kubernetes.io/projected/dddbae5c-b9c5-44f9-9236-28c880f47a53-kube-api-access-7fx8w\") pod \"dnsmasq-dns-794868bd45-25t27\" (UID: \"dddbae5c-b9c5-44f9-9236-28c880f47a53\") " pod="openstack/dnsmasq-dns-794868bd45-25t27" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.621313 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpvgt\" (UniqueName: \"kubernetes.io/projected/374f91f7-413d-4830-afc1-0d75c2946fc3-kube-api-access-hpvgt\") pod \"ovn-controller-metrics-kz6sk\" (UID: \"374f91f7-413d-4830-afc1-0d75c2946fc3\") " pod="openstack/ovn-controller-metrics-kz6sk" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.625804 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.627226 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-794868bd45-25t27" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.637713 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-kz6sk" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.665942 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-h2gf4"] Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.674863 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.680205 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-bllfc"] Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.681309 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757dc6fff9-bllfc" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.685121 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.696799 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxgq2\" (UniqueName: \"kubernetes.io/projected/3d4fef27-55fc-4467-91f6-a89ecbae6198-kube-api-access-sxgq2\") pod \"keystone-9e0d-account-create-update-8vx5t\" (UID: \"3d4fef27-55fc-4467-91f6-a89ecbae6198\") " pod="openstack/keystone-9e0d-account-create-update-8vx5t" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.696833 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11f33cf8-5f50-416a-a6f0-f6647a331d40-ovsdbserver-nb\") pod \"dnsmasq-dns-757dc6fff9-bllfc\" (UID: \"11f33cf8-5f50-416a-a6f0-f6647a331d40\") " pod="openstack/dnsmasq-dns-757dc6fff9-bllfc" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.696859 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11f33cf8-5f50-416a-a6f0-f6647a331d40-dns-svc\") pod \"dnsmasq-dns-757dc6fff9-bllfc\" (UID: \"11f33cf8-5f50-416a-a6f0-f6647a331d40\") " pod="openstack/dnsmasq-dns-757dc6fff9-bllfc" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.696879 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11f33cf8-5f50-416a-a6f0-f6647a331d40-config\") pod \"dnsmasq-dns-757dc6fff9-bllfc\" (UID: \"11f33cf8-5f50-416a-a6f0-f6647a331d40\") " pod="openstack/dnsmasq-dns-757dc6fff9-bllfc" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.696927 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpbm6\" (UniqueName: \"kubernetes.io/projected/00f9806b-10cf-424e-bf6e-9e2e3e66833a-kube-api-access-bpbm6\") pod \"keystone-db-create-bmklk\" (UID: \"00f9806b-10cf-424e-bf6e-9e2e3e66833a\") " pod="openstack/keystone-db-create-bmklk" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.696984 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d4fef27-55fc-4467-91f6-a89ecbae6198-operator-scripts\") pod \"keystone-9e0d-account-create-update-8vx5t\" (UID: \"3d4fef27-55fc-4467-91f6-a89ecbae6198\") " pod="openstack/keystone-9e0d-account-create-update-8vx5t" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.697026 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/00f9806b-10cf-424e-bf6e-9e2e3e66833a-operator-scripts\") pod \"keystone-db-create-bmklk\" (UID: \"00f9806b-10cf-424e-bf6e-9e2e3e66833a\") " pod="openstack/keystone-db-create-bmklk" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.697047 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11f33cf8-5f50-416a-a6f0-f6647a331d40-ovsdbserver-sb\") pod \"dnsmasq-dns-757dc6fff9-bllfc\" (UID: \"11f33cf8-5f50-416a-a6f0-f6647a331d40\") " pod="openstack/dnsmasq-dns-757dc6fff9-bllfc" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.697064 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm9bw\" (UniqueName: \"kubernetes.io/projected/11f33cf8-5f50-416a-a6f0-f6647a331d40-kube-api-access-pm9bw\") pod \"dnsmasq-dns-757dc6fff9-bllfc\" (UID: \"11f33cf8-5f50-416a-a6f0-f6647a331d40\") " pod="openstack/dnsmasq-dns-757dc6fff9-bllfc" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.697662 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00f9806b-10cf-424e-bf6e-9e2e3e66833a-operator-scripts\") pod \"keystone-db-create-bmklk\" (UID: \"00f9806b-10cf-424e-bf6e-9e2e3e66833a\") " pod="openstack/keystone-db-create-bmklk" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.698366 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-b96wb" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.713549 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-69hvx"] Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.714609 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-69hvx" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.723876 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-bllfc"] Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.741673 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-69hvx"] Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.742712 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpbm6\" (UniqueName: \"kubernetes.io/projected/00f9806b-10cf-424e-bf6e-9e2e3e66833a-kube-api-access-bpbm6\") pod \"keystone-db-create-bmklk\" (UID: \"00f9806b-10cf-424e-bf6e-9e2e3e66833a\") " pod="openstack/keystone-db-create-bmklk" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.799104 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11f33cf8-5f50-416a-a6f0-f6647a331d40-ovsdbserver-sb\") pod \"dnsmasq-dns-757dc6fff9-bllfc\" (UID: \"11f33cf8-5f50-416a-a6f0-f6647a331d40\") " pod="openstack/dnsmasq-dns-757dc6fff9-bllfc" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.799401 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm9bw\" (UniqueName: \"kubernetes.io/projected/11f33cf8-5f50-416a-a6f0-f6647a331d40-kube-api-access-pm9bw\") pod \"dnsmasq-dns-757dc6fff9-bllfc\" (UID: \"11f33cf8-5f50-416a-a6f0-f6647a331d40\") " pod="openstack/dnsmasq-dns-757dc6fff9-bllfc" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.799532 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxgq2\" (UniqueName: \"kubernetes.io/projected/3d4fef27-55fc-4467-91f6-a89ecbae6198-kube-api-access-sxgq2\") pod \"keystone-9e0d-account-create-update-8vx5t\" (UID: \"3d4fef27-55fc-4467-91f6-a89ecbae6198\") " pod="openstack/keystone-9e0d-account-create-update-8vx5t" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.799561 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11f33cf8-5f50-416a-a6f0-f6647a331d40-ovsdbserver-nb\") pod \"dnsmasq-dns-757dc6fff9-bllfc\" (UID: \"11f33cf8-5f50-416a-a6f0-f6647a331d40\") " pod="openstack/dnsmasq-dns-757dc6fff9-bllfc" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.799581 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11f33cf8-5f50-416a-a6f0-f6647a331d40-dns-svc\") pod \"dnsmasq-dns-757dc6fff9-bllfc\" (UID: \"11f33cf8-5f50-416a-a6f0-f6647a331d40\") " pod="openstack/dnsmasq-dns-757dc6fff9-bllfc" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.799608 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11f33cf8-5f50-416a-a6f0-f6647a331d40-config\") pod \"dnsmasq-dns-757dc6fff9-bllfc\" (UID: \"11f33cf8-5f50-416a-a6f0-f6647a331d40\") " pod="openstack/dnsmasq-dns-757dc6fff9-bllfc" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.799750 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d4fef27-55fc-4467-91f6-a89ecbae6198-operator-scripts\") pod \"keystone-9e0d-account-create-update-8vx5t\" (UID: \"3d4fef27-55fc-4467-91f6-a89ecbae6198\") " 
pod="openstack/keystone-9e0d-account-create-update-8vx5t" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.800369 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11f33cf8-5f50-416a-a6f0-f6647a331d40-ovsdbserver-nb\") pod \"dnsmasq-dns-757dc6fff9-bllfc\" (UID: \"11f33cf8-5f50-416a-a6f0-f6647a331d40\") " pod="openstack/dnsmasq-dns-757dc6fff9-bllfc" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.800413 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d4fef27-55fc-4467-91f6-a89ecbae6198-operator-scripts\") pod \"keystone-9e0d-account-create-update-8vx5t\" (UID: \"3d4fef27-55fc-4467-91f6-a89ecbae6198\") " pod="openstack/keystone-9e0d-account-create-update-8vx5t" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.800450 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11f33cf8-5f50-416a-a6f0-f6647a331d40-dns-svc\") pod \"dnsmasq-dns-757dc6fff9-bllfc\" (UID: \"11f33cf8-5f50-416a-a6f0-f6647a331d40\") " pod="openstack/dnsmasq-dns-757dc6fff9-bllfc" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.800576 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11f33cf8-5f50-416a-a6f0-f6647a331d40-config\") pod \"dnsmasq-dns-757dc6fff9-bllfc\" (UID: \"11f33cf8-5f50-416a-a6f0-f6647a331d40\") " pod="openstack/dnsmasq-dns-757dc6fff9-bllfc" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.800801 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11f33cf8-5f50-416a-a6f0-f6647a331d40-ovsdbserver-sb\") pod \"dnsmasq-dns-757dc6fff9-bllfc\" (UID: \"11f33cf8-5f50-416a-a6f0-f6647a331d40\") " pod="openstack/dnsmasq-dns-757dc6fff9-bllfc" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.809866 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-bmklk" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.816949 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxgq2\" (UniqueName: \"kubernetes.io/projected/3d4fef27-55fc-4467-91f6-a89ecbae6198-kube-api-access-sxgq2\") pod \"keystone-9e0d-account-create-update-8vx5t\" (UID: \"3d4fef27-55fc-4467-91f6-a89ecbae6198\") " pod="openstack/keystone-9e0d-account-create-update-8vx5t" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.819115 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm9bw\" (UniqueName: \"kubernetes.io/projected/11f33cf8-5f50-416a-a6f0-f6647a331d40-kube-api-access-pm9bw\") pod \"dnsmasq-dns-757dc6fff9-bllfc\" (UID: \"11f33cf8-5f50-416a-a6f0-f6647a331d40\") " pod="openstack/dnsmasq-dns-757dc6fff9-bllfc" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.864577 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.868889 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-9e0d-account-create-update-8vx5t" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.905003 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55f4ca47-d6d3-4ec2-9df3-2898ba04d543-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "55f4ca47-d6d3-4ec2-9df3-2898ba04d543" (UID: "55f4ca47-d6d3-4ec2-9df3-2898ba04d543"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.905188 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55f4ca47-d6d3-4ec2-9df3-2898ba04d543-dns-svc\") pod \"55f4ca47-d6d3-4ec2-9df3-2898ba04d543\" (UID: \"55f4ca47-d6d3-4ec2-9df3-2898ba04d543\") " Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.905427 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55f4ca47-d6d3-4ec2-9df3-2898ba04d543-config\") pod \"55f4ca47-d6d3-4ec2-9df3-2898ba04d543\" (UID: \"55f4ca47-d6d3-4ec2-9df3-2898ba04d543\") " Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.905549 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggqfx\" (UniqueName: \"kubernetes.io/projected/55f4ca47-d6d3-4ec2-9df3-2898ba04d543-kube-api-access-ggqfx\") pod \"55f4ca47-d6d3-4ec2-9df3-2898ba04d543\" (UID: \"55f4ca47-d6d3-4ec2-9df3-2898ba04d543\") " Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.907451 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55f4ca47-d6d3-4ec2-9df3-2898ba04d543-config" (OuterVolumeSpecName: "config") pod "55f4ca47-d6d3-4ec2-9df3-2898ba04d543" (UID: "55f4ca47-d6d3-4ec2-9df3-2898ba04d543"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.909488 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtkbd\" (UniqueName: \"kubernetes.io/projected/b58a74ba-a371-4f9b-a3e2-ca7a946895af-kube-api-access-xtkbd\") pod \"placement-db-create-69hvx\" (UID: \"b58a74ba-a371-4f9b-a3e2-ca7a946895af\") " pod="openstack/placement-db-create-69hvx" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.909610 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b58a74ba-a371-4f9b-a3e2-ca7a946895af-operator-scripts\") pod \"placement-db-create-69hvx\" (UID: \"b58a74ba-a371-4f9b-a3e2-ca7a946895af\") " pod="openstack/placement-db-create-69hvx" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.909813 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55f4ca47-d6d3-4ec2-9df3-2898ba04d543-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.909826 4811 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55f4ca47-d6d3-4ec2-9df3-2898ba04d543-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.915769 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55f4ca47-d6d3-4ec2-9df3-2898ba04d543-kube-api-access-ggqfx" (OuterVolumeSpecName: "kube-api-access-ggqfx") pod "55f4ca47-d6d3-4ec2-9df3-2898ba04d543" (UID: "55f4ca47-d6d3-4ec2-9df3-2898ba04d543"). InnerVolumeSpecName "kube-api-access-ggqfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.935293 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-9b2c-account-create-update-sgwxp"] Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.939561 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9b2c-account-create-update-sgwxp" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.941151 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 22 09:20:12 crc kubenswrapper[4811]: I0122 09:20:12.944681 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9b2c-account-create-update-sgwxp"] Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.007870 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757dc6fff9-bllfc" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.015207 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5af3fb6-92bb-4f1d-9b7c-d3e4dc44ab35-operator-scripts\") pod \"placement-9b2c-account-create-update-sgwxp\" (UID: \"d5af3fb6-92bb-4f1d-9b7c-d3e4dc44ab35\") " pod="openstack/placement-9b2c-account-create-update-sgwxp" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.015274 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtkbd\" (UniqueName: \"kubernetes.io/projected/b58a74ba-a371-4f9b-a3e2-ca7a946895af-kube-api-access-xtkbd\") pod \"placement-db-create-69hvx\" (UID: \"b58a74ba-a371-4f9b-a3e2-ca7a946895af\") " pod="openstack/placement-db-create-69hvx" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.015314 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkhrd\" (UniqueName: \"kubernetes.io/projected/d5af3fb6-92bb-4f1d-9b7c-d3e4dc44ab35-kube-api-access-vkhrd\") pod \"placement-9b2c-account-create-update-sgwxp\" (UID: \"d5af3fb6-92bb-4f1d-9b7c-d3e4dc44ab35\") " pod="openstack/placement-9b2c-account-create-update-sgwxp" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.015377 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b58a74ba-a371-4f9b-a3e2-ca7a946895af-operator-scripts\") pod \"placement-db-create-69hvx\" (UID: \"b58a74ba-a371-4f9b-a3e2-ca7a946895af\") " pod="openstack/placement-db-create-69hvx" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.015442 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggqfx\" (UniqueName: \"kubernetes.io/projected/55f4ca47-d6d3-4ec2-9df3-2898ba04d543-kube-api-access-ggqfx\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.016022 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b58a74ba-a371-4f9b-a3e2-ca7a946895af-operator-scripts\") pod \"placement-db-create-69hvx\" (UID: \"b58a74ba-a371-4f9b-a3e2-ca7a946895af\") " pod="openstack/placement-db-create-69hvx" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.030906 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtkbd\" (UniqueName: \"kubernetes.io/projected/b58a74ba-a371-4f9b-a3e2-ca7a946895af-kube-api-access-xtkbd\") pod \"placement-db-create-69hvx\" (UID: \"b58a74ba-a371-4f9b-a3e2-ca7a946895af\") " pod="openstack/placement-db-create-69hvx" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.034296 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-69hvx" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.065668 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-b96wb" event={"ID":"55f4ca47-d6d3-4ec2-9df3-2898ba04d543","Type":"ContainerDied","Data":"da6df1aaa86a44ee16cf125d31d2231bcc843fad5b3624632f418ebf1b05221b"} Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.065740 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-b96wb" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.066731 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.117154 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5af3fb6-92bb-4f1d-9b7c-d3e4dc44ab35-operator-scripts\") pod \"placement-9b2c-account-create-update-sgwxp\" (UID: \"d5af3fb6-92bb-4f1d-9b7c-d3e4dc44ab35\") " pod="openstack/placement-9b2c-account-create-update-sgwxp" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.117241 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkhrd\" (UniqueName: \"kubernetes.io/projected/d5af3fb6-92bb-4f1d-9b7c-d3e4dc44ab35-kube-api-access-vkhrd\") pod \"placement-9b2c-account-create-update-sgwxp\" (UID: \"d5af3fb6-92bb-4f1d-9b7c-d3e4dc44ab35\") " pod="openstack/placement-9b2c-account-create-update-sgwxp" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.118107 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5af3fb6-92bb-4f1d-9b7c-d3e4dc44ab35-operator-scripts\") pod \"placement-9b2c-account-create-update-sgwxp\" (UID: \"d5af3fb6-92bb-4f1d-9b7c-d3e4dc44ab35\") " pod="openstack/placement-9b2c-account-create-update-sgwxp" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.136925 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkhrd\" (UniqueName: \"kubernetes.io/projected/d5af3fb6-92bb-4f1d-9b7c-d3e4dc44ab35-kube-api-access-vkhrd\") pod \"placement-9b2c-account-create-update-sgwxp\" (UID: \"d5af3fb6-92bb-4f1d-9b7c-d3e4dc44ab35\") " pod="openstack/placement-9b2c-account-create-update-sgwxp" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.137188 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-h2gf4" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.156075 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.198977 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-b96wb"] Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.203342 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-b96wb"] Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.220603 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzf4t\" (UniqueName: \"kubernetes.io/projected/56f1c973-5c84-45ef-a32c-82b736796f2c-kube-api-access-tzf4t\") pod \"56f1c973-5c84-45ef-a32c-82b736796f2c\" (UID: \"56f1c973-5c84-45ef-a32c-82b736796f2c\") " Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.220648 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56f1c973-5c84-45ef-a32c-82b736796f2c-config\") pod \"56f1c973-5c84-45ef-a32c-82b736796f2c\" (UID: \"56f1c973-5c84-45ef-a32c-82b736796f2c\") " Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.220679 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56f1c973-5c84-45ef-a32c-82b736796f2c-dns-svc\") pod \"56f1c973-5c84-45ef-a32c-82b736796f2c\" (UID: \"56f1c973-5c84-45ef-a32c-82b736796f2c\") " Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.226825 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56f1c973-5c84-45ef-a32c-82b736796f2c-config" (OuterVolumeSpecName: "config") pod "56f1c973-5c84-45ef-a32c-82b736796f2c" (UID: "56f1c973-5c84-45ef-a32c-82b736796f2c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.230801 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56f1c973-5c84-45ef-a32c-82b736796f2c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "56f1c973-5c84-45ef-a32c-82b736796f2c" (UID: "56f1c973-5c84-45ef-a32c-82b736796f2c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.241933 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56f1c973-5c84-45ef-a32c-82b736796f2c-kube-api-access-tzf4t" (OuterVolumeSpecName: "kube-api-access-tzf4t") pod "56f1c973-5c84-45ef-a32c-82b736796f2c" (UID: "56f1c973-5c84-45ef-a32c-82b736796f2c"). InnerVolumeSpecName "kube-api-access-tzf4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.279591 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-9b2c-account-create-update-sgwxp" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.329689 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzf4t\" (UniqueName: \"kubernetes.io/projected/56f1c973-5c84-45ef-a32c-82b736796f2c-kube-api-access-tzf4t\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.329727 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56f1c973-5c84-45ef-a32c-82b736796f2c-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.329737 4811 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56f1c973-5c84-45ef-a32c-82b736796f2c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.420333 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-794868bd45-25t27"] Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.469214 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-kz6sk"] Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.541205 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.628858 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.630284 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.645795 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.645980 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-xlnkk" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.646113 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.646117 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.651441 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-9e0d-account-create-update-8vx5t"] Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.692321 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.747069 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68ff0a82-cf02-4e4e-bf49-b46f3e0f361a-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"68ff0a82-cf02-4e4e-bf49-b46f3e0f361a\") " pod="openstack/ovn-northd-0" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.747251 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/68ff0a82-cf02-4e4e-bf49-b46f3e0f361a-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"68ff0a82-cf02-4e4e-bf49-b46f3e0f361a\") " pod="openstack/ovn-northd-0" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.747274 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzh9b\" (UniqueName: \"kubernetes.io/projected/68ff0a82-cf02-4e4e-bf49-b46f3e0f361a-kube-api-access-zzh9b\") pod \"ovn-northd-0\" (UID: \"68ff0a82-cf02-4e4e-bf49-b46f3e0f361a\") " pod="openstack/ovn-northd-0" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.747290 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/68ff0a82-cf02-4e4e-bf49-b46f3e0f361a-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"68ff0a82-cf02-4e4e-bf49-b46f3e0f361a\") " pod="openstack/ovn-northd-0" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.747311 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68ff0a82-cf02-4e4e-bf49-b46f3e0f361a-config\") pod \"ovn-northd-0\" (UID: \"68ff0a82-cf02-4e4e-bf49-b46f3e0f361a\") " pod="openstack/ovn-northd-0" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.747329 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/68ff0a82-cf02-4e4e-bf49-b46f3e0f361a-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"68ff0a82-cf02-4e4e-bf49-b46f3e0f361a\") " pod="openstack/ovn-northd-0" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.747357 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/68ff0a82-cf02-4e4e-bf49-b46f3e0f361a-scripts\") pod \"ovn-northd-0\" (UID: \"68ff0a82-cf02-4e4e-bf49-b46f3e0f361a\") " pod="openstack/ovn-northd-0" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.803097 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-bmklk"] Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.848984 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68ff0a82-cf02-4e4e-bf49-b46f3e0f361a-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"68ff0a82-cf02-4e4e-bf49-b46f3e0f361a\") " pod="openstack/ovn-northd-0" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.849032 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/68ff0a82-cf02-4e4e-bf49-b46f3e0f361a-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"68ff0a82-cf02-4e4e-bf49-b46f3e0f361a\") " pod="openstack/ovn-northd-0" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.849053 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzh9b\" (UniqueName: \"kubernetes.io/projected/68ff0a82-cf02-4e4e-bf49-b46f3e0f361a-kube-api-access-zzh9b\") pod \"ovn-northd-0\" (UID: \"68ff0a82-cf02-4e4e-bf49-b46f3e0f361a\") " pod="openstack/ovn-northd-0" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.849070 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/68ff0a82-cf02-4e4e-bf49-b46f3e0f361a-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"68ff0a82-cf02-4e4e-bf49-b46f3e0f361a\") " pod="openstack/ovn-northd-0" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.849088 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/68ff0a82-cf02-4e4e-bf49-b46f3e0f361a-config\") pod \"ovn-northd-0\" (UID: \"68ff0a82-cf02-4e4e-bf49-b46f3e0f361a\") " pod="openstack/ovn-northd-0" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.849106 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/68ff0a82-cf02-4e4e-bf49-b46f3e0f361a-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"68ff0a82-cf02-4e4e-bf49-b46f3e0f361a\") " pod="openstack/ovn-northd-0" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.849133 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/68ff0a82-cf02-4e4e-bf49-b46f3e0f361a-scripts\") pod \"ovn-northd-0\" (UID: \"68ff0a82-cf02-4e4e-bf49-b46f3e0f361a\") " pod="openstack/ovn-northd-0" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.850538 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68ff0a82-cf02-4e4e-bf49-b46f3e0f361a-config\") pod \"ovn-northd-0\" (UID: \"68ff0a82-cf02-4e4e-bf49-b46f3e0f361a\") " pod="openstack/ovn-northd-0" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.850612 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/68ff0a82-cf02-4e4e-bf49-b46f3e0f361a-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"68ff0a82-cf02-4e4e-bf49-b46f3e0f361a\") " pod="openstack/ovn-northd-0" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.851779 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/68ff0a82-cf02-4e4e-bf49-b46f3e0f361a-scripts\") pod \"ovn-northd-0\" (UID: \"68ff0a82-cf02-4e4e-bf49-b46f3e0f361a\") " pod="openstack/ovn-northd-0" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.854849 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68ff0a82-cf02-4e4e-bf49-b46f3e0f361a-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"68ff0a82-cf02-4e4e-bf49-b46f3e0f361a\") " pod="openstack/ovn-northd-0" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.855267 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/68ff0a82-cf02-4e4e-bf49-b46f3e0f361a-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"68ff0a82-cf02-4e4e-bf49-b46f3e0f361a\") " pod="openstack/ovn-northd-0" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.856567 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/68ff0a82-cf02-4e4e-bf49-b46f3e0f361a-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"68ff0a82-cf02-4e4e-bf49-b46f3e0f361a\") " pod="openstack/ovn-northd-0" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.865702 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzh9b\" (UniqueName: \"kubernetes.io/projected/68ff0a82-cf02-4e4e-bf49-b46f3e0f361a-kube-api-access-zzh9b\") pod \"ovn-northd-0\" (UID: \"68ff0a82-cf02-4e4e-bf49-b46f3e0f361a\") " pod="openstack/ovn-northd-0" Jan 22 09:20:13 crc kubenswrapper[4811]: I0122 09:20:13.922340 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-69hvx"] Jan 22 09:20:13 crc 
kubenswrapper[4811]: I0122 09:20:13.982470 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-bllfc"] Jan 22 09:20:13 crc kubenswrapper[4811]: W0122 09:20:13.986028 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11f33cf8_5f50_416a_a6f0_f6647a331d40.slice/crio-953db159302a13efed3a050b39de463115e9565e4da867bc79d6ce232d16ca11 WatchSource:0}: Error finding container 953db159302a13efed3a050b39de463115e9565e4da867bc79d6ce232d16ca11: Status 404 returned error can't find the container with id 953db159302a13efed3a050b39de463115e9565e4da867bc79d6ce232d16ca11 Jan 22 09:20:14 crc kubenswrapper[4811]: I0122 09:20:14.001746 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55f4ca47-d6d3-4ec2-9df3-2898ba04d543" path="/var/lib/kubelet/pods/55f4ca47-d6d3-4ec2-9df3-2898ba04d543/volumes" Jan 22 09:20:14 crc kubenswrapper[4811]: I0122 09:20:14.015909 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 22 09:20:14 crc kubenswrapper[4811]: I0122 09:20:14.072780 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9e0d-account-create-update-8vx5t" event={"ID":"3d4fef27-55fc-4467-91f6-a89ecbae6198","Type":"ContainerStarted","Data":"fad76c9fdddbce877ce0d9659a8357d8c1a7689680dae10bd9991c8174c30d83"} Jan 22 09:20:14 crc kubenswrapper[4811]: I0122 09:20:14.072814 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9e0d-account-create-update-8vx5t" event={"ID":"3d4fef27-55fc-4467-91f6-a89ecbae6198","Type":"ContainerStarted","Data":"161725d7a30988b4396baf5447e94f03d0ee7d8603134d03ce2a5730ddb9e758"} Jan 22 09:20:14 crc kubenswrapper[4811]: I0122 09:20:14.075479 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-bmklk" event={"ID":"00f9806b-10cf-424e-bf6e-9e2e3e66833a","Type":"ContainerStarted","Data":"2b11417e3d1b04cb3b22462d21334268df956b88143f9954bde83dd399fa8c01"} Jan 22 09:20:14 crc kubenswrapper[4811]: I0122 09:20:14.075503 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-bmklk" event={"ID":"00f9806b-10cf-424e-bf6e-9e2e3e66833a","Type":"ContainerStarted","Data":"4c9b93b957effae465a77537f97a9466aa138df60e25d3453c67d427cc32e4b2"} Jan 22 09:20:14 crc kubenswrapper[4811]: I0122 09:20:14.078064 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-h2gf4" event={"ID":"56f1c973-5c84-45ef-a32c-82b736796f2c","Type":"ContainerDied","Data":"33cc12ffeba6168150472993db930414f96ce478945eae6f949e8cb405eb46f3"} Jan 22 09:20:14 crc kubenswrapper[4811]: I0122 09:20:14.078130 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-h2gf4" Jan 22 09:20:14 crc kubenswrapper[4811]: I0122 09:20:14.079363 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757dc6fff9-bllfc" event={"ID":"11f33cf8-5f50-416a-a6f0-f6647a331d40","Type":"ContainerStarted","Data":"953db159302a13efed3a050b39de463115e9565e4da867bc79d6ce232d16ca11"} Jan 22 09:20:14 crc kubenswrapper[4811]: I0122 09:20:14.081580 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-69hvx" event={"ID":"b58a74ba-a371-4f9b-a3e2-ca7a946895af","Type":"ContainerStarted","Data":"3223af79e75985e05dd1108c36532623409fc29feadf9cc24b0da9b981084399"} Jan 22 09:20:14 crc kubenswrapper[4811]: I0122 09:20:14.081619 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-69hvx" event={"ID":"b58a74ba-a371-4f9b-a3e2-ca7a946895af","Type":"ContainerStarted","Data":"a4b37af9f6998ceef0515df9eb90b25f4e5e504e16eb6b596a5000e879558f7d"} Jan 22 09:20:14 crc kubenswrapper[4811]: I0122 09:20:14.082381 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-794868bd45-25t27" event={"ID":"dddbae5c-b9c5-44f9-9236-28c880f47a53","Type":"ContainerStarted","Data":"7482770ac8e0d5f8d72305d3321b49efb5c14c4e1c99c7b61815d4d616ca0a85"} Jan 22 09:20:14 crc kubenswrapper[4811]: I0122 09:20:14.084607 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-kz6sk" event={"ID":"374f91f7-413d-4830-afc1-0d75c2946fc3","Type":"ContainerStarted","Data":"6bf524f074c69ebfbf8439466e02a1d0e3e665c97b01648f44e56465694ec78f"} Jan 22 09:20:14 crc kubenswrapper[4811]: I0122 09:20:14.084647 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-kz6sk" event={"ID":"374f91f7-413d-4830-afc1-0d75c2946fc3","Type":"ContainerStarted","Data":"ed7da6b4721d5aff44008b2498db17a0fb02005a74ae0041787d48ac3705e945"} Jan 22 09:20:14 crc kubenswrapper[4811]: I0122 09:20:14.090340 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-9e0d-account-create-update-8vx5t" podStartSLOduration=2.090331643 podStartE2EDuration="2.090331643s" podCreationTimestamp="2026-01-22 09:20:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:20:14.085541882 +0000 UTC m=+858.407728995" watchObservedRunningTime="2026-01-22 09:20:14.090331643 +0000 UTC m=+858.412518767" Jan 22 09:20:14 crc kubenswrapper[4811]: I0122 09:20:14.101787 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-bmklk" podStartSLOduration=2.10177015 podStartE2EDuration="2.10177015s" podCreationTimestamp="2026-01-22 09:20:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:20:14.100936328 +0000 UTC m=+858.423123451" watchObservedRunningTime="2026-01-22 09:20:14.10177015 +0000 UTC m=+858.423957273" Jan 22 09:20:14 crc kubenswrapper[4811]: I0122 09:20:14.117726 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-69hvx" podStartSLOduration=2.117711017 podStartE2EDuration="2.117711017s" podCreationTimestamp="2026-01-22 09:20:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:20:14.113965174 +0000 
UTC m=+858.436152297" watchObservedRunningTime="2026-01-22 09:20:14.117711017 +0000 UTC m=+858.439898140"
Jan 22 09:20:14 crc kubenswrapper[4811]: I0122 09:20:14.132011 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9b2c-account-create-update-sgwxp"]
Jan 22 09:20:14 crc kubenswrapper[4811]: I0122 09:20:14.134692 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-kz6sk" podStartSLOduration=2.134678309 podStartE2EDuration="2.134678309s" podCreationTimestamp="2026-01-22 09:20:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:20:14.125098827 +0000 UTC m=+858.447285940" watchObservedRunningTime="2026-01-22 09:20:14.134678309 +0000 UTC m=+858.456865432"
Jan 22 09:20:14 crc kubenswrapper[4811]: W0122 09:20:14.136330 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5af3fb6_92bb_4f1d_9b7c_d3e4dc44ab35.slice/crio-07c47955cc89fae058a921c96b744bc3189c1b09849e74ee5c0c823546d736db WatchSource:0}: Error finding container 07c47955cc89fae058a921c96b744bc3189c1b09849e74ee5c0c823546d736db: Status 404 returned error can't find the container with id 07c47955cc89fae058a921c96b744bc3189c1b09849e74ee5c0c823546d736db
Jan 22 09:20:14 crc kubenswrapper[4811]: I0122 09:20:14.163810 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-h2gf4"]
Jan 22 09:20:14 crc kubenswrapper[4811]: I0122 09:20:14.171088 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-h2gf4"]
Jan 22 09:20:14 crc kubenswrapper[4811]: I0122 09:20:14.406578 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 22 09:20:14 crc kubenswrapper[4811]: I0122 09:20:14.626896 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Jan 22 09:20:15 crc kubenswrapper[4811]: I0122 09:20:15.090838 4811 generic.go:334] "Generic (PLEG): container finished" podID="b58a74ba-a371-4f9b-a3e2-ca7a946895af" containerID="3223af79e75985e05dd1108c36532623409fc29feadf9cc24b0da9b981084399" exitCode=0
Jan 22 09:20:15 crc kubenswrapper[4811]: I0122 09:20:15.090918 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-69hvx" event={"ID":"b58a74ba-a371-4f9b-a3e2-ca7a946895af","Type":"ContainerDied","Data":"3223af79e75985e05dd1108c36532623409fc29feadf9cc24b0da9b981084399"}
Jan 22 09:20:15 crc kubenswrapper[4811]: I0122 09:20:15.092501 4811 generic.go:334] "Generic (PLEG): container finished" podID="dddbae5c-b9c5-44f9-9236-28c880f47a53" containerID="f6f20be424f0f3347d07429c96d1073ae8709f06ea75f5401359a199cf8f04a4" exitCode=0
Jan 22 09:20:15 crc kubenswrapper[4811]: I0122 09:20:15.093188 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-794868bd45-25t27" event={"ID":"dddbae5c-b9c5-44f9-9236-28c880f47a53","Type":"ContainerDied","Data":"f6f20be424f0f3347d07429c96d1073ae8709f06ea75f5401359a199cf8f04a4"}
Jan 22 09:20:15 crc kubenswrapper[4811]: I0122 09:20:15.095699 4811 generic.go:334] "Generic (PLEG): container finished" podID="d5af3fb6-92bb-4f1d-9b7c-d3e4dc44ab35" containerID="41f7ddb8612eced29a432d621cd3db2d536ae9efece2b4f681e720938d77e73a" exitCode=0
Jan 22 09:20:15 crc kubenswrapper[4811]: I0122 09:20:15.095769 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9b2c-account-create-update-sgwxp" event={"ID":"d5af3fb6-92bb-4f1d-9b7c-d3e4dc44ab35","Type":"ContainerDied","Data":"41f7ddb8612eced29a432d621cd3db2d536ae9efece2b4f681e720938d77e73a"}
Jan 22 09:20:15 crc kubenswrapper[4811]: I0122 09:20:15.096010 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9b2c-account-create-update-sgwxp" event={"ID":"d5af3fb6-92bb-4f1d-9b7c-d3e4dc44ab35","Type":"ContainerStarted","Data":"07c47955cc89fae058a921c96b744bc3189c1b09849e74ee5c0c823546d736db"}
Jan 22 09:20:15 crc kubenswrapper[4811]: I0122 09:20:15.097550 4811 generic.go:334] "Generic (PLEG): container finished" podID="3d4fef27-55fc-4467-91f6-a89ecbae6198" containerID="fad76c9fdddbce877ce0d9659a8357d8c1a7689680dae10bd9991c8174c30d83" exitCode=0
Jan 22 09:20:15 crc kubenswrapper[4811]: I0122 09:20:15.097600 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9e0d-account-create-update-8vx5t" event={"ID":"3d4fef27-55fc-4467-91f6-a89ecbae6198","Type":"ContainerDied","Data":"fad76c9fdddbce877ce0d9659a8357d8c1a7689680dae10bd9991c8174c30d83"}
Jan 22 09:20:15 crc kubenswrapper[4811]: I0122 09:20:15.098691 4811 generic.go:334] "Generic (PLEG): container finished" podID="00f9806b-10cf-424e-bf6e-9e2e3e66833a" containerID="2b11417e3d1b04cb3b22462d21334268df956b88143f9954bde83dd399fa8c01" exitCode=0
Jan 22 09:20:15 crc kubenswrapper[4811]: I0122 09:20:15.098747 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-bmklk" event={"ID":"00f9806b-10cf-424e-bf6e-9e2e3e66833a","Type":"ContainerDied","Data":"2b11417e3d1b04cb3b22462d21334268df956b88143f9954bde83dd399fa8c01"}
Jan 22 09:20:15 crc kubenswrapper[4811]: I0122 09:20:15.100289 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"68ff0a82-cf02-4e4e-bf49-b46f3e0f361a","Type":"ContainerStarted","Data":"c538cfac0eaacdf4b97bdf7975e6a48cb5b8cdd658742b8d012db4f3c23ef56c"}
Jan 22 09:20:15 crc kubenswrapper[4811]: I0122 09:20:15.998875 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56f1c973-5c84-45ef-a32c-82b736796f2c" path="/var/lib/kubelet/pods/56f1c973-5c84-45ef-a32c-82b736796f2c/volumes"
Jan 22 09:20:16 crc kubenswrapper[4811]: I0122 09:20:16.393128 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-9e0d-account-create-update-8vx5t"
Jan 22 09:20:16 crc kubenswrapper[4811]: I0122 09:20:16.474888 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9b2c-account-create-update-sgwxp"
Jan 22 09:20:16 crc kubenswrapper[4811]: I0122 09:20:16.479180 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-bmklk"
Jan 22 09:20:16 crc kubenswrapper[4811]: I0122 09:20:16.487799 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-69hvx"
Jan 22 09:20:16 crc kubenswrapper[4811]: I0122 09:20:16.583463 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00f9806b-10cf-424e-bf6e-9e2e3e66833a-operator-scripts\") pod \"00f9806b-10cf-424e-bf6e-9e2e3e66833a\" (UID: \"00f9806b-10cf-424e-bf6e-9e2e3e66833a\") "
Jan 22 09:20:16 crc kubenswrapper[4811]: I0122 09:20:16.583558 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkhrd\" (UniqueName: \"kubernetes.io/projected/d5af3fb6-92bb-4f1d-9b7c-d3e4dc44ab35-kube-api-access-vkhrd\") pod \"d5af3fb6-92bb-4f1d-9b7c-d3e4dc44ab35\" (UID: \"d5af3fb6-92bb-4f1d-9b7c-d3e4dc44ab35\") "
Jan 22 09:20:16 crc kubenswrapper[4811]: I0122 09:20:16.583609 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d4fef27-55fc-4467-91f6-a89ecbae6198-operator-scripts\") pod \"3d4fef27-55fc-4467-91f6-a89ecbae6198\" (UID: \"3d4fef27-55fc-4467-91f6-a89ecbae6198\") "
Jan 22 09:20:16 crc kubenswrapper[4811]: I0122 09:20:16.583642 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpbm6\" (UniqueName: \"kubernetes.io/projected/00f9806b-10cf-424e-bf6e-9e2e3e66833a-kube-api-access-bpbm6\") pod \"00f9806b-10cf-424e-bf6e-9e2e3e66833a\" (UID: \"00f9806b-10cf-424e-bf6e-9e2e3e66833a\") "
Jan 22 09:20:16 crc kubenswrapper[4811]: I0122 09:20:16.583670 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5af3fb6-92bb-4f1d-9b7c-d3e4dc44ab35-operator-scripts\") pod \"d5af3fb6-92bb-4f1d-9b7c-d3e4dc44ab35\" (UID: \"d5af3fb6-92bb-4f1d-9b7c-d3e4dc44ab35\") "
Jan 22 09:20:16 crc kubenswrapper[4811]: I0122 09:20:16.583691 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxgq2\" (UniqueName: \"kubernetes.io/projected/3d4fef27-55fc-4467-91f6-a89ecbae6198-kube-api-access-sxgq2\") pod \"3d4fef27-55fc-4467-91f6-a89ecbae6198\" (UID: \"3d4fef27-55fc-4467-91f6-a89ecbae6198\") "
Jan 22 09:20:16 crc kubenswrapper[4811]: I0122 09:20:16.584257 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d4fef27-55fc-4467-91f6-a89ecbae6198-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3d4fef27-55fc-4467-91f6-a89ecbae6198" (UID: "3d4fef27-55fc-4467-91f6-a89ecbae6198"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:20:16 crc kubenswrapper[4811]: I0122 09:20:16.584646 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00f9806b-10cf-424e-bf6e-9e2e3e66833a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "00f9806b-10cf-424e-bf6e-9e2e3e66833a" (UID: "00f9806b-10cf-424e-bf6e-9e2e3e66833a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:20:16 crc kubenswrapper[4811]: I0122 09:20:16.584757 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5af3fb6-92bb-4f1d-9b7c-d3e4dc44ab35-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d5af3fb6-92bb-4f1d-9b7c-d3e4dc44ab35" (UID: "d5af3fb6-92bb-4f1d-9b7c-d3e4dc44ab35"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:20:16 crc kubenswrapper[4811]: I0122 09:20:16.588210 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d4fef27-55fc-4467-91f6-a89ecbae6198-kube-api-access-sxgq2" (OuterVolumeSpecName: "kube-api-access-sxgq2") pod "3d4fef27-55fc-4467-91f6-a89ecbae6198" (UID: "3d4fef27-55fc-4467-91f6-a89ecbae6198"). InnerVolumeSpecName "kube-api-access-sxgq2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:20:16 crc kubenswrapper[4811]: I0122 09:20:16.588735 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00f9806b-10cf-424e-bf6e-9e2e3e66833a-kube-api-access-bpbm6" (OuterVolumeSpecName: "kube-api-access-bpbm6") pod "00f9806b-10cf-424e-bf6e-9e2e3e66833a" (UID: "00f9806b-10cf-424e-bf6e-9e2e3e66833a"). InnerVolumeSpecName "kube-api-access-bpbm6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:20:16 crc kubenswrapper[4811]: I0122 09:20:16.588912 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5af3fb6-92bb-4f1d-9b7c-d3e4dc44ab35-kube-api-access-vkhrd" (OuterVolumeSpecName: "kube-api-access-vkhrd") pod "d5af3fb6-92bb-4f1d-9b7c-d3e4dc44ab35" (UID: "d5af3fb6-92bb-4f1d-9b7c-d3e4dc44ab35"). InnerVolumeSpecName "kube-api-access-vkhrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:20:16 crc kubenswrapper[4811]: I0122 09:20:16.684739 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b58a74ba-a371-4f9b-a3e2-ca7a946895af-operator-scripts\") pod \"b58a74ba-a371-4f9b-a3e2-ca7a946895af\" (UID: \"b58a74ba-a371-4f9b-a3e2-ca7a946895af\") "
Jan 22 09:20:16 crc kubenswrapper[4811]: I0122 09:20:16.684879 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtkbd\" (UniqueName: \"kubernetes.io/projected/b58a74ba-a371-4f9b-a3e2-ca7a946895af-kube-api-access-xtkbd\") pod \"b58a74ba-a371-4f9b-a3e2-ca7a946895af\" (UID: \"b58a74ba-a371-4f9b-a3e2-ca7a946895af\") "
Jan 22 09:20:16 crc kubenswrapper[4811]: I0122 09:20:16.685066 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b58a74ba-a371-4f9b-a3e2-ca7a946895af-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b58a74ba-a371-4f9b-a3e2-ca7a946895af" (UID: "b58a74ba-a371-4f9b-a3e2-ca7a946895af"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:20:16 crc kubenswrapper[4811]: I0122 09:20:16.685298 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00f9806b-10cf-424e-bf6e-9e2e3e66833a-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 09:20:16 crc kubenswrapper[4811]: I0122 09:20:16.685310 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkhrd\" (UniqueName: \"kubernetes.io/projected/d5af3fb6-92bb-4f1d-9b7c-d3e4dc44ab35-kube-api-access-vkhrd\") on node \"crc\" DevicePath \"\""
Jan 22 09:20:16 crc kubenswrapper[4811]: I0122 09:20:16.685330 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d4fef27-55fc-4467-91f6-a89ecbae6198-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 09:20:16 crc kubenswrapper[4811]: I0122 09:20:16.685338 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b58a74ba-a371-4f9b-a3e2-ca7a946895af-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 09:20:16 crc kubenswrapper[4811]: I0122 09:20:16.685346 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpbm6\" (UniqueName: \"kubernetes.io/projected/00f9806b-10cf-424e-bf6e-9e2e3e66833a-kube-api-access-bpbm6\") on node \"crc\" DevicePath \"\""
Jan 22 09:20:16 crc kubenswrapper[4811]: I0122 09:20:16.685355 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5af3fb6-92bb-4f1d-9b7c-d3e4dc44ab35-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 09:20:16 crc kubenswrapper[4811]: I0122 09:20:16.685362 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxgq2\" (UniqueName: \"kubernetes.io/projected/3d4fef27-55fc-4467-91f6-a89ecbae6198-kube-api-access-sxgq2\") on node \"crc\" DevicePath \"\""
Jan 22 09:20:16 crc kubenswrapper[4811]: I0122 09:20:16.687312 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b58a74ba-a371-4f9b-a3e2-ca7a946895af-kube-api-access-xtkbd" (OuterVolumeSpecName: "kube-api-access-xtkbd") pod "b58a74ba-a371-4f9b-a3e2-ca7a946895af" (UID: "b58a74ba-a371-4f9b-a3e2-ca7a946895af"). InnerVolumeSpecName "kube-api-access-xtkbd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:20:16 crc kubenswrapper[4811]: I0122 09:20:16.786274 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtkbd\" (UniqueName: \"kubernetes.io/projected/b58a74ba-a371-4f9b-a3e2-ca7a946895af-kube-api-access-xtkbd\") on node \"crc\" DevicePath \"\""
Jan 22 09:20:17 crc kubenswrapper[4811]: I0122 09:20:17.113282 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-bmklk" event={"ID":"00f9806b-10cf-424e-bf6e-9e2e3e66833a","Type":"ContainerDied","Data":"4c9b93b957effae465a77537f97a9466aa138df60e25d3453c67d427cc32e4b2"}
Jan 22 09:20:17 crc kubenswrapper[4811]: I0122 09:20:17.113488 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c9b93b957effae465a77537f97a9466aa138df60e25d3453c67d427cc32e4b2"
Jan 22 09:20:17 crc kubenswrapper[4811]: I0122 09:20:17.113293 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-bmklk"
Jan 22 09:20:17 crc kubenswrapper[4811]: I0122 09:20:17.114442 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-69hvx" event={"ID":"b58a74ba-a371-4f9b-a3e2-ca7a946895af","Type":"ContainerDied","Data":"a4b37af9f6998ceef0515df9eb90b25f4e5e504e16eb6b596a5000e879558f7d"}
Jan 22 09:20:17 crc kubenswrapper[4811]: I0122 09:20:17.114561 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4b37af9f6998ceef0515df9eb90b25f4e5e504e16eb6b596a5000e879558f7d"
Jan 22 09:20:17 crc kubenswrapper[4811]: I0122 09:20:17.115462 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9b2c-account-create-update-sgwxp" event={"ID":"d5af3fb6-92bb-4f1d-9b7c-d3e4dc44ab35","Type":"ContainerDied","Data":"07c47955cc89fae058a921c96b744bc3189c1b09849e74ee5c0c823546d736db"}
Jan 22 09:20:17 crc kubenswrapper[4811]: I0122 09:20:17.115482 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9b2c-account-create-update-sgwxp"
Jan 22 09:20:17 crc kubenswrapper[4811]: I0122 09:20:17.115486 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07c47955cc89fae058a921c96b744bc3189c1b09849e74ee5c0c823546d736db"
Jan 22 09:20:17 crc kubenswrapper[4811]: I0122 09:20:17.116522 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9e0d-account-create-update-8vx5t" event={"ID":"3d4fef27-55fc-4467-91f6-a89ecbae6198","Type":"ContainerDied","Data":"161725d7a30988b4396baf5447e94f03d0ee7d8603134d03ce2a5730ddb9e758"}
Jan 22 09:20:17 crc kubenswrapper[4811]: I0122 09:20:17.116603 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="161725d7a30988b4396baf5447e94f03d0ee7d8603134d03ce2a5730ddb9e758"
Jan 22 09:20:17 crc kubenswrapper[4811]: I0122 09:20:17.116613 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-9e0d-account-create-update-8vx5t"
Jan 22 09:20:17 crc kubenswrapper[4811]: I0122 09:20:17.116902 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-69hvx"
Jan 22 09:20:17 crc kubenswrapper[4811]: I0122 09:20:17.899014 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-lnx59"]
Jan 22 09:20:17 crc kubenswrapper[4811]: E0122 09:20:17.899262 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b58a74ba-a371-4f9b-a3e2-ca7a946895af" containerName="mariadb-database-create"
Jan 22 09:20:17 crc kubenswrapper[4811]: I0122 09:20:17.899274 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="b58a74ba-a371-4f9b-a3e2-ca7a946895af" containerName="mariadb-database-create"
Jan 22 09:20:17 crc kubenswrapper[4811]: E0122 09:20:17.899289 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5af3fb6-92bb-4f1d-9b7c-d3e4dc44ab35" containerName="mariadb-account-create-update"
Jan 22 09:20:17 crc kubenswrapper[4811]: I0122 09:20:17.899295 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5af3fb6-92bb-4f1d-9b7c-d3e4dc44ab35" containerName="mariadb-account-create-update"
Jan 22 09:20:17 crc kubenswrapper[4811]: E0122 09:20:17.899303 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00f9806b-10cf-424e-bf6e-9e2e3e66833a" containerName="mariadb-database-create"
Jan 22 09:20:17 crc kubenswrapper[4811]: I0122 09:20:17.899309 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="00f9806b-10cf-424e-bf6e-9e2e3e66833a" containerName="mariadb-database-create"
Jan 22 09:20:17 crc kubenswrapper[4811]: E0122 09:20:17.899317 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d4fef27-55fc-4467-91f6-a89ecbae6198" containerName="mariadb-account-create-update"
Jan 22 09:20:17 crc kubenswrapper[4811]: I0122 09:20:17.899322 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d4fef27-55fc-4467-91f6-a89ecbae6198" containerName="mariadb-account-create-update"
Jan 22 09:20:17 crc kubenswrapper[4811]: I0122 09:20:17.899453 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d4fef27-55fc-4467-91f6-a89ecbae6198" containerName="mariadb-account-create-update"
Jan 22 09:20:17 crc kubenswrapper[4811]: I0122 09:20:17.899464 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5af3fb6-92bb-4f1d-9b7c-d3e4dc44ab35" containerName="mariadb-account-create-update"
Jan 22 09:20:17 crc kubenswrapper[4811]: I0122 09:20:17.899473 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="00f9806b-10cf-424e-bf6e-9e2e3e66833a" containerName="mariadb-database-create"
Jan 22 09:20:17 crc kubenswrapper[4811]: I0122 09:20:17.899484 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="b58a74ba-a371-4f9b-a3e2-ca7a946895af" containerName="mariadb-database-create"
Jan 22 09:20:17 crc kubenswrapper[4811]: I0122 09:20:17.899912 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-lnx59"
Jan 22 09:20:17 crc kubenswrapper[4811]: I0122 09:20:17.914411 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-lnx59"]
Jan 22 09:20:18 crc kubenswrapper[4811]: I0122 09:20:18.000184 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba2c649b-b430-4002-bd27-ad8d65dd2137-operator-scripts\") pod \"glance-db-create-lnx59\" (UID: \"ba2c649b-b430-4002-bd27-ad8d65dd2137\") " pod="openstack/glance-db-create-lnx59"
Jan 22 09:20:18 crc kubenswrapper[4811]: I0122 09:20:18.000219 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh25z\" (UniqueName: \"kubernetes.io/projected/ba2c649b-b430-4002-bd27-ad8d65dd2137-kube-api-access-zh25z\") pod \"glance-db-create-lnx59\" (UID: \"ba2c649b-b430-4002-bd27-ad8d65dd2137\") " pod="openstack/glance-db-create-lnx59"
Jan 22 09:20:18 crc kubenswrapper[4811]: I0122 09:20:18.099218 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-2e4a-account-create-update-j6gzm"]
Jan 22 09:20:18 crc kubenswrapper[4811]: I0122 09:20:18.100029 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2e4a-account-create-update-j6gzm"
Jan 22 09:20:18 crc kubenswrapper[4811]: I0122 09:20:18.101173 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba2c649b-b430-4002-bd27-ad8d65dd2137-operator-scripts\") pod \"glance-db-create-lnx59\" (UID: \"ba2c649b-b430-4002-bd27-ad8d65dd2137\") " pod="openstack/glance-db-create-lnx59"
Jan 22 09:20:18 crc kubenswrapper[4811]: I0122 09:20:18.101213 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh25z\" (UniqueName: \"kubernetes.io/projected/ba2c649b-b430-4002-bd27-ad8d65dd2137-kube-api-access-zh25z\") pod \"glance-db-create-lnx59\" (UID: \"ba2c649b-b430-4002-bd27-ad8d65dd2137\") " pod="openstack/glance-db-create-lnx59"
Jan 22 09:20:18 crc kubenswrapper[4811]: I0122 09:20:18.101813 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba2c649b-b430-4002-bd27-ad8d65dd2137-operator-scripts\") pod \"glance-db-create-lnx59\" (UID: \"ba2c649b-b430-4002-bd27-ad8d65dd2137\") " pod="openstack/glance-db-create-lnx59"
Jan 22 09:20:18 crc kubenswrapper[4811]: I0122 09:20:18.104714 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Jan 22 09:20:18 crc kubenswrapper[4811]: I0122 09:20:18.113963 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-2e4a-account-create-update-j6gzm"]
Jan 22 09:20:18 crc kubenswrapper[4811]: I0122 09:20:18.120092 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh25z\" (UniqueName: \"kubernetes.io/projected/ba2c649b-b430-4002-bd27-ad8d65dd2137-kube-api-access-zh25z\") pod \"glance-db-create-lnx59\" (UID: \"ba2c649b-b430-4002-bd27-ad8d65dd2137\") " pod="openstack/glance-db-create-lnx59"
Jan 22 09:20:18 crc kubenswrapper[4811]: I0122 09:20:18.203201 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzmkh\" (UniqueName: \"kubernetes.io/projected/1240e634-6792-476a-b85b-f4034920913e-kube-api-access-tzmkh\") pod \"glance-2e4a-account-create-update-j6gzm\" (UID: \"1240e634-6792-476a-b85b-f4034920913e\") " pod="openstack/glance-2e4a-account-create-update-j6gzm"
Jan 22 09:20:18 crc kubenswrapper[4811]: I0122 09:20:18.203351 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1240e634-6792-476a-b85b-f4034920913e-operator-scripts\") pod \"glance-2e4a-account-create-update-j6gzm\" (UID: \"1240e634-6792-476a-b85b-f4034920913e\") " pod="openstack/glance-2e4a-account-create-update-j6gzm"
Jan 22 09:20:18 crc kubenswrapper[4811]: I0122 09:20:18.215141 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-lnx59"
Jan 22 09:20:18 crc kubenswrapper[4811]: I0122 09:20:18.304282 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzmkh\" (UniqueName: \"kubernetes.io/projected/1240e634-6792-476a-b85b-f4034920913e-kube-api-access-tzmkh\") pod \"glance-2e4a-account-create-update-j6gzm\" (UID: \"1240e634-6792-476a-b85b-f4034920913e\") " pod="openstack/glance-2e4a-account-create-update-j6gzm"
Jan 22 09:20:18 crc kubenswrapper[4811]: I0122 09:20:18.304513 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1240e634-6792-476a-b85b-f4034920913e-operator-scripts\") pod \"glance-2e4a-account-create-update-j6gzm\" (UID: \"1240e634-6792-476a-b85b-f4034920913e\") " pod="openstack/glance-2e4a-account-create-update-j6gzm"
Jan 22 09:20:18 crc kubenswrapper[4811]: I0122 09:20:18.305155 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1240e634-6792-476a-b85b-f4034920913e-operator-scripts\") pod \"glance-2e4a-account-create-update-j6gzm\" (UID: \"1240e634-6792-476a-b85b-f4034920913e\") " pod="openstack/glance-2e4a-account-create-update-j6gzm"
Jan 22 09:20:18 crc kubenswrapper[4811]: I0122 09:20:18.322400 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzmkh\" (UniqueName: \"kubernetes.io/projected/1240e634-6792-476a-b85b-f4034920913e-kube-api-access-tzmkh\") pod \"glance-2e4a-account-create-update-j6gzm\" (UID: \"1240e634-6792-476a-b85b-f4034920913e\") " pod="openstack/glance-2e4a-account-create-update-j6gzm"
Jan 22 09:20:18 crc kubenswrapper[4811]: I0122 09:20:18.411181 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2e4a-account-create-update-j6gzm"
Jan 22 09:20:18 crc kubenswrapper[4811]: I0122 09:20:18.627058 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-lnx59"]
Jan 22 09:20:18 crc kubenswrapper[4811]: I0122 09:20:18.784673 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-2e4a-account-create-update-j6gzm"]
Jan 22 09:20:18 crc kubenswrapper[4811]: W0122 09:20:18.789561 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1240e634_6792_476a_b85b_f4034920913e.slice/crio-38de7602e5760fb82480854675d1f58e7a576345ab1996ad263f3a61c8526d8b WatchSource:0}: Error finding container 38de7602e5760fb82480854675d1f58e7a576345ab1996ad263f3a61c8526d8b: Status 404 returned error can't find the container with id 38de7602e5760fb82480854675d1f58e7a576345ab1996ad263f3a61c8526d8b
Jan 22 09:20:19 crc kubenswrapper[4811]: I0122 09:20:19.129000 4811 generic.go:334] "Generic (PLEG): container finished" podID="ba2c649b-b430-4002-bd27-ad8d65dd2137" containerID="f689a28902f2e1a8a1e4e1df4b3892d6cd31ceb754785b27ddb58825203040a4" exitCode=0
Jan 22 09:20:19 crc kubenswrapper[4811]: I0122 09:20:19.129126 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-lnx59" event={"ID":"ba2c649b-b430-4002-bd27-ad8d65dd2137","Type":"ContainerDied","Data":"f689a28902f2e1a8a1e4e1df4b3892d6cd31ceb754785b27ddb58825203040a4"}
Jan 22 09:20:19 crc kubenswrapper[4811]: I0122 09:20:19.129318 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-lnx59" event={"ID":"ba2c649b-b430-4002-bd27-ad8d65dd2137","Type":"ContainerStarted","Data":"b4f0e1ebd9ecf0792fa16073404089b1391927f6d8c5ce35bab6ce92da5624dd"}
Jan 22 09:20:19 crc kubenswrapper[4811]: I0122 09:20:19.132996 4811 generic.go:334] "Generic (PLEG): container finished" podID="1240e634-6792-476a-b85b-f4034920913e" containerID="0ee0b25a364d73fee0dce4d95835f46da987348ccac3ab643026320ed9fa7bf4" exitCode=0
Jan 22 09:20:19 crc kubenswrapper[4811]: I0122 09:20:19.133052 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2e4a-account-create-update-j6gzm" event={"ID":"1240e634-6792-476a-b85b-f4034920913e","Type":"ContainerDied","Data":"0ee0b25a364d73fee0dce4d95835f46da987348ccac3ab643026320ed9fa7bf4"}
Jan 22 09:20:19 crc kubenswrapper[4811]: I0122 09:20:19.133077 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2e4a-account-create-update-j6gzm" event={"ID":"1240e634-6792-476a-b85b-f4034920913e","Type":"ContainerStarted","Data":"38de7602e5760fb82480854675d1f58e7a576345ab1996ad263f3a61c8526d8b"}
Jan 22 09:20:19 crc kubenswrapper[4811]: I0122 09:20:19.139429 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-794868bd45-25t27" event={"ID":"dddbae5c-b9c5-44f9-9236-28c880f47a53","Type":"ContainerStarted","Data":"d81b9f2fef54fe2bd25b4df332ea94177e0fdd42ef71f795e973cc340c95a2e6"}
Jan 22 09:20:19 crc kubenswrapper[4811]: I0122 09:20:19.139553 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-794868bd45-25t27"
Jan 22 09:20:19 crc kubenswrapper[4811]: I0122 09:20:19.142233 4811 generic.go:334] "Generic (PLEG): container finished" podID="11f33cf8-5f50-416a-a6f0-f6647a331d40" containerID="bb35895d17c6891f722ea2fef3ca06fb73b41a98ad821b293f2bf45fdf3d3351" exitCode=0
Jan 22 09:20:19 crc kubenswrapper[4811]: I0122 09:20:19.142269 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757dc6fff9-bllfc" event={"ID":"11f33cf8-5f50-416a-a6f0-f6647a331d40","Type":"ContainerDied","Data":"bb35895d17c6891f722ea2fef3ca06fb73b41a98ad821b293f2bf45fdf3d3351"}
Jan 22 09:20:19 crc kubenswrapper[4811]: I0122 09:20:19.193329 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-794868bd45-25t27" podStartSLOduration=6.469990395 podStartE2EDuration="7.193298069s" podCreationTimestamp="2026-01-22 09:20:12 +0000 UTC" firstStartedPulling="2026-01-22 09:20:13.445489245 +0000 UTC m=+857.767676368" lastFinishedPulling="2026-01-22 09:20:14.168796919 +0000 UTC m=+858.490984042" observedRunningTime="2026-01-22 09:20:19.18639323 +0000 UTC m=+863.508580353" watchObservedRunningTime="2026-01-22 09:20:19.193298069 +0000 UTC m=+863.515485192"
Jan 22 09:20:19 crc kubenswrapper[4811]: I0122 09:20:19.673705 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-92flm"]
Jan 22 09:20:19 crc kubenswrapper[4811]: I0122 09:20:19.674723 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-92flm"
Jan 22 09:20:19 crc kubenswrapper[4811]: I0122 09:20:19.676495 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Jan 22 09:20:19 crc kubenswrapper[4811]: I0122 09:20:19.690235 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-92flm"]
Jan 22 09:20:19 crc kubenswrapper[4811]: I0122 09:20:19.827121 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m29qw\" (UniqueName: \"kubernetes.io/projected/811bb52f-41ad-4b91-8b9e-3af290e2d893-kube-api-access-m29qw\") pod \"root-account-create-update-92flm\" (UID: \"811bb52f-41ad-4b91-8b9e-3af290e2d893\") " pod="openstack/root-account-create-update-92flm"
Jan 22 09:20:19 crc kubenswrapper[4811]: I0122 09:20:19.827993 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/811bb52f-41ad-4b91-8b9e-3af290e2d893-operator-scripts\") pod \"root-account-create-update-92flm\" (UID: \"811bb52f-41ad-4b91-8b9e-3af290e2d893\") " pod="openstack/root-account-create-update-92flm"
Jan 22 09:20:19 crc kubenswrapper[4811]: I0122 09:20:19.929580 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m29qw\" (UniqueName: \"kubernetes.io/projected/811bb52f-41ad-4b91-8b9e-3af290e2d893-kube-api-access-m29qw\") pod \"root-account-create-update-92flm\" (UID: \"811bb52f-41ad-4b91-8b9e-3af290e2d893\") " pod="openstack/root-account-create-update-92flm"
Jan 22 09:20:19 crc kubenswrapper[4811]: I0122 09:20:19.929660 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/811bb52f-41ad-4b91-8b9e-3af290e2d893-operator-scripts\") pod \"root-account-create-update-92flm\" (UID: \"811bb52f-41ad-4b91-8b9e-3af290e2d893\") " pod="openstack/root-account-create-update-92flm"
Jan 22 09:20:19 crc kubenswrapper[4811]: I0122 09:20:19.930383 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/811bb52f-41ad-4b91-8b9e-3af290e2d893-operator-scripts\") pod \"root-account-create-update-92flm\" (UID: \"811bb52f-41ad-4b91-8b9e-3af290e2d893\") " pod="openstack/root-account-create-update-92flm"
Jan 22 09:20:19 crc kubenswrapper[4811]: I0122 09:20:19.943580 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m29qw\" (UniqueName: \"kubernetes.io/projected/811bb52f-41ad-4b91-8b9e-3af290e2d893-kube-api-access-m29qw\") pod \"root-account-create-update-92flm\" (UID: \"811bb52f-41ad-4b91-8b9e-3af290e2d893\") " pod="openstack/root-account-create-update-92flm"
Jan 22 09:20:19 crc kubenswrapper[4811]: I0122 09:20:19.990757 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-92flm"
Jan 22 09:20:20 crc kubenswrapper[4811]: I0122 09:20:20.149079 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"68ff0a82-cf02-4e4e-bf49-b46f3e0f361a","Type":"ContainerStarted","Data":"a076cfa1cffba4fab494617c2fa68abeef6c8b338dfa6a716431d5413a09697a"}
Jan 22 09:20:20 crc kubenswrapper[4811]: I0122 09:20:20.149263 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"68ff0a82-cf02-4e4e-bf49-b46f3e0f361a","Type":"ContainerStarted","Data":"1970ea7b377f482cd838a793823433f83ab5f856d25870432d6322bd9345610c"}
Jan 22 09:20:20 crc kubenswrapper[4811]: I0122 09:20:20.149280 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Jan 22 09:20:20 crc kubenswrapper[4811]: I0122 09:20:20.150901 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757dc6fff9-bllfc" event={"ID":"11f33cf8-5f50-416a-a6f0-f6647a331d40","Type":"ContainerStarted","Data":"2561738be79bd08de502ce49d42b888237050084091d8ff4d97fae5b89955548"}
Jan 22 09:20:20 crc kubenswrapper[4811]: I0122 09:20:20.168942 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.370770807 podStartE2EDuration="7.168924629s" podCreationTimestamp="2026-01-22 09:20:13 +0000 UTC" firstStartedPulling="2026-01-22 09:20:14.643321884 +0000 UTC m=+858.965509008" lastFinishedPulling="2026-01-22 09:20:19.441475707 +0000 UTC m=+863.763662830" observedRunningTime="2026-01-22 09:20:20.166440304 +0000 UTC m=+864.488627428" watchObservedRunningTime="2026-01-22 09:20:20.168924629 +0000 UTC m=+864.491111752"
Jan 22 09:20:20 crc kubenswrapper[4811]: I0122 09:20:20.181471 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757dc6fff9-bllfc" podStartSLOduration=3.771697905 podStartE2EDuration="8.181458172s" podCreationTimestamp="2026-01-22 09:20:12 +0000 UTC" firstStartedPulling="2026-01-22 09:20:13.996041179 +0000 UTC m=+858.318228303" lastFinishedPulling="2026-01-22 09:20:18.405801457 +0000 UTC m=+862.727988570" observedRunningTime="2026-01-22 09:20:20.179895915 +0000 UTC m=+864.502083039" watchObservedRunningTime="2026-01-22 09:20:20.181458172 +0000 UTC m=+864.503645295"
Jan 22 09:20:20 crc kubenswrapper[4811]: I0122 09:20:20.353588 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-92flm"]
Jan 22 09:20:20 crc kubenswrapper[4811]: W0122 09:20:20.356369 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod811bb52f_41ad_4b91_8b9e_3af290e2d893.slice/crio-2324dd0c7f2e2faaffd6aa50724c3729f6203fb824dab1ba4ca23df70481abbf WatchSource:0}: Error finding container 2324dd0c7f2e2faaffd6aa50724c3729f6203fb824dab1ba4ca23df70481abbf: Status 404 returned error can't find the container with id 2324dd0c7f2e2faaffd6aa50724c3729f6203fb824dab1ba4ca23df70481abbf
Jan 22 09:20:20 crc kubenswrapper[4811]: I0122 09:20:20.407956 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-lnx59"
Jan 22 09:20:20 crc kubenswrapper[4811]: I0122 09:20:20.436410 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zh25z\" (UniqueName: \"kubernetes.io/projected/ba2c649b-b430-4002-bd27-ad8d65dd2137-kube-api-access-zh25z\") pod \"ba2c649b-b430-4002-bd27-ad8d65dd2137\" (UID: \"ba2c649b-b430-4002-bd27-ad8d65dd2137\") "
Jan 22 09:20:20 crc kubenswrapper[4811]: I0122 09:20:20.436603 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba2c649b-b430-4002-bd27-ad8d65dd2137-operator-scripts\") pod \"ba2c649b-b430-4002-bd27-ad8d65dd2137\" (UID: \"ba2c649b-b430-4002-bd27-ad8d65dd2137\") "
Jan 22 09:20:20 crc kubenswrapper[4811]: I0122 09:20:20.437230 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba2c649b-b430-4002-bd27-ad8d65dd2137-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ba2c649b-b430-4002-bd27-ad8d65dd2137" (UID: "ba2c649b-b430-4002-bd27-ad8d65dd2137"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:20:20 crc kubenswrapper[4811]: I0122 09:20:20.442594 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba2c649b-b430-4002-bd27-ad8d65dd2137-kube-api-access-zh25z" (OuterVolumeSpecName: "kube-api-access-zh25z") pod "ba2c649b-b430-4002-bd27-ad8d65dd2137" (UID: "ba2c649b-b430-4002-bd27-ad8d65dd2137"). InnerVolumeSpecName "kube-api-access-zh25z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:20:20 crc kubenswrapper[4811]: I0122 09:20:20.481144 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2e4a-account-create-update-j6gzm"
Jan 22 09:20:20 crc kubenswrapper[4811]: I0122 09:20:20.537930 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzmkh\" (UniqueName: \"kubernetes.io/projected/1240e634-6792-476a-b85b-f4034920913e-kube-api-access-tzmkh\") pod \"1240e634-6792-476a-b85b-f4034920913e\" (UID: \"1240e634-6792-476a-b85b-f4034920913e\") "
Jan 22 09:20:20 crc kubenswrapper[4811]: I0122 09:20:20.537987 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1240e634-6792-476a-b85b-f4034920913e-operator-scripts\") pod \"1240e634-6792-476a-b85b-f4034920913e\" (UID: \"1240e634-6792-476a-b85b-f4034920913e\") "
Jan 22 09:20:20 crc kubenswrapper[4811]: I0122 09:20:20.538380 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba2c649b-b430-4002-bd27-ad8d65dd2137-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 09:20:20 crc kubenswrapper[4811]: I0122 09:20:20.538397 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zh25z\" (UniqueName: \"kubernetes.io/projected/ba2c649b-b430-4002-bd27-ad8d65dd2137-kube-api-access-zh25z\") on node \"crc\" DevicePath \"\""
Jan 22 09:20:20 crc kubenswrapper[4811]: I0122 09:20:20.538406 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1240e634-6792-476a-b85b-f4034920913e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1240e634-6792-476a-b85b-f4034920913e" (UID: "1240e634-6792-476a-b85b-f4034920913e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:20:20 crc kubenswrapper[4811]: I0122 09:20:20.540378 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1240e634-6792-476a-b85b-f4034920913e-kube-api-access-tzmkh" (OuterVolumeSpecName: "kube-api-access-tzmkh") pod "1240e634-6792-476a-b85b-f4034920913e" (UID: "1240e634-6792-476a-b85b-f4034920913e"). InnerVolumeSpecName "kube-api-access-tzmkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:20:20 crc kubenswrapper[4811]: I0122 09:20:20.639438 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzmkh\" (UniqueName: \"kubernetes.io/projected/1240e634-6792-476a-b85b-f4034920913e-kube-api-access-tzmkh\") on node \"crc\" DevicePath \"\""
Jan 22 09:20:20 crc kubenswrapper[4811]: I0122 09:20:20.639467 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1240e634-6792-476a-b85b-f4034920913e-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 09:20:21 crc kubenswrapper[4811]: I0122 09:20:21.157155 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-lnx59"
Jan 22 09:20:21 crc kubenswrapper[4811]: I0122 09:20:21.157155 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-lnx59" event={"ID":"ba2c649b-b430-4002-bd27-ad8d65dd2137","Type":"ContainerDied","Data":"b4f0e1ebd9ecf0792fa16073404089b1391927f6d8c5ce35bab6ce92da5624dd"}
Jan 22 09:20:21 crc kubenswrapper[4811]: I0122 09:20:21.157248 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4f0e1ebd9ecf0792fa16073404089b1391927f6d8c5ce35bab6ce92da5624dd"
Jan 22 09:20:21 crc kubenswrapper[4811]: I0122 09:20:21.158293 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2e4a-account-create-update-j6gzm"
Jan 22 09:20:21 crc kubenswrapper[4811]: I0122 09:20:21.158287 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2e4a-account-create-update-j6gzm" event={"ID":"1240e634-6792-476a-b85b-f4034920913e","Type":"ContainerDied","Data":"38de7602e5760fb82480854675d1f58e7a576345ab1996ad263f3a61c8526d8b"}
Jan 22 09:20:21 crc kubenswrapper[4811]: I0122 09:20:21.158425 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38de7602e5760fb82480854675d1f58e7a576345ab1996ad263f3a61c8526d8b"
Jan 22 09:20:21 crc kubenswrapper[4811]: I0122 09:20:21.159195 4811 generic.go:334] "Generic (PLEG): container finished" podID="811bb52f-41ad-4b91-8b9e-3af290e2d893" containerID="7b01bf0b71a90d7ca81248cb5e6e3c974fce8e8551f1e29e9ada4c211659001d" exitCode=0
Jan 22 09:20:21 crc kubenswrapper[4811]: I0122 09:20:21.159490 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-92flm" event={"ID":"811bb52f-41ad-4b91-8b9e-3af290e2d893","Type":"ContainerDied","Data":"7b01bf0b71a90d7ca81248cb5e6e3c974fce8e8551f1e29e9ada4c211659001d"}
Jan 22 09:20:21 crc kubenswrapper[4811]: I0122 09:20:21.159526 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-92flm" event={"ID":"811bb52f-41ad-4b91-8b9e-3af290e2d893","Type":"ContainerStarted","Data":"2324dd0c7f2e2faaffd6aa50724c3729f6203fb824dab1ba4ca23df70481abbf"}
Jan 22 09:20:21 crc kubenswrapper[4811]: I0122 09:20:21.159765 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757dc6fff9-bllfc"
Jan 22 09:20:22 crc kubenswrapper[4811]: I0122 09:20:22.389022 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-92flm"
Jan 22 09:20:22 crc kubenswrapper[4811]: I0122 09:20:22.564290 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/811bb52f-41ad-4b91-8b9e-3af290e2d893-operator-scripts\") pod \"811bb52f-41ad-4b91-8b9e-3af290e2d893\" (UID: \"811bb52f-41ad-4b91-8b9e-3af290e2d893\") "
Jan 22 09:20:22 crc kubenswrapper[4811]: I0122 09:20:22.564471 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m29qw\" (UniqueName: \"kubernetes.io/projected/811bb52f-41ad-4b91-8b9e-3af290e2d893-kube-api-access-m29qw\") pod \"811bb52f-41ad-4b91-8b9e-3af290e2d893\" (UID: \"811bb52f-41ad-4b91-8b9e-3af290e2d893\") "
Jan 22 09:20:22 crc kubenswrapper[4811]: I0122 09:20:22.564812 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/811bb52f-41ad-4b91-8b9e-3af290e2d893-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "811bb52f-41ad-4b91-8b9e-3af290e2d893" (UID: "811bb52f-41ad-4b91-8b9e-3af290e2d893"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:20:22 crc kubenswrapper[4811]: I0122 09:20:22.575655 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/811bb52f-41ad-4b91-8b9e-3af290e2d893-kube-api-access-m29qw" (OuterVolumeSpecName: "kube-api-access-m29qw") pod "811bb52f-41ad-4b91-8b9e-3af290e2d893" (UID: "811bb52f-41ad-4b91-8b9e-3af290e2d893"). InnerVolumeSpecName "kube-api-access-m29qw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:20:22 crc kubenswrapper[4811]: I0122 09:20:22.665694 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/811bb52f-41ad-4b91-8b9e-3af290e2d893-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 09:20:22 crc kubenswrapper[4811]: I0122 09:20:22.665721 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m29qw\" (UniqueName: \"kubernetes.io/projected/811bb52f-41ad-4b91-8b9e-3af290e2d893-kube-api-access-m29qw\") on node \"crc\" DevicePath \"\""
Jan 22 09:20:23 crc kubenswrapper[4811]: I0122 09:20:23.170232 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-92flm" event={"ID":"811bb52f-41ad-4b91-8b9e-3af290e2d893","Type":"ContainerDied","Data":"2324dd0c7f2e2faaffd6aa50724c3729f6203fb824dab1ba4ca23df70481abbf"}
Jan 22 09:20:23 crc kubenswrapper[4811]: I0122 09:20:23.170256 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-92flm"
Jan 22 09:20:23 crc kubenswrapper[4811]: I0122 09:20:23.170265 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2324dd0c7f2e2faaffd6aa50724c3729f6203fb824dab1ba4ca23df70481abbf"
Jan 22 09:20:23 crc kubenswrapper[4811]: I0122 09:20:23.248719 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-fdrpp"]
Jan 22 09:20:23 crc kubenswrapper[4811]: E0122 09:20:23.249252 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba2c649b-b430-4002-bd27-ad8d65dd2137" containerName="mariadb-database-create"
Jan 22 09:20:23 crc kubenswrapper[4811]: I0122 09:20:23.249343 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba2c649b-b430-4002-bd27-ad8d65dd2137" containerName="mariadb-database-create"
Jan 22 09:20:23 crc kubenswrapper[4811]: E0122 09:20:23.249410 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1240e634-6792-476a-b85b-f4034920913e" containerName="mariadb-account-create-update"
Jan 22 09:20:23 crc kubenswrapper[4811]: I0122 09:20:23.249458 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="1240e634-6792-476a-b85b-f4034920913e" containerName="mariadb-account-create-update"
Jan 22 09:20:23 crc kubenswrapper[4811]: E0122 09:20:23.249522 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="811bb52f-41ad-4b91-8b9e-3af290e2d893" containerName="mariadb-account-create-update"
Jan 22 09:20:23 crc kubenswrapper[4811]: I0122 09:20:23.249576 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="811bb52f-41ad-4b91-8b9e-3af290e2d893" containerName="mariadb-account-create-update"
Jan 22 09:20:23 crc kubenswrapper[4811]: I0122 09:20:23.249800 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="1240e634-6792-476a-b85b-f4034920913e" containerName="mariadb-account-create-update"
Jan 22 09:20:23 crc kubenswrapper[4811]: I0122 09:20:23.249884 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba2c649b-b430-4002-bd27-ad8d65dd2137" containerName="mariadb-database-create"
Jan 22 09:20:23 crc kubenswrapper[4811]: I0122 09:20:23.249936 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="811bb52f-41ad-4b91-8b9e-3af290e2d893" containerName="mariadb-account-create-update"
Jan 22 09:20:23 crc kubenswrapper[4811]: I0122 09:20:23.250398 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-fdrpp"
Jan 22 09:20:23 crc kubenswrapper[4811]: I0122 09:20:23.252712 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Jan 22 09:20:23 crc kubenswrapper[4811]: I0122 09:20:23.252754 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-92nf4"
Jan 22 09:20:23 crc kubenswrapper[4811]: I0122 09:20:23.260657 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-fdrpp"]
Jan 22 09:20:23 crc kubenswrapper[4811]: I0122 09:20:23.273297 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5f5df03-a29f-4805-b750-8d360d832019-combined-ca-bundle\") pod \"glance-db-sync-fdrpp\" (UID: \"f5f5df03-a29f-4805-b750-8d360d832019\") " pod="openstack/glance-db-sync-fdrpp"
Jan 22 09:20:23 crc kubenswrapper[4811]: I0122 09:20:23.273367 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hgn2\" (UniqueName: \"kubernetes.io/projected/f5f5df03-a29f-4805-b750-8d360d832019-kube-api-access-4hgn2\") pod \"glance-db-sync-fdrpp\" (UID: \"f5f5df03-a29f-4805-b750-8d360d832019\") " pod="openstack/glance-db-sync-fdrpp"
Jan 22 09:20:23 crc kubenswrapper[4811]: I0122 09:20:23.273404 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f5f5df03-a29f-4805-b750-8d360d832019-db-sync-config-data\") pod \"glance-db-sync-fdrpp\" (UID: \"f5f5df03-a29f-4805-b750-8d360d832019\") " pod="openstack/glance-db-sync-fdrpp"
Jan 22 09:20:23 crc kubenswrapper[4811]: I0122 09:20:23.273420 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5f5df03-a29f-4805-b750-8d360d832019-config-data\") pod \"glance-db-sync-fdrpp\" (UID: \"f5f5df03-a29f-4805-b750-8d360d832019\") " pod="openstack/glance-db-sync-fdrpp"
Jan 22 09:20:23 crc kubenswrapper[4811]: I0122 09:20:23.374446 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5f5df03-a29f-4805-b750-8d360d832019-combined-ca-bundle\") pod \"glance-db-sync-fdrpp\" (UID: \"f5f5df03-a29f-4805-b750-8d360d832019\") " pod="openstack/glance-db-sync-fdrpp"
Jan 22 09:20:23 crc kubenswrapper[4811]: I0122 09:20:23.375129 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hgn2\" (UniqueName: \"kubernetes.io/projected/f5f5df03-a29f-4805-b750-8d360d832019-kube-api-access-4hgn2\") pod \"glance-db-sync-fdrpp\" (UID: \"f5f5df03-a29f-4805-b750-8d360d832019\") " pod="openstack/glance-db-sync-fdrpp"
Jan 22 09:20:23 crc kubenswrapper[4811]: I0122 09:20:23.375261 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f5f5df03-a29f-4805-b750-8d360d832019-db-sync-config-data\") pod \"glance-db-sync-fdrpp\" (UID: \"f5f5df03-a29f-4805-b750-8d360d832019\") " pod="openstack/glance-db-sync-fdrpp"
Jan 22 09:20:23 crc kubenswrapper[4811]: I0122 09:20:23.375339 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5f5df03-a29f-4805-b750-8d360d832019-config-data\") pod \"glance-db-sync-fdrpp\" (UID: \"f5f5df03-a29f-4805-b750-8d360d832019\") " pod="openstack/glance-db-sync-fdrpp"
Jan 22 09:20:23 crc kubenswrapper[4811]: I0122 09:20:23.378141 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5f5df03-a29f-4805-b750-8d360d832019-combined-ca-bundle\") pod \"glance-db-sync-fdrpp\" (UID: \"f5f5df03-a29f-4805-b750-8d360d832019\") " pod="openstack/glance-db-sync-fdrpp"
Jan 22 09:20:23 crc kubenswrapper[4811]: I0122 09:20:23.378557 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5f5df03-a29f-4805-b750-8d360d832019-config-data\") pod \"glance-db-sync-fdrpp\" (UID: \"f5f5df03-a29f-4805-b750-8d360d832019\") " pod="openstack/glance-db-sync-fdrpp"
Jan 22 09:20:23 crc kubenswrapper[4811]: I0122 09:20:23.378914 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f5f5df03-a29f-4805-b750-8d360d832019-db-sync-config-data\") pod \"glance-db-sync-fdrpp\" (UID: \"f5f5df03-a29f-4805-b750-8d360d832019\") " pod="openstack/glance-db-sync-fdrpp"
Jan 22 09:20:23 crc kubenswrapper[4811]: I0122 09:20:23.401726 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hgn2\" (UniqueName: \"kubernetes.io/projected/f5f5df03-a29f-4805-b750-8d360d832019-kube-api-access-4hgn2\") pod \"glance-db-sync-fdrpp\" (UID: \"f5f5df03-a29f-4805-b750-8d360d832019\") " pod="openstack/glance-db-sync-fdrpp"
Jan 22 09:20:23 crc kubenswrapper[4811]: I0122 09:20:23.564973 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-fdrpp"
Jan 22 09:20:24 crc kubenswrapper[4811]: I0122 09:20:24.000868 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-fdrpp"]
Jan 22 09:20:24 crc kubenswrapper[4811]: W0122 09:20:24.011729 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5f5df03_a29f_4805_b750_8d360d832019.slice/crio-ca2841beb6d29b7e6f3b0ddd4eb6211069d30b9c8e8f60e76710a0d36d486e27 WatchSource:0}: Error finding container ca2841beb6d29b7e6f3b0ddd4eb6211069d30b9c8e8f60e76710a0d36d486e27: Status 404 returned error can't find the container with id ca2841beb6d29b7e6f3b0ddd4eb6211069d30b9c8e8f60e76710a0d36d486e27
Jan 22 09:20:24 crc kubenswrapper[4811]: I0122 09:20:24.176198 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-fdrpp" event={"ID":"f5f5df03-a29f-4805-b750-8d360d832019","Type":"ContainerStarted","Data":"ca2841beb6d29b7e6f3b0ddd4eb6211069d30b9c8e8f60e76710a0d36d486e27"}
Jan 22 09:20:25 crc kubenswrapper[4811]: I0122 09:20:25.989063 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-92flm"]
Jan 22 09:20:26 crc kubenswrapper[4811]: I0122 09:20:26.000535 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-92flm"]
Jan 22 09:20:27 crc kubenswrapper[4811]: I0122 09:20:27.628829 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-794868bd45-25t27"
Jan 22 09:20:27 crc kubenswrapper[4811]: I0122 09:20:27.998813 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="811bb52f-41ad-4b91-8b9e-3af290e2d893" path="/var/lib/kubelet/pods/811bb52f-41ad-4b91-8b9e-3af290e2d893/volumes"
Jan 22 09:20:28 crc kubenswrapper[4811]: I0122 09:20:28.009742 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757dc6fff9-bllfc"
Jan 22 09:20:28 crc kubenswrapper[4811]: I0122 09:20:28.063911 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-794868bd45-25t27"]
Jan 22 09:20:28 crc kubenswrapper[4811]: I0122 09:20:28.197960 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-794868bd45-25t27" podUID="dddbae5c-b9c5-44f9-9236-28c880f47a53" containerName="dnsmasq-dns" containerID="cri-o://d81b9f2fef54fe2bd25b4df332ea94177e0fdd42ef71f795e973cc340c95a2e6" gracePeriod=10
Jan 22 09:20:28 crc kubenswrapper[4811]: I0122 09:20:28.552258 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-794868bd45-25t27"
Jan 22 09:20:28 crc kubenswrapper[4811]: I0122 09:20:28.746509 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dddbae5c-b9c5-44f9-9236-28c880f47a53-config\") pod \"dddbae5c-b9c5-44f9-9236-28c880f47a53\" (UID: \"dddbae5c-b9c5-44f9-9236-28c880f47a53\") "
Jan 22 09:20:28 crc kubenswrapper[4811]: I0122 09:20:28.746550 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dddbae5c-b9c5-44f9-9236-28c880f47a53-dns-svc\") pod \"dddbae5c-b9c5-44f9-9236-28c880f47a53\" (UID: \"dddbae5c-b9c5-44f9-9236-28c880f47a53\") "
Jan 22 09:20:28 crc kubenswrapper[4811]: I0122 09:20:28.746745 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dddbae5c-b9c5-44f9-9236-28c880f47a53-ovsdbserver-sb\") pod \"dddbae5c-b9c5-44f9-9236-28c880f47a53\" (UID: \"dddbae5c-b9c5-44f9-9236-28c880f47a53\") "
Jan 22 09:20:28 crc kubenswrapper[4811]: I0122 09:20:28.746783 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fx8w\" (UniqueName: \"kubernetes.io/projected/dddbae5c-b9c5-44f9-9236-28c880f47a53-kube-api-access-7fx8w\") pod \"dddbae5c-b9c5-44f9-9236-28c880f47a53\" (UID: \"dddbae5c-b9c5-44f9-9236-28c880f47a53\") "
Jan 22 09:20:28 crc kubenswrapper[4811]: I0122 09:20:28.757981 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dddbae5c-b9c5-44f9-9236-28c880f47a53-kube-api-access-7fx8w" (OuterVolumeSpecName: "kube-api-access-7fx8w") pod "dddbae5c-b9c5-44f9-9236-28c880f47a53" (UID: "dddbae5c-b9c5-44f9-9236-28c880f47a53"). InnerVolumeSpecName "kube-api-access-7fx8w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:20:28 crc kubenswrapper[4811]: I0122 09:20:28.782072 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dddbae5c-b9c5-44f9-9236-28c880f47a53-config" (OuterVolumeSpecName: "config") pod "dddbae5c-b9c5-44f9-9236-28c880f47a53" (UID: "dddbae5c-b9c5-44f9-9236-28c880f47a53"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:20:28 crc kubenswrapper[4811]: I0122 09:20:28.784853 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dddbae5c-b9c5-44f9-9236-28c880f47a53-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dddbae5c-b9c5-44f9-9236-28c880f47a53" (UID: "dddbae5c-b9c5-44f9-9236-28c880f47a53"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:20:28 crc kubenswrapper[4811]: I0122 09:20:28.788837 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dddbae5c-b9c5-44f9-9236-28c880f47a53-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dddbae5c-b9c5-44f9-9236-28c880f47a53" (UID: "dddbae5c-b9c5-44f9-9236-28c880f47a53"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:20:28 crc kubenswrapper[4811]: I0122 09:20:28.848359 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dddbae5c-b9c5-44f9-9236-28c880f47a53-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 22 09:20:28 crc kubenswrapper[4811]: I0122 09:20:28.848395 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fx8w\" (UniqueName: \"kubernetes.io/projected/dddbae5c-b9c5-44f9-9236-28c880f47a53-kube-api-access-7fx8w\") on node \"crc\" DevicePath \"\""
Jan 22 09:20:28 crc kubenswrapper[4811]: I0122 09:20:28.848410 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dddbae5c-b9c5-44f9-9236-28c880f47a53-config\") on node \"crc\" DevicePath \"\""
Jan 22 09:20:28 crc kubenswrapper[4811]: I0122 09:20:28.848423 4811 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dddbae5c-b9c5-44f9-9236-28c880f47a53-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 22 09:20:29 crc kubenswrapper[4811]: I0122 09:20:29.066598 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Jan 22 09:20:29 crc kubenswrapper[4811]: I0122 09:20:29.224393 4811 generic.go:334] "Generic (PLEG): container finished" podID="dddbae5c-b9c5-44f9-9236-28c880f47a53" containerID="d81b9f2fef54fe2bd25b4df332ea94177e0fdd42ef71f795e973cc340c95a2e6" exitCode=0
Jan 22 09:20:29 crc kubenswrapper[4811]: I0122 09:20:29.224449 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-794868bd45-25t27" event={"ID":"dddbae5c-b9c5-44f9-9236-28c880f47a53","Type":"ContainerDied","Data":"d81b9f2fef54fe2bd25b4df332ea94177e0fdd42ef71f795e973cc340c95a2e6"}
Jan 22 09:20:29 crc kubenswrapper[4811]: I0122 09:20:29.224481 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-794868bd45-25t27" event={"ID":"dddbae5c-b9c5-44f9-9236-28c880f47a53","Type":"ContainerDied","Data":"7482770ac8e0d5f8d72305d3321b49efb5c14c4e1c99c7b61815d4d616ca0a85"}
Jan 22 09:20:29 crc kubenswrapper[4811]: I0122 09:20:29.224511 4811 scope.go:117] "RemoveContainer" containerID="d81b9f2fef54fe2bd25b4df332ea94177e0fdd42ef71f795e973cc340c95a2e6"
Jan 22 09:20:29 crc kubenswrapper[4811]: I0122 09:20:29.224735 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-794868bd45-25t27"
Jan 22 09:20:29 crc kubenswrapper[4811]: I0122 09:20:29.247489 4811 scope.go:117] "RemoveContainer" containerID="f6f20be424f0f3347d07429c96d1073ae8709f06ea75f5401359a199cf8f04a4"
Jan 22 09:20:29 crc kubenswrapper[4811]: I0122 09:20:29.260890 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-794868bd45-25t27"]
Jan 22 09:20:29 crc kubenswrapper[4811]: I0122 09:20:29.270000 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-794868bd45-25t27"]
Jan 22 09:20:29 crc kubenswrapper[4811]: I0122 09:20:29.284103 4811 scope.go:117] "RemoveContainer" containerID="d81b9f2fef54fe2bd25b4df332ea94177e0fdd42ef71f795e973cc340c95a2e6"
Jan 22 09:20:29 crc kubenswrapper[4811]: E0122 09:20:29.285517 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d81b9f2fef54fe2bd25b4df332ea94177e0fdd42ef71f795e973cc340c95a2e6\": container with ID starting with d81b9f2fef54fe2bd25b4df332ea94177e0fdd42ef71f795e973cc340c95a2e6 not found: ID does not exist" containerID="d81b9f2fef54fe2bd25b4df332ea94177e0fdd42ef71f795e973cc340c95a2e6"
Jan 22 09:20:29 crc kubenswrapper[4811]: I0122 09:20:29.285570 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d81b9f2fef54fe2bd25b4df332ea94177e0fdd42ef71f795e973cc340c95a2e6"} err="failed to get container status \"d81b9f2fef54fe2bd25b4df332ea94177e0fdd42ef71f795e973cc340c95a2e6\": rpc error: code = NotFound desc = could not find container \"d81b9f2fef54fe2bd25b4df332ea94177e0fdd42ef71f795e973cc340c95a2e6\": container with ID starting with d81b9f2fef54fe2bd25b4df332ea94177e0fdd42ef71f795e973cc340c95a2e6 not found: ID does not exist"
Jan 22 09:20:29 crc kubenswrapper[4811]: I0122 09:20:29.285592 4811 scope.go:117] "RemoveContainer" containerID="f6f20be424f0f3347d07429c96d1073ae8709f06ea75f5401359a199cf8f04a4"
Jan 22 09:20:29 crc kubenswrapper[4811]: E0122 09:20:29.286096 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6f20be424f0f3347d07429c96d1073ae8709f06ea75f5401359a199cf8f04a4\": container with ID starting with f6f20be424f0f3347d07429c96d1073ae8709f06ea75f5401359a199cf8f04a4 not found: ID does not exist" containerID="f6f20be424f0f3347d07429c96d1073ae8709f06ea75f5401359a199cf8f04a4"
Jan 22 09:20:29 crc kubenswrapper[4811]: I0122 09:20:29.286129 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6f20be424f0f3347d07429c96d1073ae8709f06ea75f5401359a199cf8f04a4"} err="failed to get container status \"f6f20be424f0f3347d07429c96d1073ae8709f06ea75f5401359a199cf8f04a4\": rpc error: code = NotFound desc = could not find container \"f6f20be424f0f3347d07429c96d1073ae8709f06ea75f5401359a199cf8f04a4\": container with ID starting with f6f20be424f0f3347d07429c96d1073ae8709f06ea75f5401359a199cf8f04a4 not found: ID does not exist"
Jan 22 09:20:29 crc kubenswrapper[4811]: I0122 09:20:29.998683 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dddbae5c-b9c5-44f9-9236-28c880f47a53" path="/var/lib/kubelet/pods/dddbae5c-b9c5-44f9-9236-28c880f47a53/volumes"
Jan 22 09:20:30 crc kubenswrapper[4811]: I0122 09:20:30.993647 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-hk9q6"]
Jan 22 09:20:30 crc kubenswrapper[4811]: E0122 09:20:30.994229 4811 cpu_manager.go:410]
"RemoveStaleState: removing container" podUID="dddbae5c-b9c5-44f9-9236-28c880f47a53" containerName="dnsmasq-dns" Jan 22 09:20:30 crc kubenswrapper[4811]: I0122 09:20:30.994250 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="dddbae5c-b9c5-44f9-9236-28c880f47a53" containerName="dnsmasq-dns" Jan 22 09:20:30 crc kubenswrapper[4811]: E0122 09:20:30.994262 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dddbae5c-b9c5-44f9-9236-28c880f47a53" containerName="init" Jan 22 09:20:30 crc kubenswrapper[4811]: I0122 09:20:30.994268 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="dddbae5c-b9c5-44f9-9236-28c880f47a53" containerName="init" Jan 22 09:20:30 crc kubenswrapper[4811]: I0122 09:20:30.994411 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="dddbae5c-b9c5-44f9-9236-28c880f47a53" containerName="dnsmasq-dns" Jan 22 09:20:30 crc kubenswrapper[4811]: I0122 09:20:30.994967 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hk9q6" Jan 22 09:20:30 crc kubenswrapper[4811]: I0122 09:20:30.996858 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 22 09:20:31 crc kubenswrapper[4811]: I0122 09:20:31.001517 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hk9q6"] Jan 22 09:20:31 crc kubenswrapper[4811]: I0122 09:20:31.086815 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5njjs\" (UniqueName: \"kubernetes.io/projected/76cb01bc-a79f-48c0-af4b-5afe92f49e25-kube-api-access-5njjs\") pod \"root-account-create-update-hk9q6\" (UID: \"76cb01bc-a79f-48c0-af4b-5afe92f49e25\") " pod="openstack/root-account-create-update-hk9q6" Jan 22 09:20:31 crc kubenswrapper[4811]: I0122 09:20:31.087010 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76cb01bc-a79f-48c0-af4b-5afe92f49e25-operator-scripts\") pod \"root-account-create-update-hk9q6\" (UID: \"76cb01bc-a79f-48c0-af4b-5afe92f49e25\") " pod="openstack/root-account-create-update-hk9q6" Jan 22 09:20:31 crc kubenswrapper[4811]: I0122 09:20:31.188018 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5njjs\" (UniqueName: \"kubernetes.io/projected/76cb01bc-a79f-48c0-af4b-5afe92f49e25-kube-api-access-5njjs\") pod \"root-account-create-update-hk9q6\" (UID: \"76cb01bc-a79f-48c0-af4b-5afe92f49e25\") " pod="openstack/root-account-create-update-hk9q6" Jan 22 09:20:31 crc kubenswrapper[4811]: I0122 09:20:31.188104 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76cb01bc-a79f-48c0-af4b-5afe92f49e25-operator-scripts\") pod \"root-account-create-update-hk9q6\" (UID: \"76cb01bc-a79f-48c0-af4b-5afe92f49e25\") " pod="openstack/root-account-create-update-hk9q6" Jan 22 09:20:31 crc kubenswrapper[4811]: I0122 09:20:31.188874 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76cb01bc-a79f-48c0-af4b-5afe92f49e25-operator-scripts\") pod \"root-account-create-update-hk9q6\" (UID: \"76cb01bc-a79f-48c0-af4b-5afe92f49e25\") " pod="openstack/root-account-create-update-hk9q6" Jan 22 09:20:31 crc kubenswrapper[4811]: I0122 09:20:31.214693 4811 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5njjs\" (UniqueName: \"kubernetes.io/projected/76cb01bc-a79f-48c0-af4b-5afe92f49e25-kube-api-access-5njjs\") pod \"root-account-create-update-hk9q6\" (UID: \"76cb01bc-a79f-48c0-af4b-5afe92f49e25\") " pod="openstack/root-account-create-update-hk9q6" Jan 22 09:20:31 crc kubenswrapper[4811]: I0122 09:20:31.317352 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hk9q6" Jan 22 09:20:34 crc kubenswrapper[4811]: I0122 09:20:34.254583 4811 generic.go:334] "Generic (PLEG): container finished" podID="0241fc5c-fa26-44a1-9db3-006b438b9123" containerID="7a0909ca18cf916fd8bc5743542561fad1746c91028910b1139cffb297ae2099" exitCode=0 Jan 22 09:20:34 crc kubenswrapper[4811]: I0122 09:20:34.254659 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0241fc5c-fa26-44a1-9db3-006b438b9123","Type":"ContainerDied","Data":"7a0909ca18cf916fd8bc5743542561fad1746c91028910b1139cffb297ae2099"} Jan 22 09:20:34 crc kubenswrapper[4811]: I0122 09:20:34.257312 4811 generic.go:334] "Generic (PLEG): container finished" podID="9e61948b-1761-46b8-9ab3-e776224f335a" containerID="1e0f56dd8bb31f9c39584ee804cbf93abc9a85a0b2b1fa91f78ccc97ea695f74" exitCode=0 Jan 22 09:20:34 crc kubenswrapper[4811]: I0122 09:20:34.257358 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9e61948b-1761-46b8-9ab3-e776224f335a","Type":"ContainerDied","Data":"1e0f56dd8bb31f9c39584ee804cbf93abc9a85a0b2b1fa91f78ccc97ea695f74"} Jan 22 09:20:35 crc kubenswrapper[4811]: I0122 09:20:35.263338 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0241fc5c-fa26-44a1-9db3-006b438b9123","Type":"ContainerStarted","Data":"5a60a75c8f726053514360ec57baed7b88c720089fe931ad5add6184105ab8ff"} Jan 22 09:20:35 crc kubenswrapper[4811]: I0122 09:20:35.264578 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:20:35 crc kubenswrapper[4811]: I0122 09:20:35.276906 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9e61948b-1761-46b8-9ab3-e776224f335a","Type":"ContainerStarted","Data":"1acf2d1b1ace65c9ae049c114455199e1f10221371de344fd552a4aeabf05033"} Jan 22 09:20:35 crc kubenswrapper[4811]: I0122 09:20:35.277434 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 22 09:20:35 crc kubenswrapper[4811]: I0122 09:20:35.289785 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.627165833 podStartE2EDuration="57.289773777s" podCreationTimestamp="2026-01-22 09:19:38 +0000 UTC" firstStartedPulling="2026-01-22 09:19:40.532095247 +0000 UTC m=+824.854282370" lastFinishedPulling="2026-01-22 09:20:01.194703191 +0000 UTC m=+845.516890314" observedRunningTime="2026-01-22 09:20:35.287262332 +0000 UTC m=+879.609449455" watchObservedRunningTime="2026-01-22 09:20:35.289773777 +0000 UTC m=+879.611960900" Jan 22 09:20:35 crc kubenswrapper[4811]: I0122 09:20:35.383963 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.592960354 podStartE2EDuration="57.383949556s" podCreationTimestamp="2026-01-22 09:19:38 +0000 UTC" firstStartedPulling="2026-01-22 
09:19:40.314854975 +0000 UTC m=+824.637042099" lastFinishedPulling="2026-01-22 09:20:01.105844178 +0000 UTC m=+845.428031301" observedRunningTime="2026-01-22 09:20:35.325439325 +0000 UTC m=+879.647626458" watchObservedRunningTime="2026-01-22 09:20:35.383949556 +0000 UTC m=+879.706136678" Jan 22 09:20:35 crc kubenswrapper[4811]: I0122 09:20:35.385538 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hk9q6"] Jan 22 09:20:35 crc kubenswrapper[4811]: I0122 09:20:35.501885 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:20:35 crc kubenswrapper[4811]: I0122 09:20:35.501927 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:20:36 crc kubenswrapper[4811]: I0122 09:20:36.283883 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-fdrpp" event={"ID":"f5f5df03-a29f-4805-b750-8d360d832019","Type":"ContainerStarted","Data":"4be8abd3a99dce7c6c000fba5dc42ad42a131229c1bb98105d1f982c5c6a5b5d"} Jan 22 09:20:36 crc kubenswrapper[4811]: I0122 09:20:36.285222 4811 generic.go:334] "Generic (PLEG): container finished" podID="76cb01bc-a79f-48c0-af4b-5afe92f49e25" containerID="3b72e5a01d881c149a6269ce25589b078a76fc12389e3e6c7e79fe4b114e34be" exitCode=0 Jan 22 09:20:36 crc kubenswrapper[4811]: I0122 09:20:36.285302 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hk9q6" event={"ID":"76cb01bc-a79f-48c0-af4b-5afe92f49e25","Type":"ContainerDied","Data":"3b72e5a01d881c149a6269ce25589b078a76fc12389e3e6c7e79fe4b114e34be"} Jan 22 09:20:36 crc kubenswrapper[4811]: I0122 09:20:36.285344 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hk9q6" event={"ID":"76cb01bc-a79f-48c0-af4b-5afe92f49e25","Type":"ContainerStarted","Data":"ad7eab2ece82fc0647c45e78d10e8649e2c9b422a0d9cf3f29c6fcdde2dc7109"} Jan 22 09:20:36 crc kubenswrapper[4811]: I0122 09:20:36.313028 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-fdrpp" podStartSLOduration=2.30402895 podStartE2EDuration="13.313003663s" podCreationTimestamp="2026-01-22 09:20:23 +0000 UTC" firstStartedPulling="2026-01-22 09:20:24.012825972 +0000 UTC m=+868.335013095" lastFinishedPulling="2026-01-22 09:20:35.021800685 +0000 UTC m=+879.343987808" observedRunningTime="2026-01-22 09:20:36.308649805 +0000 UTC m=+880.630836928" watchObservedRunningTime="2026-01-22 09:20:36.313003663 +0000 UTC m=+880.635190786" Jan 22 09:20:37 crc kubenswrapper[4811]: I0122 09:20:37.978117 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hk9q6" Jan 22 09:20:38 crc kubenswrapper[4811]: I0122 09:20:38.076285 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76cb01bc-a79f-48c0-af4b-5afe92f49e25-operator-scripts\") pod \"76cb01bc-a79f-48c0-af4b-5afe92f49e25\" (UID: \"76cb01bc-a79f-48c0-af4b-5afe92f49e25\") " Jan 22 09:20:38 crc kubenswrapper[4811]: I0122 09:20:38.076339 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5njjs\" (UniqueName: \"kubernetes.io/projected/76cb01bc-a79f-48c0-af4b-5afe92f49e25-kube-api-access-5njjs\") pod \"76cb01bc-a79f-48c0-af4b-5afe92f49e25\" (UID: \"76cb01bc-a79f-48c0-af4b-5afe92f49e25\") " Jan 22 09:20:38 crc kubenswrapper[4811]: I0122 09:20:38.077237 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76cb01bc-a79f-48c0-af4b-5afe92f49e25-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "76cb01bc-a79f-48c0-af4b-5afe92f49e25" (UID: "76cb01bc-a79f-48c0-af4b-5afe92f49e25"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:20:38 crc kubenswrapper[4811]: I0122 09:20:38.094760 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76cb01bc-a79f-48c0-af4b-5afe92f49e25-kube-api-access-5njjs" (OuterVolumeSpecName: "kube-api-access-5njjs") pod "76cb01bc-a79f-48c0-af4b-5afe92f49e25" (UID: "76cb01bc-a79f-48c0-af4b-5afe92f49e25"). InnerVolumeSpecName "kube-api-access-5njjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:20:38 crc kubenswrapper[4811]: I0122 09:20:38.178414 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76cb01bc-a79f-48c0-af4b-5afe92f49e25-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:38 crc kubenswrapper[4811]: I0122 09:20:38.178439 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5njjs\" (UniqueName: \"kubernetes.io/projected/76cb01bc-a79f-48c0-af4b-5afe92f49e25-kube-api-access-5njjs\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:38 crc kubenswrapper[4811]: I0122 09:20:38.297220 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hk9q6" event={"ID":"76cb01bc-a79f-48c0-af4b-5afe92f49e25","Type":"ContainerDied","Data":"ad7eab2ece82fc0647c45e78d10e8649e2c9b422a0d9cf3f29c6fcdde2dc7109"} Jan 22 09:20:38 crc kubenswrapper[4811]: I0122 09:20:38.297252 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hk9q6" Jan 22 09:20:38 crc kubenswrapper[4811]: I0122 09:20:38.297255 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad7eab2ece82fc0647c45e78d10e8649e2c9b422a0d9cf3f29c6fcdde2dc7109" Jan 22 09:20:38 crc kubenswrapper[4811]: I0122 09:20:38.582803 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-nm2bb" podUID="29754ede-0901-4bbd-aa87-49a8e93050b9" containerName="ovn-controller" probeResult="failure" output=< Jan 22 09:20:38 crc kubenswrapper[4811]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 22 09:20:38 crc kubenswrapper[4811]: > Jan 22 09:20:38 crc kubenswrapper[4811]: I0122 09:20:38.596158 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-kdqzr" Jan 22 09:20:38 crc kubenswrapper[4811]: I0122 09:20:38.604991 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-kdqzr" Jan 22 09:20:38 crc kubenswrapper[4811]: I0122 09:20:38.788369 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-nm2bb-config-5twdz"] Jan 22 09:20:38 crc kubenswrapper[4811]: E0122 09:20:38.788662 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76cb01bc-a79f-48c0-af4b-5afe92f49e25" containerName="mariadb-account-create-update" Jan 22 09:20:38 crc kubenswrapper[4811]: I0122 09:20:38.788679 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="76cb01bc-a79f-48c0-af4b-5afe92f49e25" containerName="mariadb-account-create-update" Jan 22 09:20:38 crc kubenswrapper[4811]: I0122 09:20:38.788850 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="76cb01bc-a79f-48c0-af4b-5afe92f49e25" containerName="mariadb-account-create-update" Jan 22 09:20:38 crc kubenswrapper[4811]: I0122 09:20:38.789291 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-nm2bb-config-5twdz" Jan 22 09:20:38 crc kubenswrapper[4811]: I0122 09:20:38.793740 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 22 09:20:38 crc kubenswrapper[4811]: I0122 09:20:38.806805 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nm2bb-config-5twdz"] Jan 22 09:20:38 crc kubenswrapper[4811]: I0122 09:20:38.887193 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b380b37d-8c10-4d50-b3b6-099ebdbbbb76-scripts\") pod \"ovn-controller-nm2bb-config-5twdz\" (UID: \"b380b37d-8c10-4d50-b3b6-099ebdbbbb76\") " pod="openstack/ovn-controller-nm2bb-config-5twdz" Jan 22 09:20:38 crc kubenswrapper[4811]: I0122 09:20:38.887288 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b380b37d-8c10-4d50-b3b6-099ebdbbbb76-var-run\") pod \"ovn-controller-nm2bb-config-5twdz\" (UID: \"b380b37d-8c10-4d50-b3b6-099ebdbbbb76\") " pod="openstack/ovn-controller-nm2bb-config-5twdz" Jan 22 09:20:38 crc kubenswrapper[4811]: I0122 09:20:38.887327 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b380b37d-8c10-4d50-b3b6-099ebdbbbb76-var-log-ovn\") pod \"ovn-controller-nm2bb-config-5twdz\" (UID: \"b380b37d-8c10-4d50-b3b6-099ebdbbbb76\") " pod="openstack/ovn-controller-nm2bb-config-5twdz" Jan 22 09:20:38 crc kubenswrapper[4811]: I0122 09:20:38.887408 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b380b37d-8c10-4d50-b3b6-099ebdbbbb76-additional-scripts\") pod \"ovn-controller-nm2bb-config-5twdz\" (UID: \"b380b37d-8c10-4d50-b3b6-099ebdbbbb76\") " pod="openstack/ovn-controller-nm2bb-config-5twdz" Jan 22 09:20:38 crc kubenswrapper[4811]: I0122 09:20:38.887468 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsmx9\" (UniqueName: \"kubernetes.io/projected/b380b37d-8c10-4d50-b3b6-099ebdbbbb76-kube-api-access-hsmx9\") pod \"ovn-controller-nm2bb-config-5twdz\" (UID: \"b380b37d-8c10-4d50-b3b6-099ebdbbbb76\") " pod="openstack/ovn-controller-nm2bb-config-5twdz" Jan 22 09:20:38 crc kubenswrapper[4811]: I0122 09:20:38.887584 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b380b37d-8c10-4d50-b3b6-099ebdbbbb76-var-run-ovn\") pod \"ovn-controller-nm2bb-config-5twdz\" (UID: \"b380b37d-8c10-4d50-b3b6-099ebdbbbb76\") " pod="openstack/ovn-controller-nm2bb-config-5twdz" Jan 22 09:20:38 crc kubenswrapper[4811]: I0122 09:20:38.989052 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b380b37d-8c10-4d50-b3b6-099ebdbbbb76-var-run-ovn\") pod \"ovn-controller-nm2bb-config-5twdz\" (UID: \"b380b37d-8c10-4d50-b3b6-099ebdbbbb76\") " pod="openstack/ovn-controller-nm2bb-config-5twdz" Jan 22 09:20:38 crc kubenswrapper[4811]: I0122 09:20:38.989556 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b380b37d-8c10-4d50-b3b6-099ebdbbbb76-scripts\") pod 
\"ovn-controller-nm2bb-config-5twdz\" (UID: \"b380b37d-8c10-4d50-b3b6-099ebdbbbb76\") " pod="openstack/ovn-controller-nm2bb-config-5twdz" Jan 22 09:20:38 crc kubenswrapper[4811]: I0122 09:20:38.989705 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b380b37d-8c10-4d50-b3b6-099ebdbbbb76-var-run\") pod \"ovn-controller-nm2bb-config-5twdz\" (UID: \"b380b37d-8c10-4d50-b3b6-099ebdbbbb76\") " pod="openstack/ovn-controller-nm2bb-config-5twdz" Jan 22 09:20:38 crc kubenswrapper[4811]: I0122 09:20:38.989798 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b380b37d-8c10-4d50-b3b6-099ebdbbbb76-var-log-ovn\") pod \"ovn-controller-nm2bb-config-5twdz\" (UID: \"b380b37d-8c10-4d50-b3b6-099ebdbbbb76\") " pod="openstack/ovn-controller-nm2bb-config-5twdz" Jan 22 09:20:38 crc kubenswrapper[4811]: I0122 09:20:38.989871 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b380b37d-8c10-4d50-b3b6-099ebdbbbb76-var-log-ovn\") pod \"ovn-controller-nm2bb-config-5twdz\" (UID: \"b380b37d-8c10-4d50-b3b6-099ebdbbbb76\") " pod="openstack/ovn-controller-nm2bb-config-5twdz" Jan 22 09:20:38 crc kubenswrapper[4811]: I0122 09:20:38.989317 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b380b37d-8c10-4d50-b3b6-099ebdbbbb76-var-run-ovn\") pod \"ovn-controller-nm2bb-config-5twdz\" (UID: \"b380b37d-8c10-4d50-b3b6-099ebdbbbb76\") " pod="openstack/ovn-controller-nm2bb-config-5twdz" Jan 22 09:20:38 crc kubenswrapper[4811]: I0122 09:20:38.989833 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b380b37d-8c10-4d50-b3b6-099ebdbbbb76-var-run\") pod \"ovn-controller-nm2bb-config-5twdz\" (UID: \"b380b37d-8c10-4d50-b3b6-099ebdbbbb76\") " pod="openstack/ovn-controller-nm2bb-config-5twdz" Jan 22 09:20:38 crc kubenswrapper[4811]: I0122 09:20:38.990014 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b380b37d-8c10-4d50-b3b6-099ebdbbbb76-additional-scripts\") pod \"ovn-controller-nm2bb-config-5twdz\" (UID: \"b380b37d-8c10-4d50-b3b6-099ebdbbbb76\") " pod="openstack/ovn-controller-nm2bb-config-5twdz" Jan 22 09:20:38 crc kubenswrapper[4811]: I0122 09:20:38.990106 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsmx9\" (UniqueName: \"kubernetes.io/projected/b380b37d-8c10-4d50-b3b6-099ebdbbbb76-kube-api-access-hsmx9\") pod \"ovn-controller-nm2bb-config-5twdz\" (UID: \"b380b37d-8c10-4d50-b3b6-099ebdbbbb76\") " pod="openstack/ovn-controller-nm2bb-config-5twdz" Jan 22 09:20:38 crc kubenswrapper[4811]: I0122 09:20:38.990488 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b380b37d-8c10-4d50-b3b6-099ebdbbbb76-additional-scripts\") pod \"ovn-controller-nm2bb-config-5twdz\" (UID: \"b380b37d-8c10-4d50-b3b6-099ebdbbbb76\") " pod="openstack/ovn-controller-nm2bb-config-5twdz" Jan 22 09:20:38 crc kubenswrapper[4811]: I0122 09:20:38.991357 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b380b37d-8c10-4d50-b3b6-099ebdbbbb76-scripts\") pod 
\"ovn-controller-nm2bb-config-5twdz\" (UID: \"b380b37d-8c10-4d50-b3b6-099ebdbbbb76\") " pod="openstack/ovn-controller-nm2bb-config-5twdz" Jan 22 09:20:39 crc kubenswrapper[4811]: I0122 09:20:39.009507 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsmx9\" (UniqueName: \"kubernetes.io/projected/b380b37d-8c10-4d50-b3b6-099ebdbbbb76-kube-api-access-hsmx9\") pod \"ovn-controller-nm2bb-config-5twdz\" (UID: \"b380b37d-8c10-4d50-b3b6-099ebdbbbb76\") " pod="openstack/ovn-controller-nm2bb-config-5twdz" Jan 22 09:20:39 crc kubenswrapper[4811]: I0122 09:20:39.100600 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nm2bb-config-5twdz" Jan 22 09:20:39 crc kubenswrapper[4811]: W0122 09:20:39.491365 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb380b37d_8c10_4d50_b3b6_099ebdbbbb76.slice/crio-bb114a311f3d4fa9e43b64400bdf990fedd8e47a80d9523e568fa78b22aae6ad WatchSource:0}: Error finding container bb114a311f3d4fa9e43b64400bdf990fedd8e47a80d9523e568fa78b22aae6ad: Status 404 returned error can't find the container with id bb114a311f3d4fa9e43b64400bdf990fedd8e47a80d9523e568fa78b22aae6ad Jan 22 09:20:39 crc kubenswrapper[4811]: I0122 09:20:39.498320 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nm2bb-config-5twdz"] Jan 22 09:20:40 crc kubenswrapper[4811]: I0122 09:20:40.310221 4811 generic.go:334] "Generic (PLEG): container finished" podID="b380b37d-8c10-4d50-b3b6-099ebdbbbb76" containerID="b8b4ae49469256e4e15f97123889ff58959f51a1cb7dafd5a390c822d6f9e387" exitCode=0 Jan 22 09:20:40 crc kubenswrapper[4811]: I0122 09:20:40.310687 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nm2bb-config-5twdz" event={"ID":"b380b37d-8c10-4d50-b3b6-099ebdbbbb76","Type":"ContainerDied","Data":"b8b4ae49469256e4e15f97123889ff58959f51a1cb7dafd5a390c822d6f9e387"} Jan 22 09:20:40 crc kubenswrapper[4811]: I0122 09:20:40.310713 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nm2bb-config-5twdz" event={"ID":"b380b37d-8c10-4d50-b3b6-099ebdbbbb76","Type":"ContainerStarted","Data":"bb114a311f3d4fa9e43b64400bdf990fedd8e47a80d9523e568fa78b22aae6ad"} Jan 22 09:20:40 crc kubenswrapper[4811]: I0122 09:20:40.311930 4811 generic.go:334] "Generic (PLEG): container finished" podID="f5f5df03-a29f-4805-b750-8d360d832019" containerID="4be8abd3a99dce7c6c000fba5dc42ad42a131229c1bb98105d1f982c5c6a5b5d" exitCode=0 Jan 22 09:20:40 crc kubenswrapper[4811]: I0122 09:20:40.311957 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-fdrpp" event={"ID":"f5f5df03-a29f-4805-b750-8d360d832019","Type":"ContainerDied","Data":"4be8abd3a99dce7c6c000fba5dc42ad42a131229c1bb98105d1f982c5c6a5b5d"} Jan 22 09:20:41 crc kubenswrapper[4811]: I0122 09:20:41.609540 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nm2bb-config-5twdz" Jan 22 09:20:41 crc kubenswrapper[4811]: I0122 09:20:41.703914 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-fdrpp" Jan 22 09:20:41 crc kubenswrapper[4811]: I0122 09:20:41.732455 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b380b37d-8c10-4d50-b3b6-099ebdbbbb76-var-log-ovn\") pod \"b380b37d-8c10-4d50-b3b6-099ebdbbbb76\" (UID: \"b380b37d-8c10-4d50-b3b6-099ebdbbbb76\") " Jan 22 09:20:41 crc kubenswrapper[4811]: I0122 09:20:41.732484 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b380b37d-8c10-4d50-b3b6-099ebdbbbb76-var-run\") pod \"b380b37d-8c10-4d50-b3b6-099ebdbbbb76\" (UID: \"b380b37d-8c10-4d50-b3b6-099ebdbbbb76\") " Jan 22 09:20:41 crc kubenswrapper[4811]: I0122 09:20:41.732563 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b380b37d-8c10-4d50-b3b6-099ebdbbbb76-scripts\") pod \"b380b37d-8c10-4d50-b3b6-099ebdbbbb76\" (UID: \"b380b37d-8c10-4d50-b3b6-099ebdbbbb76\") " Jan 22 09:20:41 crc kubenswrapper[4811]: I0122 09:20:41.732667 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsmx9\" (UniqueName: \"kubernetes.io/projected/b380b37d-8c10-4d50-b3b6-099ebdbbbb76-kube-api-access-hsmx9\") pod \"b380b37d-8c10-4d50-b3b6-099ebdbbbb76\" (UID: \"b380b37d-8c10-4d50-b3b6-099ebdbbbb76\") " Jan 22 09:20:41 crc kubenswrapper[4811]: I0122 09:20:41.732707 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b380b37d-8c10-4d50-b3b6-099ebdbbbb76-var-run-ovn\") pod \"b380b37d-8c10-4d50-b3b6-099ebdbbbb76\" (UID: \"b380b37d-8c10-4d50-b3b6-099ebdbbbb76\") " Jan 22 09:20:41 crc kubenswrapper[4811]: I0122 09:20:41.732779 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b380b37d-8c10-4d50-b3b6-099ebdbbbb76-additional-scripts\") pod \"b380b37d-8c10-4d50-b3b6-099ebdbbbb76\" (UID: \"b380b37d-8c10-4d50-b3b6-099ebdbbbb76\") " Jan 22 09:20:41 crc kubenswrapper[4811]: I0122 09:20:41.734284 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b380b37d-8c10-4d50-b3b6-099ebdbbbb76-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "b380b37d-8c10-4d50-b3b6-099ebdbbbb76" (UID: "b380b37d-8c10-4d50-b3b6-099ebdbbbb76"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:20:41 crc kubenswrapper[4811]: I0122 09:20:41.734319 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b380b37d-8c10-4d50-b3b6-099ebdbbbb76-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "b380b37d-8c10-4d50-b3b6-099ebdbbbb76" (UID: "b380b37d-8c10-4d50-b3b6-099ebdbbbb76"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:20:41 crc kubenswrapper[4811]: I0122 09:20:41.734336 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b380b37d-8c10-4d50-b3b6-099ebdbbbb76-var-run" (OuterVolumeSpecName: "var-run") pod "b380b37d-8c10-4d50-b3b6-099ebdbbbb76" (UID: "b380b37d-8c10-4d50-b3b6-099ebdbbbb76"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:20:41 crc kubenswrapper[4811]: I0122 09:20:41.734976 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b380b37d-8c10-4d50-b3b6-099ebdbbbb76-scripts" (OuterVolumeSpecName: "scripts") pod "b380b37d-8c10-4d50-b3b6-099ebdbbbb76" (UID: "b380b37d-8c10-4d50-b3b6-099ebdbbbb76"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:20:41 crc kubenswrapper[4811]: I0122 09:20:41.735441 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b380b37d-8c10-4d50-b3b6-099ebdbbbb76-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "b380b37d-8c10-4d50-b3b6-099ebdbbbb76" (UID: "b380b37d-8c10-4d50-b3b6-099ebdbbbb76"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:20:41 crc kubenswrapper[4811]: I0122 09:20:41.739396 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b380b37d-8c10-4d50-b3b6-099ebdbbbb76-kube-api-access-hsmx9" (OuterVolumeSpecName: "kube-api-access-hsmx9") pod "b380b37d-8c10-4d50-b3b6-099ebdbbbb76" (UID: "b380b37d-8c10-4d50-b3b6-099ebdbbbb76"). InnerVolumeSpecName "kube-api-access-hsmx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:20:41 crc kubenswrapper[4811]: I0122 09:20:41.835028 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f5f5df03-a29f-4805-b750-8d360d832019-db-sync-config-data\") pod \"f5f5df03-a29f-4805-b750-8d360d832019\" (UID: \"f5f5df03-a29f-4805-b750-8d360d832019\") " Jan 22 09:20:41 crc kubenswrapper[4811]: I0122 09:20:41.835305 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5f5df03-a29f-4805-b750-8d360d832019-config-data\") pod \"f5f5df03-a29f-4805-b750-8d360d832019\" (UID: \"f5f5df03-a29f-4805-b750-8d360d832019\") " Jan 22 09:20:41 crc kubenswrapper[4811]: I0122 09:20:41.835373 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5f5df03-a29f-4805-b750-8d360d832019-combined-ca-bundle\") pod \"f5f5df03-a29f-4805-b750-8d360d832019\" (UID: \"f5f5df03-a29f-4805-b750-8d360d832019\") " Jan 22 09:20:41 crc kubenswrapper[4811]: I0122 09:20:41.835402 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hgn2\" (UniqueName: \"kubernetes.io/projected/f5f5df03-a29f-4805-b750-8d360d832019-kube-api-access-4hgn2\") pod \"f5f5df03-a29f-4805-b750-8d360d832019\" (UID: \"f5f5df03-a29f-4805-b750-8d360d832019\") " Jan 22 09:20:41 crc kubenswrapper[4811]: I0122 09:20:41.835843 4811 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b380b37d-8c10-4d50-b3b6-099ebdbbbb76-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:41 crc kubenswrapper[4811]: I0122 09:20:41.835858 4811 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b380b37d-8c10-4d50-b3b6-099ebdbbbb76-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:41 crc kubenswrapper[4811]: I0122 09:20:41.835868 4811 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/b380b37d-8c10-4d50-b3b6-099ebdbbbb76-var-run\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:41 crc kubenswrapper[4811]: I0122 09:20:41.835876 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b380b37d-8c10-4d50-b3b6-099ebdbbbb76-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:41 crc kubenswrapper[4811]: I0122 09:20:41.835893 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsmx9\" (UniqueName: \"kubernetes.io/projected/b380b37d-8c10-4d50-b3b6-099ebdbbbb76-kube-api-access-hsmx9\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:41 crc kubenswrapper[4811]: I0122 09:20:41.835901 4811 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b380b37d-8c10-4d50-b3b6-099ebdbbbb76-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:41 crc kubenswrapper[4811]: I0122 09:20:41.837693 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5f5df03-a29f-4805-b750-8d360d832019-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f5f5df03-a29f-4805-b750-8d360d832019" (UID: "f5f5df03-a29f-4805-b750-8d360d832019"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:20:41 crc kubenswrapper[4811]: I0122 09:20:41.838178 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5f5df03-a29f-4805-b750-8d360d832019-kube-api-access-4hgn2" (OuterVolumeSpecName: "kube-api-access-4hgn2") pod "f5f5df03-a29f-4805-b750-8d360d832019" (UID: "f5f5df03-a29f-4805-b750-8d360d832019"). InnerVolumeSpecName "kube-api-access-4hgn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:20:41 crc kubenswrapper[4811]: I0122 09:20:41.851832 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5f5df03-a29f-4805-b750-8d360d832019-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f5f5df03-a29f-4805-b750-8d360d832019" (UID: "f5f5df03-a29f-4805-b750-8d360d832019"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:20:41 crc kubenswrapper[4811]: I0122 09:20:41.865310 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5f5df03-a29f-4805-b750-8d360d832019-config-data" (OuterVolumeSpecName: "config-data") pod "f5f5df03-a29f-4805-b750-8d360d832019" (UID: "f5f5df03-a29f-4805-b750-8d360d832019"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:20:41 crc kubenswrapper[4811]: I0122 09:20:41.937193 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5f5df03-a29f-4805-b750-8d360d832019-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:41 crc kubenswrapper[4811]: I0122 09:20:41.937219 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5f5df03-a29f-4805-b750-8d360d832019-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:41 crc kubenswrapper[4811]: I0122 09:20:41.937230 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hgn2\" (UniqueName: \"kubernetes.io/projected/f5f5df03-a29f-4805-b750-8d360d832019-kube-api-access-4hgn2\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:41 crc kubenswrapper[4811]: I0122 09:20:41.937239 4811 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f5f5df03-a29f-4805-b750-8d360d832019-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:42 crc kubenswrapper[4811]: I0122 09:20:42.323566 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-fdrpp" event={"ID":"f5f5df03-a29f-4805-b750-8d360d832019","Type":"ContainerDied","Data":"ca2841beb6d29b7e6f3b0ddd4eb6211069d30b9c8e8f60e76710a0d36d486e27"} Jan 22 09:20:42 crc kubenswrapper[4811]: I0122 09:20:42.323790 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca2841beb6d29b7e6f3b0ddd4eb6211069d30b9c8e8f60e76710a0d36d486e27" Jan 22 09:20:42 crc kubenswrapper[4811]: I0122 09:20:42.323584 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-fdrpp" Jan 22 09:20:42 crc kubenswrapper[4811]: I0122 09:20:42.326726 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nm2bb-config-5twdz" event={"ID":"b380b37d-8c10-4d50-b3b6-099ebdbbbb76","Type":"ContainerDied","Data":"bb114a311f3d4fa9e43b64400bdf990fedd8e47a80d9523e568fa78b22aae6ad"} Jan 22 09:20:42 crc kubenswrapper[4811]: I0122 09:20:42.326751 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb114a311f3d4fa9e43b64400bdf990fedd8e47a80d9523e568fa78b22aae6ad" Jan 22 09:20:42 crc kubenswrapper[4811]: I0122 09:20:42.326781 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-nm2bb-config-5twdz" Jan 22 09:20:42 crc kubenswrapper[4811]: I0122 09:20:42.604069 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57768dd7b5-pqcms"] Jan 22 09:20:42 crc kubenswrapper[4811]: E0122 09:20:42.604327 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b380b37d-8c10-4d50-b3b6-099ebdbbbb76" containerName="ovn-config" Jan 22 09:20:42 crc kubenswrapper[4811]: I0122 09:20:42.604345 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="b380b37d-8c10-4d50-b3b6-099ebdbbbb76" containerName="ovn-config" Jan 22 09:20:42 crc kubenswrapper[4811]: E0122 09:20:42.604361 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5f5df03-a29f-4805-b750-8d360d832019" containerName="glance-db-sync" Jan 22 09:20:42 crc kubenswrapper[4811]: I0122 09:20:42.604369 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5f5df03-a29f-4805-b750-8d360d832019" containerName="glance-db-sync" Jan 22 09:20:42 crc kubenswrapper[4811]: I0122 09:20:42.604522 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="b380b37d-8c10-4d50-b3b6-099ebdbbbb76" containerName="ovn-config" Jan 22 09:20:42 crc kubenswrapper[4811]: I0122 09:20:42.604545 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5f5df03-a29f-4805-b750-8d360d832019" containerName="glance-db-sync" Jan 22 09:20:42 crc kubenswrapper[4811]: I0122 09:20:42.605218 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57768dd7b5-pqcms" Jan 22 09:20:42 crc kubenswrapper[4811]: I0122 09:20:42.619776 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57768dd7b5-pqcms"] Jan 22 09:20:42 crc kubenswrapper[4811]: I0122 09:20:42.729857 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-nm2bb-config-5twdz"] Jan 22 09:20:42 crc kubenswrapper[4811]: I0122 09:20:42.734540 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-nm2bb-config-5twdz"] Jan 22 09:20:42 crc kubenswrapper[4811]: I0122 09:20:42.747695 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2-dns-svc\") pod \"dnsmasq-dns-57768dd7b5-pqcms\" (UID: \"3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2\") " pod="openstack/dnsmasq-dns-57768dd7b5-pqcms" Jan 22 09:20:42 crc kubenswrapper[4811]: I0122 09:20:42.747733 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2-config\") pod \"dnsmasq-dns-57768dd7b5-pqcms\" (UID: \"3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2\") " pod="openstack/dnsmasq-dns-57768dd7b5-pqcms" Jan 22 09:20:42 crc kubenswrapper[4811]: I0122 09:20:42.747829 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp2cl\" (UniqueName: \"kubernetes.io/projected/3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2-kube-api-access-sp2cl\") pod \"dnsmasq-dns-57768dd7b5-pqcms\" (UID: \"3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2\") " pod="openstack/dnsmasq-dns-57768dd7b5-pqcms" Jan 22 09:20:42 crc kubenswrapper[4811]: I0122 09:20:42.747918 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2-ovsdbserver-sb\") pod \"dnsmasq-dns-57768dd7b5-pqcms\" (UID: \"3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2\") " pod="openstack/dnsmasq-dns-57768dd7b5-pqcms" Jan 22 09:20:42 crc kubenswrapper[4811]: I0122 09:20:42.747957 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2-ovsdbserver-nb\") pod \"dnsmasq-dns-57768dd7b5-pqcms\" (UID: \"3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2\") " pod="openstack/dnsmasq-dns-57768dd7b5-pqcms" Jan 22 09:20:42 crc kubenswrapper[4811]: I0122 09:20:42.777049 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-nm2bb-config-5s5hl"] Jan 22 09:20:42 crc kubenswrapper[4811]: I0122 09:20:42.777818 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nm2bb-config-5s5hl" Jan 22 09:20:42 crc kubenswrapper[4811]: I0122 09:20:42.779322 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 22 09:20:42 crc kubenswrapper[4811]: I0122 09:20:42.792150 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nm2bb-config-5s5hl"] Jan 22 09:20:42 crc kubenswrapper[4811]: I0122 09:20:42.849496 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2-dns-svc\") pod \"dnsmasq-dns-57768dd7b5-pqcms\" (UID: \"3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2\") " pod="openstack/dnsmasq-dns-57768dd7b5-pqcms" Jan 22 09:20:42 crc kubenswrapper[4811]: I0122 09:20:42.849546 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2-config\") pod \"dnsmasq-dns-57768dd7b5-pqcms\" (UID: \"3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2\") " pod="openstack/dnsmasq-dns-57768dd7b5-pqcms" Jan 22 09:20:42 crc kubenswrapper[4811]: I0122 09:20:42.849665 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp2cl\" (UniqueName: \"kubernetes.io/projected/3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2-kube-api-access-sp2cl\") pod \"dnsmasq-dns-57768dd7b5-pqcms\" (UID: \"3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2\") " pod="openstack/dnsmasq-dns-57768dd7b5-pqcms" Jan 22 09:20:42 crc kubenswrapper[4811]: I0122 09:20:42.849759 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2-ovsdbserver-sb\") pod \"dnsmasq-dns-57768dd7b5-pqcms\" (UID: \"3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2\") " pod="openstack/dnsmasq-dns-57768dd7b5-pqcms" Jan 22 09:20:42 crc kubenswrapper[4811]: I0122 09:20:42.849794 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2-ovsdbserver-nb\") pod \"dnsmasq-dns-57768dd7b5-pqcms\" (UID: \"3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2\") " pod="openstack/dnsmasq-dns-57768dd7b5-pqcms" Jan 22 09:20:42 crc kubenswrapper[4811]: I0122 09:20:42.850438 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2-dns-svc\") pod \"dnsmasq-dns-57768dd7b5-pqcms\" (UID: 
\"3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2\") " pod="openstack/dnsmasq-dns-57768dd7b5-pqcms" Jan 22 09:20:42 crc kubenswrapper[4811]: I0122 09:20:42.850461 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2-config\") pod \"dnsmasq-dns-57768dd7b5-pqcms\" (UID: \"3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2\") " pod="openstack/dnsmasq-dns-57768dd7b5-pqcms" Jan 22 09:20:42 crc kubenswrapper[4811]: I0122 09:20:42.850488 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2-ovsdbserver-nb\") pod \"dnsmasq-dns-57768dd7b5-pqcms\" (UID: \"3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2\") " pod="openstack/dnsmasq-dns-57768dd7b5-pqcms" Jan 22 09:20:42 crc kubenswrapper[4811]: I0122 09:20:42.850778 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2-ovsdbserver-sb\") pod \"dnsmasq-dns-57768dd7b5-pqcms\" (UID: \"3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2\") " pod="openstack/dnsmasq-dns-57768dd7b5-pqcms" Jan 22 09:20:42 crc kubenswrapper[4811]: I0122 09:20:42.869374 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp2cl\" (UniqueName: \"kubernetes.io/projected/3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2-kube-api-access-sp2cl\") pod \"dnsmasq-dns-57768dd7b5-pqcms\" (UID: \"3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2\") " pod="openstack/dnsmasq-dns-57768dd7b5-pqcms" Jan 22 09:20:42 crc kubenswrapper[4811]: I0122 09:20:42.917331 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57768dd7b5-pqcms" Jan 22 09:20:42 crc kubenswrapper[4811]: I0122 09:20:42.950558 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjnzg\" (UniqueName: \"kubernetes.io/projected/d35653bd-1323-42f3-8c97-23b32257ab0d-kube-api-access-sjnzg\") pod \"ovn-controller-nm2bb-config-5s5hl\" (UID: \"d35653bd-1323-42f3-8c97-23b32257ab0d\") " pod="openstack/ovn-controller-nm2bb-config-5s5hl" Jan 22 09:20:42 crc kubenswrapper[4811]: I0122 09:20:42.950676 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d35653bd-1323-42f3-8c97-23b32257ab0d-var-run\") pod \"ovn-controller-nm2bb-config-5s5hl\" (UID: \"d35653bd-1323-42f3-8c97-23b32257ab0d\") " pod="openstack/ovn-controller-nm2bb-config-5s5hl" Jan 22 09:20:42 crc kubenswrapper[4811]: I0122 09:20:42.950702 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d35653bd-1323-42f3-8c97-23b32257ab0d-var-run-ovn\") pod \"ovn-controller-nm2bb-config-5s5hl\" (UID: \"d35653bd-1323-42f3-8c97-23b32257ab0d\") " pod="openstack/ovn-controller-nm2bb-config-5s5hl" Jan 22 09:20:42 crc kubenswrapper[4811]: I0122 09:20:42.950727 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d35653bd-1323-42f3-8c97-23b32257ab0d-var-log-ovn\") pod \"ovn-controller-nm2bb-config-5s5hl\" (UID: \"d35653bd-1323-42f3-8c97-23b32257ab0d\") " pod="openstack/ovn-controller-nm2bb-config-5s5hl" Jan 22 09:20:42 crc kubenswrapper[4811]: I0122 09:20:42.950744 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d35653bd-1323-42f3-8c97-23b32257ab0d-scripts\") pod \"ovn-controller-nm2bb-config-5s5hl\" (UID: \"d35653bd-1323-42f3-8c97-23b32257ab0d\") " pod="openstack/ovn-controller-nm2bb-config-5s5hl" Jan 22 09:20:42 crc kubenswrapper[4811]: I0122 09:20:42.950785 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d35653bd-1323-42f3-8c97-23b32257ab0d-additional-scripts\") pod \"ovn-controller-nm2bb-config-5s5hl\" (UID: \"d35653bd-1323-42f3-8c97-23b32257ab0d\") " pod="openstack/ovn-controller-nm2bb-config-5s5hl" Jan 22 09:20:43 crc kubenswrapper[4811]: I0122 09:20:43.052335 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjnzg\" (UniqueName: \"kubernetes.io/projected/d35653bd-1323-42f3-8c97-23b32257ab0d-kube-api-access-sjnzg\") pod \"ovn-controller-nm2bb-config-5s5hl\" (UID: \"d35653bd-1323-42f3-8c97-23b32257ab0d\") " pod="openstack/ovn-controller-nm2bb-config-5s5hl" Jan 22 09:20:43 crc kubenswrapper[4811]: I0122 09:20:43.052452 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d35653bd-1323-42f3-8c97-23b32257ab0d-var-run\") pod \"ovn-controller-nm2bb-config-5s5hl\" (UID: \"d35653bd-1323-42f3-8c97-23b32257ab0d\") " pod="openstack/ovn-controller-nm2bb-config-5s5hl" Jan 22 09:20:43 crc kubenswrapper[4811]: I0122 09:20:43.052470 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d35653bd-1323-42f3-8c97-23b32257ab0d-var-run-ovn\") pod \"ovn-controller-nm2bb-config-5s5hl\" (UID: \"d35653bd-1323-42f3-8c97-23b32257ab0d\") " pod="openstack/ovn-controller-nm2bb-config-5s5hl" Jan 22 09:20:43 crc kubenswrapper[4811]: I0122 09:20:43.052498 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d35653bd-1323-42f3-8c97-23b32257ab0d-var-log-ovn\") pod \"ovn-controller-nm2bb-config-5s5hl\" (UID: \"d35653bd-1323-42f3-8c97-23b32257ab0d\") " pod="openstack/ovn-controller-nm2bb-config-5s5hl" Jan 22 09:20:43 crc kubenswrapper[4811]: I0122 09:20:43.052517 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d35653bd-1323-42f3-8c97-23b32257ab0d-scripts\") pod \"ovn-controller-nm2bb-config-5s5hl\" (UID: \"d35653bd-1323-42f3-8c97-23b32257ab0d\") " pod="openstack/ovn-controller-nm2bb-config-5s5hl" Jan 22 09:20:43 crc kubenswrapper[4811]: I0122 09:20:43.052557 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d35653bd-1323-42f3-8c97-23b32257ab0d-additional-scripts\") pod \"ovn-controller-nm2bb-config-5s5hl\" (UID: \"d35653bd-1323-42f3-8c97-23b32257ab0d\") " pod="openstack/ovn-controller-nm2bb-config-5s5hl" Jan 22 09:20:43 crc kubenswrapper[4811]: I0122 09:20:43.052828 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d35653bd-1323-42f3-8c97-23b32257ab0d-var-run-ovn\") pod \"ovn-controller-nm2bb-config-5s5hl\" (UID: \"d35653bd-1323-42f3-8c97-23b32257ab0d\") " pod="openstack/ovn-controller-nm2bb-config-5s5hl" Jan 22 09:20:43 crc kubenswrapper[4811]: I0122 
09:20:43.052900 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d35653bd-1323-42f3-8c97-23b32257ab0d-var-run\") pod \"ovn-controller-nm2bb-config-5s5hl\" (UID: \"d35653bd-1323-42f3-8c97-23b32257ab0d\") " pod="openstack/ovn-controller-nm2bb-config-5s5hl" Jan 22 09:20:43 crc kubenswrapper[4811]: I0122 09:20:43.052939 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d35653bd-1323-42f3-8c97-23b32257ab0d-var-log-ovn\") pod \"ovn-controller-nm2bb-config-5s5hl\" (UID: \"d35653bd-1323-42f3-8c97-23b32257ab0d\") " pod="openstack/ovn-controller-nm2bb-config-5s5hl" Jan 22 09:20:43 crc kubenswrapper[4811]: I0122 09:20:43.053194 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d35653bd-1323-42f3-8c97-23b32257ab0d-additional-scripts\") pod \"ovn-controller-nm2bb-config-5s5hl\" (UID: \"d35653bd-1323-42f3-8c97-23b32257ab0d\") " pod="openstack/ovn-controller-nm2bb-config-5s5hl" Jan 22 09:20:43 crc kubenswrapper[4811]: I0122 09:20:43.054651 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d35653bd-1323-42f3-8c97-23b32257ab0d-scripts\") pod \"ovn-controller-nm2bb-config-5s5hl\" (UID: \"d35653bd-1323-42f3-8c97-23b32257ab0d\") " pod="openstack/ovn-controller-nm2bb-config-5s5hl" Jan 22 09:20:43 crc kubenswrapper[4811]: I0122 09:20:43.099939 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjnzg\" (UniqueName: \"kubernetes.io/projected/d35653bd-1323-42f3-8c97-23b32257ab0d-kube-api-access-sjnzg\") pod \"ovn-controller-nm2bb-config-5s5hl\" (UID: \"d35653bd-1323-42f3-8c97-23b32257ab0d\") " pod="openstack/ovn-controller-nm2bb-config-5s5hl" Jan 22 09:20:43 crc kubenswrapper[4811]: I0122 09:20:43.283878 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57768dd7b5-pqcms"] Jan 22 09:20:43 crc kubenswrapper[4811]: W0122 09:20:43.285106 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c7ecfd5_ce1b_487a_a11e_82f13e1fb1e2.slice/crio-02a473aa9b3ccc58e68aee51e2af249215847f7af96b6b095c0ef26fe78e289b WatchSource:0}: Error finding container 02a473aa9b3ccc58e68aee51e2af249215847f7af96b6b095c0ef26fe78e289b: Status 404 returned error can't find the container with id 02a473aa9b3ccc58e68aee51e2af249215847f7af96b6b095c0ef26fe78e289b Jan 22 09:20:43 crc kubenswrapper[4811]: I0122 09:20:43.337682 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57768dd7b5-pqcms" event={"ID":"3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2","Type":"ContainerStarted","Data":"02a473aa9b3ccc58e68aee51e2af249215847f7af96b6b095c0ef26fe78e289b"} Jan 22 09:20:43 crc kubenswrapper[4811]: I0122 09:20:43.390466 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-nm2bb-config-5s5hl" Jan 22 09:20:43 crc kubenswrapper[4811]: I0122 09:20:43.579191 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-nm2bb" Jan 22 09:20:43 crc kubenswrapper[4811]: I0122 09:20:43.773399 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nm2bb-config-5s5hl"] Jan 22 09:20:43 crc kubenswrapper[4811]: I0122 09:20:43.999135 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b380b37d-8c10-4d50-b3b6-099ebdbbbb76" path="/var/lib/kubelet/pods/b380b37d-8c10-4d50-b3b6-099ebdbbbb76/volumes" Jan 22 09:20:44 crc kubenswrapper[4811]: I0122 09:20:44.344311 4811 generic.go:334] "Generic (PLEG): container finished" podID="3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2" containerID="189453dde239ba0028f7d9ee028b4f1313c9e267186c2ea4045d88727c29b923" exitCode=0 Jan 22 09:20:44 crc kubenswrapper[4811]: I0122 09:20:44.344353 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57768dd7b5-pqcms" event={"ID":"3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2","Type":"ContainerDied","Data":"189453dde239ba0028f7d9ee028b4f1313c9e267186c2ea4045d88727c29b923"} Jan 22 09:20:44 crc kubenswrapper[4811]: I0122 09:20:44.346815 4811 generic.go:334] "Generic (PLEG): container finished" podID="d35653bd-1323-42f3-8c97-23b32257ab0d" containerID="57dd7a3eed668b10480c157954b8aebb31ce6fa4182520fa00d1f992f9ba1086" exitCode=0 Jan 22 09:20:44 crc kubenswrapper[4811]: I0122 09:20:44.346844 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nm2bb-config-5s5hl" event={"ID":"d35653bd-1323-42f3-8c97-23b32257ab0d","Type":"ContainerDied","Data":"57dd7a3eed668b10480c157954b8aebb31ce6fa4182520fa00d1f992f9ba1086"} Jan 22 09:20:44 crc kubenswrapper[4811]: I0122 09:20:44.346862 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nm2bb-config-5s5hl" event={"ID":"d35653bd-1323-42f3-8c97-23b32257ab0d","Type":"ContainerStarted","Data":"89b758bbe2ec40cbf869ae92ffb88cb89d7ec44eded7b82703e2fbd261fffbe2"} Jan 22 09:20:45 crc kubenswrapper[4811]: I0122 09:20:45.355552 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57768dd7b5-pqcms" event={"ID":"3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2","Type":"ContainerStarted","Data":"f91f41c59e3ce5c3c4b0b61b117f817be9ba096ecfdab23959d3ef3fa95d5302"} Jan 22 09:20:45 crc kubenswrapper[4811]: I0122 09:20:45.370102 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57768dd7b5-pqcms" podStartSLOduration=3.370087744 podStartE2EDuration="3.370087744s" podCreationTimestamp="2026-01-22 09:20:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:20:45.368014975 +0000 UTC m=+889.690202098" watchObservedRunningTime="2026-01-22 09:20:45.370087744 +0000 UTC m=+889.692274866" Jan 22 09:20:45 crc kubenswrapper[4811]: I0122 09:20:45.621182 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-nm2bb-config-5s5hl" Jan 22 09:20:45 crc kubenswrapper[4811]: I0122 09:20:45.699902 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d35653bd-1323-42f3-8c97-23b32257ab0d-var-log-ovn\") pod \"d35653bd-1323-42f3-8c97-23b32257ab0d\" (UID: \"d35653bd-1323-42f3-8c97-23b32257ab0d\") " Jan 22 09:20:45 crc kubenswrapper[4811]: I0122 09:20:45.699959 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d35653bd-1323-42f3-8c97-23b32257ab0d-var-run-ovn\") pod \"d35653bd-1323-42f3-8c97-23b32257ab0d\" (UID: \"d35653bd-1323-42f3-8c97-23b32257ab0d\") " Jan 22 09:20:45 crc kubenswrapper[4811]: I0122 09:20:45.700018 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjnzg\" (UniqueName: \"kubernetes.io/projected/d35653bd-1323-42f3-8c97-23b32257ab0d-kube-api-access-sjnzg\") pod \"d35653bd-1323-42f3-8c97-23b32257ab0d\" (UID: \"d35653bd-1323-42f3-8c97-23b32257ab0d\") " Jan 22 09:20:45 crc kubenswrapper[4811]: I0122 09:20:45.700050 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d35653bd-1323-42f3-8c97-23b32257ab0d-scripts\") pod \"d35653bd-1323-42f3-8c97-23b32257ab0d\" (UID: \"d35653bd-1323-42f3-8c97-23b32257ab0d\") " Jan 22 09:20:45 crc kubenswrapper[4811]: I0122 09:20:45.700137 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d35653bd-1323-42f3-8c97-23b32257ab0d-additional-scripts\") pod \"d35653bd-1323-42f3-8c97-23b32257ab0d\" (UID: \"d35653bd-1323-42f3-8c97-23b32257ab0d\") " Jan 22 09:20:45 crc kubenswrapper[4811]: I0122 09:20:45.700150 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d35653bd-1323-42f3-8c97-23b32257ab0d-var-run\") pod \"d35653bd-1323-42f3-8c97-23b32257ab0d\" (UID: \"d35653bd-1323-42f3-8c97-23b32257ab0d\") " Jan 22 09:20:45 crc kubenswrapper[4811]: I0122 09:20:45.700216 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d35653bd-1323-42f3-8c97-23b32257ab0d-var-run" (OuterVolumeSpecName: "var-run") pod "d35653bd-1323-42f3-8c97-23b32257ab0d" (UID: "d35653bd-1323-42f3-8c97-23b32257ab0d"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:20:45 crc kubenswrapper[4811]: I0122 09:20:45.700246 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d35653bd-1323-42f3-8c97-23b32257ab0d-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "d35653bd-1323-42f3-8c97-23b32257ab0d" (UID: "d35653bd-1323-42f3-8c97-23b32257ab0d"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:20:45 crc kubenswrapper[4811]: I0122 09:20:45.700261 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d35653bd-1323-42f3-8c97-23b32257ab0d-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "d35653bd-1323-42f3-8c97-23b32257ab0d" (UID: "d35653bd-1323-42f3-8c97-23b32257ab0d"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:20:45 crc kubenswrapper[4811]: I0122 09:20:45.701189 4811 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d35653bd-1323-42f3-8c97-23b32257ab0d-var-run\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:45 crc kubenswrapper[4811]: I0122 09:20:45.701208 4811 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d35653bd-1323-42f3-8c97-23b32257ab0d-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:45 crc kubenswrapper[4811]: I0122 09:20:45.701217 4811 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d35653bd-1323-42f3-8c97-23b32257ab0d-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:45 crc kubenswrapper[4811]: I0122 09:20:45.701453 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d35653bd-1323-42f3-8c97-23b32257ab0d-scripts" (OuterVolumeSpecName: "scripts") pod "d35653bd-1323-42f3-8c97-23b32257ab0d" (UID: "d35653bd-1323-42f3-8c97-23b32257ab0d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:20:45 crc kubenswrapper[4811]: I0122 09:20:45.701534 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d35653bd-1323-42f3-8c97-23b32257ab0d-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "d35653bd-1323-42f3-8c97-23b32257ab0d" (UID: "d35653bd-1323-42f3-8c97-23b32257ab0d"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:20:45 crc kubenswrapper[4811]: I0122 09:20:45.715101 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d35653bd-1323-42f3-8c97-23b32257ab0d-kube-api-access-sjnzg" (OuterVolumeSpecName: "kube-api-access-sjnzg") pod "d35653bd-1323-42f3-8c97-23b32257ab0d" (UID: "d35653bd-1323-42f3-8c97-23b32257ab0d"). InnerVolumeSpecName "kube-api-access-sjnzg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:20:45 crc kubenswrapper[4811]: I0122 09:20:45.802457 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjnzg\" (UniqueName: \"kubernetes.io/projected/d35653bd-1323-42f3-8c97-23b32257ab0d-kube-api-access-sjnzg\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:45 crc kubenswrapper[4811]: I0122 09:20:45.802483 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d35653bd-1323-42f3-8c97-23b32257ab0d-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:45 crc kubenswrapper[4811]: I0122 09:20:45.802492 4811 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d35653bd-1323-42f3-8c97-23b32257ab0d-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:46 crc kubenswrapper[4811]: I0122 09:20:46.361618 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-nm2bb-config-5s5hl" Jan 22 09:20:46 crc kubenswrapper[4811]: I0122 09:20:46.361608 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nm2bb-config-5s5hl" event={"ID":"d35653bd-1323-42f3-8c97-23b32257ab0d","Type":"ContainerDied","Data":"89b758bbe2ec40cbf869ae92ffb88cb89d7ec44eded7b82703e2fbd261fffbe2"} Jan 22 09:20:46 crc kubenswrapper[4811]: I0122 09:20:46.362426 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89b758bbe2ec40cbf869ae92ffb88cb89d7ec44eded7b82703e2fbd261fffbe2" Jan 22 09:20:46 crc kubenswrapper[4811]: I0122 09:20:46.362464 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57768dd7b5-pqcms" Jan 22 09:20:46 crc kubenswrapper[4811]: I0122 09:20:46.670516 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-nm2bb-config-5s5hl"] Jan 22 09:20:46 crc kubenswrapper[4811]: I0122 09:20:46.675060 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-nm2bb-config-5s5hl"] Jan 22 09:20:47 crc kubenswrapper[4811]: I0122 09:20:47.998412 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d35653bd-1323-42f3-8c97-23b32257ab0d" path="/var/lib/kubelet/pods/d35653bd-1323-42f3-8c97-23b32257ab0d/volumes" Jan 22 09:20:49 crc kubenswrapper[4811]: I0122 09:20:49.750814 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 22 09:20:49 crc kubenswrapper[4811]: I0122 09:20:49.955587 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-2s8kx"] Jan 22 09:20:49 crc kubenswrapper[4811]: E0122 09:20:49.956027 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d35653bd-1323-42f3-8c97-23b32257ab0d" containerName="ovn-config" Jan 22 09:20:49 crc kubenswrapper[4811]: I0122 09:20:49.956040 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="d35653bd-1323-42f3-8c97-23b32257ab0d" containerName="ovn-config" Jan 22 09:20:49 crc kubenswrapper[4811]: I0122 09:20:49.956170 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="d35653bd-1323-42f3-8c97-23b32257ab0d" containerName="ovn-config" Jan 22 09:20:49 crc kubenswrapper[4811]: I0122 09:20:49.956603 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-2s8kx" Jan 22 09:20:49 crc kubenswrapper[4811]: I0122 09:20:49.973776 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-2s8kx"] Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.058479 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjbtg\" (UniqueName: \"kubernetes.io/projected/1160427d-3b71-44bc-886d-243991fd40e8-kube-api-access-xjbtg\") pod \"cinder-db-create-2s8kx\" (UID: \"1160427d-3b71-44bc-886d-243991fd40e8\") " pod="openstack/cinder-db-create-2s8kx" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.058934 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1160427d-3b71-44bc-886d-243991fd40e8-operator-scripts\") pod \"cinder-db-create-2s8kx\" (UID: \"1160427d-3b71-44bc-886d-243991fd40e8\") " pod="openstack/cinder-db-create-2s8kx" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.062285 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-e374-account-create-update-4m5cv"] Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.068609 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-e374-account-create-update-4m5cv"] Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.068784 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e374-account-create-update-4m5cv" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.070209 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.098776 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.160157 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvdd4\" (UniqueName: \"kubernetes.io/projected/1ea82e95-edcf-4f93-a288-8b4550842a28-kube-api-access-bvdd4\") pod \"cinder-e374-account-create-update-4m5cv\" (UID: \"1ea82e95-edcf-4f93-a288-8b4550842a28\") " pod="openstack/cinder-e374-account-create-update-4m5cv" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.160246 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ea82e95-edcf-4f93-a288-8b4550842a28-operator-scripts\") pod \"cinder-e374-account-create-update-4m5cv\" (UID: \"1ea82e95-edcf-4f93-a288-8b4550842a28\") " pod="openstack/cinder-e374-account-create-update-4m5cv" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.160340 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1160427d-3b71-44bc-886d-243991fd40e8-operator-scripts\") pod \"cinder-db-create-2s8kx\" (UID: \"1160427d-3b71-44bc-886d-243991fd40e8\") " pod="openstack/cinder-db-create-2s8kx" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.160384 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjbtg\" (UniqueName: \"kubernetes.io/projected/1160427d-3b71-44bc-886d-243991fd40e8-kube-api-access-xjbtg\") pod \"cinder-db-create-2s8kx\" (UID: \"1160427d-3b71-44bc-886d-243991fd40e8\") " 
pod="openstack/cinder-db-create-2s8kx" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.161072 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1160427d-3b71-44bc-886d-243991fd40e8-operator-scripts\") pod \"cinder-db-create-2s8kx\" (UID: \"1160427d-3b71-44bc-886d-243991fd40e8\") " pod="openstack/cinder-db-create-2s8kx" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.166943 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-xj5ql"] Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.167796 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-xj5ql" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.178754 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-xj5ql"] Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.216309 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjbtg\" (UniqueName: \"kubernetes.io/projected/1160427d-3b71-44bc-886d-243991fd40e8-kube-api-access-xjbtg\") pod \"cinder-db-create-2s8kx\" (UID: \"1160427d-3b71-44bc-886d-243991fd40e8\") " pod="openstack/cinder-db-create-2s8kx" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.261330 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvvd9\" (UniqueName: \"kubernetes.io/projected/866090ed-1206-4cb4-9b12-2450964dc455-kube-api-access-gvvd9\") pod \"barbican-db-create-xj5ql\" (UID: \"866090ed-1206-4cb4-9b12-2450964dc455\") " pod="openstack/barbican-db-create-xj5ql" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.261392 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvdd4\" (UniqueName: \"kubernetes.io/projected/1ea82e95-edcf-4f93-a288-8b4550842a28-kube-api-access-bvdd4\") pod \"cinder-e374-account-create-update-4m5cv\" (UID: \"1ea82e95-edcf-4f93-a288-8b4550842a28\") " pod="openstack/cinder-e374-account-create-update-4m5cv" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.261523 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ea82e95-edcf-4f93-a288-8b4550842a28-operator-scripts\") pod \"cinder-e374-account-create-update-4m5cv\" (UID: \"1ea82e95-edcf-4f93-a288-8b4550842a28\") " pod="openstack/cinder-e374-account-create-update-4m5cv" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.261615 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/866090ed-1206-4cb4-9b12-2450964dc455-operator-scripts\") pod \"barbican-db-create-xj5ql\" (UID: \"866090ed-1206-4cb4-9b12-2450964dc455\") " pod="openstack/barbican-db-create-xj5ql" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.262158 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ea82e95-edcf-4f93-a288-8b4550842a28-operator-scripts\") pod \"cinder-e374-account-create-update-4m5cv\" (UID: \"1ea82e95-edcf-4f93-a288-8b4550842a28\") " pod="openstack/cinder-e374-account-create-update-4m5cv" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.270954 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-2s8kx" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.279398 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvdd4\" (UniqueName: \"kubernetes.io/projected/1ea82e95-edcf-4f93-a288-8b4550842a28-kube-api-access-bvdd4\") pod \"cinder-e374-account-create-update-4m5cv\" (UID: \"1ea82e95-edcf-4f93-a288-8b4550842a28\") " pod="openstack/cinder-e374-account-create-update-4m5cv" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.365430 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/866090ed-1206-4cb4-9b12-2450964dc455-operator-scripts\") pod \"barbican-db-create-xj5ql\" (UID: \"866090ed-1206-4cb4-9b12-2450964dc455\") " pod="openstack/barbican-db-create-xj5ql" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.365714 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvvd9\" (UniqueName: \"kubernetes.io/projected/866090ed-1206-4cb4-9b12-2450964dc455-kube-api-access-gvvd9\") pod \"barbican-db-create-xj5ql\" (UID: \"866090ed-1206-4cb4-9b12-2450964dc455\") " pod="openstack/barbican-db-create-xj5ql" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.368727 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/866090ed-1206-4cb4-9b12-2450964dc455-operator-scripts\") pod \"barbican-db-create-xj5ql\" (UID: \"866090ed-1206-4cb4-9b12-2450964dc455\") " pod="openstack/barbican-db-create-xj5ql" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.372440 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-82d1-account-create-update-jkspk"] Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.373240 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-82d1-account-create-update-jkspk" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.375987 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.382992 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e374-account-create-update-4m5cv" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.407837 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvvd9\" (UniqueName: \"kubernetes.io/projected/866090ed-1206-4cb4-9b12-2450964dc455-kube-api-access-gvvd9\") pod \"barbican-db-create-xj5ql\" (UID: \"866090ed-1206-4cb4-9b12-2450964dc455\") " pod="openstack/barbican-db-create-xj5ql" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.409059 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-82d1-account-create-update-jkspk"] Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.418907 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-2bx9h"] Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.419815 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-2bx9h" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.421201 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-2bx9h"] Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.428119 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.428285 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.428391 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9nlpl" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.428501 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.466518 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk2k6\" (UniqueName: \"kubernetes.io/projected/19d930c1-23b2-476a-b176-3ca4d8456549-kube-api-access-hk2k6\") pod \"barbican-82d1-account-create-update-jkspk\" (UID: \"19d930c1-23b2-476a-b176-3ca4d8456549\") " pod="openstack/barbican-82d1-account-create-update-jkspk" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.466559 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19d930c1-23b2-476a-b176-3ca4d8456549-operator-scripts\") pod \"barbican-82d1-account-create-update-jkspk\" (UID: \"19d930c1-23b2-476a-b176-3ca4d8456549\") " pod="openstack/barbican-82d1-account-create-update-jkspk" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.486689 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-xj5ql" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.487322 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-jxrlt"] Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.488169 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jxrlt" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.522092 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-jxrlt"] Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.529129 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-891d-account-create-update-5vdr7"] Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.530076 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-891d-account-create-update-5vdr7" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.535770 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.571153 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/986a4686-46f9-4a8c-9c37-0ffadad37084-operator-scripts\") pod \"neutron-db-create-jxrlt\" (UID: \"986a4686-46f9-4a8c-9c37-0ffadad37084\") " pod="openstack/neutron-db-create-jxrlt" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.571252 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk2k6\" (UniqueName: \"kubernetes.io/projected/19d930c1-23b2-476a-b176-3ca4d8456549-kube-api-access-hk2k6\") pod \"barbican-82d1-account-create-update-jkspk\" (UID: \"19d930c1-23b2-476a-b176-3ca4d8456549\") " pod="openstack/barbican-82d1-account-create-update-jkspk" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.571274 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc8xh\" (UniqueName: \"kubernetes.io/projected/dc9be5c8-f749-41fa-9c82-7378a3c84569-kube-api-access-vc8xh\") pod \"keystone-db-sync-2bx9h\" (UID: \"dc9be5c8-f749-41fa-9c82-7378a3c84569\") " pod="openstack/keystone-db-sync-2bx9h" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.571304 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19d930c1-23b2-476a-b176-3ca4d8456549-operator-scripts\") pod \"barbican-82d1-account-create-update-jkspk\" (UID: \"19d930c1-23b2-476a-b176-3ca4d8456549\") " pod="openstack/barbican-82d1-account-create-update-jkspk" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.571348 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc9be5c8-f749-41fa-9c82-7378a3c84569-config-data\") pod \"keystone-db-sync-2bx9h\" (UID: \"dc9be5c8-f749-41fa-9c82-7378a3c84569\") " pod="openstack/keystone-db-sync-2bx9h" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.571399 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc9be5c8-f749-41fa-9c82-7378a3c84569-combined-ca-bundle\") pod \"keystone-db-sync-2bx9h\" (UID: \"dc9be5c8-f749-41fa-9c82-7378a3c84569\") " pod="openstack/keystone-db-sync-2bx9h" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.571431 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzpvw\" (UniqueName: \"kubernetes.io/projected/986a4686-46f9-4a8c-9c37-0ffadad37084-kube-api-access-xzpvw\") pod \"neutron-db-create-jxrlt\" (UID: \"986a4686-46f9-4a8c-9c37-0ffadad37084\") " pod="openstack/neutron-db-create-jxrlt" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.574351 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19d930c1-23b2-476a-b176-3ca4d8456549-operator-scripts\") pod \"barbican-82d1-account-create-update-jkspk\" (UID: \"19d930c1-23b2-476a-b176-3ca4d8456549\") " pod="openstack/barbican-82d1-account-create-update-jkspk" Jan 22 09:20:50 crc 
kubenswrapper[4811]: I0122 09:20:50.586524 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-891d-account-create-update-5vdr7"] Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.595692 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk2k6\" (UniqueName: \"kubernetes.io/projected/19d930c1-23b2-476a-b176-3ca4d8456549-kube-api-access-hk2k6\") pod \"barbican-82d1-account-create-update-jkspk\" (UID: \"19d930c1-23b2-476a-b176-3ca4d8456549\") " pod="openstack/barbican-82d1-account-create-update-jkspk" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.611505 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-2s8kx"] Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.676584 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/986a4686-46f9-4a8c-9c37-0ffadad37084-operator-scripts\") pod \"neutron-db-create-jxrlt\" (UID: \"986a4686-46f9-4a8c-9c37-0ffadad37084\") " pod="openstack/neutron-db-create-jxrlt" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.676841 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9qsm\" (UniqueName: \"kubernetes.io/projected/9736e7ec-af7c-4413-96bd-60e84d89fc5b-kube-api-access-x9qsm\") pod \"neutron-891d-account-create-update-5vdr7\" (UID: \"9736e7ec-af7c-4413-96bd-60e84d89fc5b\") " pod="openstack/neutron-891d-account-create-update-5vdr7" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.676914 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vc8xh\" (UniqueName: \"kubernetes.io/projected/dc9be5c8-f749-41fa-9c82-7378a3c84569-kube-api-access-vc8xh\") pod \"keystone-db-sync-2bx9h\" (UID: \"dc9be5c8-f749-41fa-9c82-7378a3c84569\") " pod="openstack/keystone-db-sync-2bx9h" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.677459 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc9be5c8-f749-41fa-9c82-7378a3c84569-config-data\") pod \"keystone-db-sync-2bx9h\" (UID: \"dc9be5c8-f749-41fa-9c82-7378a3c84569\") " pod="openstack/keystone-db-sync-2bx9h" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.677500 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9736e7ec-af7c-4413-96bd-60e84d89fc5b-operator-scripts\") pod \"neutron-891d-account-create-update-5vdr7\" (UID: \"9736e7ec-af7c-4413-96bd-60e84d89fc5b\") " pod="openstack/neutron-891d-account-create-update-5vdr7" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.677555 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc9be5c8-f749-41fa-9c82-7378a3c84569-combined-ca-bundle\") pod \"keystone-db-sync-2bx9h\" (UID: \"dc9be5c8-f749-41fa-9c82-7378a3c84569\") " pod="openstack/keystone-db-sync-2bx9h" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.677590 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzpvw\" (UniqueName: \"kubernetes.io/projected/986a4686-46f9-4a8c-9c37-0ffadad37084-kube-api-access-xzpvw\") pod \"neutron-db-create-jxrlt\" (UID: \"986a4686-46f9-4a8c-9c37-0ffadad37084\") " pod="openstack/neutron-db-create-jxrlt" Jan 22 09:20:50 crc 
kubenswrapper[4811]: I0122 09:20:50.681333 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/986a4686-46f9-4a8c-9c37-0ffadad37084-operator-scripts\") pod \"neutron-db-create-jxrlt\" (UID: \"986a4686-46f9-4a8c-9c37-0ffadad37084\") " pod="openstack/neutron-db-create-jxrlt" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.688947 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc9be5c8-f749-41fa-9c82-7378a3c84569-config-data\") pod \"keystone-db-sync-2bx9h\" (UID: \"dc9be5c8-f749-41fa-9c82-7378a3c84569\") " pod="openstack/keystone-db-sync-2bx9h" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.704282 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc9be5c8-f749-41fa-9c82-7378a3c84569-combined-ca-bundle\") pod \"keystone-db-sync-2bx9h\" (UID: \"dc9be5c8-f749-41fa-9c82-7378a3c84569\") " pod="openstack/keystone-db-sync-2bx9h" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.711448 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzpvw\" (UniqueName: \"kubernetes.io/projected/986a4686-46f9-4a8c-9c37-0ffadad37084-kube-api-access-xzpvw\") pod \"neutron-db-create-jxrlt\" (UID: \"986a4686-46f9-4a8c-9c37-0ffadad37084\") " pod="openstack/neutron-db-create-jxrlt" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.711697 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vc8xh\" (UniqueName: \"kubernetes.io/projected/dc9be5c8-f749-41fa-9c82-7378a3c84569-kube-api-access-vc8xh\") pod \"keystone-db-sync-2bx9h\" (UID: \"dc9be5c8-f749-41fa-9c82-7378a3c84569\") " pod="openstack/keystone-db-sync-2bx9h" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.770678 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-82d1-account-create-update-jkspk" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.779660 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9736e7ec-af7c-4413-96bd-60e84d89fc5b-operator-scripts\") pod \"neutron-891d-account-create-update-5vdr7\" (UID: \"9736e7ec-af7c-4413-96bd-60e84d89fc5b\") " pod="openstack/neutron-891d-account-create-update-5vdr7" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.779762 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9qsm\" (UniqueName: \"kubernetes.io/projected/9736e7ec-af7c-4413-96bd-60e84d89fc5b-kube-api-access-x9qsm\") pod \"neutron-891d-account-create-update-5vdr7\" (UID: \"9736e7ec-af7c-4413-96bd-60e84d89fc5b\") " pod="openstack/neutron-891d-account-create-update-5vdr7" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.780660 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9736e7ec-af7c-4413-96bd-60e84d89fc5b-operator-scripts\") pod \"neutron-891d-account-create-update-5vdr7\" (UID: \"9736e7ec-af7c-4413-96bd-60e84d89fc5b\") " pod="openstack/neutron-891d-account-create-update-5vdr7" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.798381 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9qsm\" (UniqueName: \"kubernetes.io/projected/9736e7ec-af7c-4413-96bd-60e84d89fc5b-kube-api-access-x9qsm\") pod \"neutron-891d-account-create-update-5vdr7\" (UID: \"9736e7ec-af7c-4413-96bd-60e84d89fc5b\") " pod="openstack/neutron-891d-account-create-update-5vdr7" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.809915 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-2bx9h" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.821188 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jxrlt" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.848525 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-891d-account-create-update-5vdr7" Jan 22 09:20:50 crc kubenswrapper[4811]: I0122 09:20:50.955310 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-e374-account-create-update-4m5cv"] Jan 22 09:20:51 crc kubenswrapper[4811]: I0122 09:20:51.036125 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-xj5ql"] Jan 22 09:20:51 crc kubenswrapper[4811]: I0122 09:20:51.294889 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-2bx9h"] Jan 22 09:20:51 crc kubenswrapper[4811]: W0122 09:20:51.296087 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc9be5c8_f749_41fa_9c82_7378a3c84569.slice/crio-235b7972006e65851999dc6d2b0faaa1d292ea608b09824a0fea15c7be541e82 WatchSource:0}: Error finding container 235b7972006e65851999dc6d2b0faaa1d292ea608b09824a0fea15c7be541e82: Status 404 returned error can't find the container with id 235b7972006e65851999dc6d2b0faaa1d292ea608b09824a0fea15c7be541e82 Jan 22 09:20:51 crc kubenswrapper[4811]: I0122 09:20:51.323528 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-jxrlt"] Jan 22 09:20:51 crc kubenswrapper[4811]: W0122 09:20:51.334039 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod986a4686_46f9_4a8c_9c37_0ffadad37084.slice/crio-9bd658adbdad55ffc3c12107259f76e10190600a56bf259396ecadf1674af2e7 WatchSource:0}: Error finding container 9bd658adbdad55ffc3c12107259f76e10190600a56bf259396ecadf1674af2e7: Status 404 returned error can't find the container with id 9bd658adbdad55ffc3c12107259f76e10190600a56bf259396ecadf1674af2e7 Jan 22 09:20:51 crc kubenswrapper[4811]: I0122 09:20:51.372333 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-82d1-account-create-update-jkspk"] Jan 22 09:20:51 crc kubenswrapper[4811]: I0122 09:20:51.418844 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xj5ql" event={"ID":"866090ed-1206-4cb4-9b12-2450964dc455","Type":"ContainerStarted","Data":"030988d4601fac4c9219bec8ec55b84b6f13957d6e98de4ebc3ecf58ea829e6c"} Jan 22 09:20:51 crc kubenswrapper[4811]: I0122 09:20:51.418879 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xj5ql" event={"ID":"866090ed-1206-4cb4-9b12-2450964dc455","Type":"ContainerStarted","Data":"349740f15c9bde1a4a4a05fcffc9a93decb6ebf6fd2e560dcb0ea87ddfb95c0b"} Jan 22 09:20:51 crc kubenswrapper[4811]: I0122 09:20:51.421863 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-82d1-account-create-update-jkspk" event={"ID":"19d930c1-23b2-476a-b176-3ca4d8456549","Type":"ContainerStarted","Data":"7f95bab22a1a54dba7688fb33e399b3fc81164931f5f359b99b567395cf80f03"} Jan 22 09:20:51 crc kubenswrapper[4811]: I0122 09:20:51.429946 4811 generic.go:334] "Generic (PLEG): container finished" podID="1160427d-3b71-44bc-886d-243991fd40e8" containerID="343f989092dd1ddef9a87795d6d915e224696c34e8340c52cbae291c97aa7d65" exitCode=0 Jan 22 09:20:51 crc kubenswrapper[4811]: I0122 09:20:51.430027 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-2s8kx" event={"ID":"1160427d-3b71-44bc-886d-243991fd40e8","Type":"ContainerDied","Data":"343f989092dd1ddef9a87795d6d915e224696c34e8340c52cbae291c97aa7d65"} Jan 22 09:20:51 crc kubenswrapper[4811]: I0122 
09:20:51.430069 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-2s8kx" event={"ID":"1160427d-3b71-44bc-886d-243991fd40e8","Type":"ContainerStarted","Data":"b3faad56d6496101e0cc039eaf91226196401b929549f48d351651994d07058b"} Jan 22 09:20:51 crc kubenswrapper[4811]: I0122 09:20:51.431452 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e374-account-create-update-4m5cv" event={"ID":"1ea82e95-edcf-4f93-a288-8b4550842a28","Type":"ContainerStarted","Data":"d649498385b63b40e10a1edb943c872649ac07c51b8e6c31a10f533d417c210a"} Jan 22 09:20:51 crc kubenswrapper[4811]: I0122 09:20:51.431474 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e374-account-create-update-4m5cv" event={"ID":"1ea82e95-edcf-4f93-a288-8b4550842a28","Type":"ContainerStarted","Data":"07dceee71bb709cc5b365ad65166eecf91f828648ba4dd05926e3866dcd7d618"} Jan 22 09:20:51 crc kubenswrapper[4811]: I0122 09:20:51.432443 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-2bx9h" event={"ID":"dc9be5c8-f749-41fa-9c82-7378a3c84569","Type":"ContainerStarted","Data":"235b7972006e65851999dc6d2b0faaa1d292ea608b09824a0fea15c7be541e82"} Jan 22 09:20:51 crc kubenswrapper[4811]: I0122 09:20:51.433299 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jxrlt" event={"ID":"986a4686-46f9-4a8c-9c37-0ffadad37084","Type":"ContainerStarted","Data":"9bd658adbdad55ffc3c12107259f76e10190600a56bf259396ecadf1674af2e7"} Jan 22 09:20:51 crc kubenswrapper[4811]: I0122 09:20:51.474133 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-xj5ql" podStartSLOduration=1.474119444 podStartE2EDuration="1.474119444s" podCreationTimestamp="2026-01-22 09:20:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:20:51.471908636 +0000 UTC m=+895.794095759" watchObservedRunningTime="2026-01-22 09:20:51.474119444 +0000 UTC m=+895.796306568" Jan 22 09:20:51 crc kubenswrapper[4811]: W0122 09:20:51.479746 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9736e7ec_af7c_4413_96bd_60e84d89fc5b.slice/crio-9a5b599dd2f0be08f4f0c45dd41f185d44b33cd496524cb586458415e478830e WatchSource:0}: Error finding container 9a5b599dd2f0be08f4f0c45dd41f185d44b33cd496524cb586458415e478830e: Status 404 returned error can't find the container with id 9a5b599dd2f0be08f4f0c45dd41f185d44b33cd496524cb586458415e478830e Jan 22 09:20:51 crc kubenswrapper[4811]: I0122 09:20:51.481282 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-891d-account-create-update-5vdr7"] Jan 22 09:20:51 crc kubenswrapper[4811]: I0122 09:20:51.514984 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-e374-account-create-update-4m5cv" podStartSLOduration=1.514973764 podStartE2EDuration="1.514973764s" podCreationTimestamp="2026-01-22 09:20:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:20:51.498175792 +0000 UTC m=+895.820362905" watchObservedRunningTime="2026-01-22 09:20:51.514973764 +0000 UTC m=+895.837160887" Jan 22 09:20:52 crc kubenswrapper[4811]: I0122 09:20:52.441024 4811 generic.go:334] "Generic (PLEG): container finished" 
podID="986a4686-46f9-4a8c-9c37-0ffadad37084" containerID="b8c1fe97bacc8910b8c8f2a115bec91c76666f47da17c2871ccd063f3c054bf4" exitCode=0 Jan 22 09:20:52 crc kubenswrapper[4811]: I0122 09:20:52.441072 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jxrlt" event={"ID":"986a4686-46f9-4a8c-9c37-0ffadad37084","Type":"ContainerDied","Data":"b8c1fe97bacc8910b8c8f2a115bec91c76666f47da17c2871ccd063f3c054bf4"} Jan 22 09:20:52 crc kubenswrapper[4811]: I0122 09:20:52.443220 4811 generic.go:334] "Generic (PLEG): container finished" podID="866090ed-1206-4cb4-9b12-2450964dc455" containerID="030988d4601fac4c9219bec8ec55b84b6f13957d6e98de4ebc3ecf58ea829e6c" exitCode=0 Jan 22 09:20:52 crc kubenswrapper[4811]: I0122 09:20:52.443276 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xj5ql" event={"ID":"866090ed-1206-4cb4-9b12-2450964dc455","Type":"ContainerDied","Data":"030988d4601fac4c9219bec8ec55b84b6f13957d6e98de4ebc3ecf58ea829e6c"} Jan 22 09:20:52 crc kubenswrapper[4811]: I0122 09:20:52.444251 4811 generic.go:334] "Generic (PLEG): container finished" podID="19d930c1-23b2-476a-b176-3ca4d8456549" containerID="ac99e9408b664fb496d607294fa868381e7508096b2a6e4150bb53561788b8f3" exitCode=0 Jan 22 09:20:52 crc kubenswrapper[4811]: I0122 09:20:52.444302 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-82d1-account-create-update-jkspk" event={"ID":"19d930c1-23b2-476a-b176-3ca4d8456549","Type":"ContainerDied","Data":"ac99e9408b664fb496d607294fa868381e7508096b2a6e4150bb53561788b8f3"} Jan 22 09:20:52 crc kubenswrapper[4811]: I0122 09:20:52.445846 4811 generic.go:334] "Generic (PLEG): container finished" podID="1ea82e95-edcf-4f93-a288-8b4550842a28" containerID="d649498385b63b40e10a1edb943c872649ac07c51b8e6c31a10f533d417c210a" exitCode=0 Jan 22 09:20:52 crc kubenswrapper[4811]: I0122 09:20:52.445882 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e374-account-create-update-4m5cv" event={"ID":"1ea82e95-edcf-4f93-a288-8b4550842a28","Type":"ContainerDied","Data":"d649498385b63b40e10a1edb943c872649ac07c51b8e6c31a10f533d417c210a"} Jan 22 09:20:52 crc kubenswrapper[4811]: I0122 09:20:52.446774 4811 generic.go:334] "Generic (PLEG): container finished" podID="9736e7ec-af7c-4413-96bd-60e84d89fc5b" containerID="bf64d7494082d1084f4ec0c47e05cfb01ebf8bb907a356d093f4bdbadcb0e0d1" exitCode=0 Jan 22 09:20:52 crc kubenswrapper[4811]: I0122 09:20:52.446913 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-891d-account-create-update-5vdr7" event={"ID":"9736e7ec-af7c-4413-96bd-60e84d89fc5b","Type":"ContainerDied","Data":"bf64d7494082d1084f4ec0c47e05cfb01ebf8bb907a356d093f4bdbadcb0e0d1"} Jan 22 09:20:52 crc kubenswrapper[4811]: I0122 09:20:52.446928 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-891d-account-create-update-5vdr7" event={"ID":"9736e7ec-af7c-4413-96bd-60e84d89fc5b","Type":"ContainerStarted","Data":"9a5b599dd2f0be08f4f0c45dd41f185d44b33cd496524cb586458415e478830e"} Jan 22 09:20:52 crc kubenswrapper[4811]: I0122 09:20:52.742740 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-2s8kx" Jan 22 09:20:52 crc kubenswrapper[4811]: I0122 09:20:52.910611 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjbtg\" (UniqueName: \"kubernetes.io/projected/1160427d-3b71-44bc-886d-243991fd40e8-kube-api-access-xjbtg\") pod \"1160427d-3b71-44bc-886d-243991fd40e8\" (UID: \"1160427d-3b71-44bc-886d-243991fd40e8\") " Jan 22 09:20:52 crc kubenswrapper[4811]: I0122 09:20:52.910708 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1160427d-3b71-44bc-886d-243991fd40e8-operator-scripts\") pod \"1160427d-3b71-44bc-886d-243991fd40e8\" (UID: \"1160427d-3b71-44bc-886d-243991fd40e8\") " Jan 22 09:20:52 crc kubenswrapper[4811]: I0122 09:20:52.911724 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1160427d-3b71-44bc-886d-243991fd40e8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1160427d-3b71-44bc-886d-243991fd40e8" (UID: "1160427d-3b71-44bc-886d-243991fd40e8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:20:52 crc kubenswrapper[4811]: I0122 09:20:52.929855 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1160427d-3b71-44bc-886d-243991fd40e8-kube-api-access-xjbtg" (OuterVolumeSpecName: "kube-api-access-xjbtg") pod "1160427d-3b71-44bc-886d-243991fd40e8" (UID: "1160427d-3b71-44bc-886d-243991fd40e8"). InnerVolumeSpecName "kube-api-access-xjbtg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:20:52 crc kubenswrapper[4811]: I0122 09:20:52.930873 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57768dd7b5-pqcms" Jan 22 09:20:52 crc kubenswrapper[4811]: I0122 09:20:52.990583 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-bllfc"] Jan 22 09:20:52 crc kubenswrapper[4811]: I0122 09:20:52.990933 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757dc6fff9-bllfc" podUID="11f33cf8-5f50-416a-a6f0-f6647a331d40" containerName="dnsmasq-dns" containerID="cri-o://2561738be79bd08de502ce49d42b888237050084091d8ff4d97fae5b89955548" gracePeriod=10 Jan 22 09:20:53 crc kubenswrapper[4811]: I0122 09:20:53.012415 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjbtg\" (UniqueName: \"kubernetes.io/projected/1160427d-3b71-44bc-886d-243991fd40e8-kube-api-access-xjbtg\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:53 crc kubenswrapper[4811]: I0122 09:20:53.012442 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1160427d-3b71-44bc-886d-243991fd40e8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:53 crc kubenswrapper[4811]: I0122 09:20:53.408688 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757dc6fff9-bllfc" Jan 22 09:20:53 crc kubenswrapper[4811]: I0122 09:20:53.461677 4811 generic.go:334] "Generic (PLEG): container finished" podID="11f33cf8-5f50-416a-a6f0-f6647a331d40" containerID="2561738be79bd08de502ce49d42b888237050084091d8ff4d97fae5b89955548" exitCode=0 Jan 22 09:20:53 crc kubenswrapper[4811]: I0122 09:20:53.461754 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757dc6fff9-bllfc" event={"ID":"11f33cf8-5f50-416a-a6f0-f6647a331d40","Type":"ContainerDied","Data":"2561738be79bd08de502ce49d42b888237050084091d8ff4d97fae5b89955548"} Jan 22 09:20:53 crc kubenswrapper[4811]: I0122 09:20:53.461793 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757dc6fff9-bllfc" event={"ID":"11f33cf8-5f50-416a-a6f0-f6647a331d40","Type":"ContainerDied","Data":"953db159302a13efed3a050b39de463115e9565e4da867bc79d6ce232d16ca11"} Jan 22 09:20:53 crc kubenswrapper[4811]: I0122 09:20:53.461814 4811 scope.go:117] "RemoveContainer" containerID="2561738be79bd08de502ce49d42b888237050084091d8ff4d97fae5b89955548" Jan 22 09:20:53 crc kubenswrapper[4811]: I0122 09:20:53.461950 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757dc6fff9-bllfc" Jan 22 09:20:53 crc kubenswrapper[4811]: I0122 09:20:53.465030 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-2s8kx" event={"ID":"1160427d-3b71-44bc-886d-243991fd40e8","Type":"ContainerDied","Data":"b3faad56d6496101e0cc039eaf91226196401b929549f48d351651994d07058b"} Jan 22 09:20:53 crc kubenswrapper[4811]: I0122 09:20:53.465050 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3faad56d6496101e0cc039eaf91226196401b929549f48d351651994d07058b" Jan 22 09:20:53 crc kubenswrapper[4811]: I0122 09:20:53.465092 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-2s8kx" Jan 22 09:20:53 crc kubenswrapper[4811]: I0122 09:20:53.497341 4811 scope.go:117] "RemoveContainer" containerID="bb35895d17c6891f722ea2fef3ca06fb73b41a98ad821b293f2bf45fdf3d3351" Jan 22 09:20:53 crc kubenswrapper[4811]: I0122 09:20:53.520166 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11f33cf8-5f50-416a-a6f0-f6647a331d40-ovsdbserver-sb\") pod \"11f33cf8-5f50-416a-a6f0-f6647a331d40\" (UID: \"11f33cf8-5f50-416a-a6f0-f6647a331d40\") " Jan 22 09:20:53 crc kubenswrapper[4811]: I0122 09:20:53.520290 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11f33cf8-5f50-416a-a6f0-f6647a331d40-ovsdbserver-nb\") pod \"11f33cf8-5f50-416a-a6f0-f6647a331d40\" (UID: \"11f33cf8-5f50-416a-a6f0-f6647a331d40\") " Jan 22 09:20:53 crc kubenswrapper[4811]: I0122 09:20:53.520322 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11f33cf8-5f50-416a-a6f0-f6647a331d40-dns-svc\") pod \"11f33cf8-5f50-416a-a6f0-f6647a331d40\" (UID: \"11f33cf8-5f50-416a-a6f0-f6647a331d40\") " Jan 22 09:20:53 crc kubenswrapper[4811]: I0122 09:20:53.520412 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11f33cf8-5f50-416a-a6f0-f6647a331d40-config\") pod \"11f33cf8-5f50-416a-a6f0-f6647a331d40\" (UID: \"11f33cf8-5f50-416a-a6f0-f6647a331d40\") " Jan 22 09:20:53 crc kubenswrapper[4811]: I0122 09:20:53.520458 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pm9bw\" (UniqueName: \"kubernetes.io/projected/11f33cf8-5f50-416a-a6f0-f6647a331d40-kube-api-access-pm9bw\") pod \"11f33cf8-5f50-416a-a6f0-f6647a331d40\" (UID: \"11f33cf8-5f50-416a-a6f0-f6647a331d40\") " Jan 22 09:20:53 crc kubenswrapper[4811]: I0122 09:20:53.527251 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11f33cf8-5f50-416a-a6f0-f6647a331d40-kube-api-access-pm9bw" (OuterVolumeSpecName: "kube-api-access-pm9bw") pod "11f33cf8-5f50-416a-a6f0-f6647a331d40" (UID: "11f33cf8-5f50-416a-a6f0-f6647a331d40"). InnerVolumeSpecName "kube-api-access-pm9bw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:20:53 crc kubenswrapper[4811]: I0122 09:20:53.590185 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11f33cf8-5f50-416a-a6f0-f6647a331d40-config" (OuterVolumeSpecName: "config") pod "11f33cf8-5f50-416a-a6f0-f6647a331d40" (UID: "11f33cf8-5f50-416a-a6f0-f6647a331d40"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:20:53 crc kubenswrapper[4811]: I0122 09:20:53.590353 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11f33cf8-5f50-416a-a6f0-f6647a331d40-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "11f33cf8-5f50-416a-a6f0-f6647a331d40" (UID: "11f33cf8-5f50-416a-a6f0-f6647a331d40"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:20:53 crc kubenswrapper[4811]: I0122 09:20:53.604223 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11f33cf8-5f50-416a-a6f0-f6647a331d40-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "11f33cf8-5f50-416a-a6f0-f6647a331d40" (UID: "11f33cf8-5f50-416a-a6f0-f6647a331d40"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:20:53 crc kubenswrapper[4811]: I0122 09:20:53.611081 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11f33cf8-5f50-416a-a6f0-f6647a331d40-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "11f33cf8-5f50-416a-a6f0-f6647a331d40" (UID: "11f33cf8-5f50-416a-a6f0-f6647a331d40"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:20:53 crc kubenswrapper[4811]: I0122 09:20:53.622953 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11f33cf8-5f50-416a-a6f0-f6647a331d40-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:53 crc kubenswrapper[4811]: I0122 09:20:53.622984 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11f33cf8-5f50-416a-a6f0-f6647a331d40-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:53 crc kubenswrapper[4811]: I0122 09:20:53.622994 4811 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11f33cf8-5f50-416a-a6f0-f6647a331d40-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:53 crc kubenswrapper[4811]: I0122 09:20:53.623004 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11f33cf8-5f50-416a-a6f0-f6647a331d40-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:53 crc kubenswrapper[4811]: I0122 09:20:53.623024 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pm9bw\" (UniqueName: \"kubernetes.io/projected/11f33cf8-5f50-416a-a6f0-f6647a331d40-kube-api-access-pm9bw\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:53 crc kubenswrapper[4811]: I0122 09:20:53.684660 4811 scope.go:117] "RemoveContainer" containerID="2561738be79bd08de502ce49d42b888237050084091d8ff4d97fae5b89955548" Jan 22 09:20:53 crc kubenswrapper[4811]: E0122 09:20:53.685179 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2561738be79bd08de502ce49d42b888237050084091d8ff4d97fae5b89955548\": container with ID starting with 2561738be79bd08de502ce49d42b888237050084091d8ff4d97fae5b89955548 not found: ID does not exist" containerID="2561738be79bd08de502ce49d42b888237050084091d8ff4d97fae5b89955548" Jan 22 09:20:53 crc kubenswrapper[4811]: I0122 09:20:53.685220 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2561738be79bd08de502ce49d42b888237050084091d8ff4d97fae5b89955548"} err="failed to get container status \"2561738be79bd08de502ce49d42b888237050084091d8ff4d97fae5b89955548\": rpc error: code = NotFound desc = could not find container \"2561738be79bd08de502ce49d42b888237050084091d8ff4d97fae5b89955548\": container with ID starting with 2561738be79bd08de502ce49d42b888237050084091d8ff4d97fae5b89955548 not found: ID does not exist" Jan 22 09:20:53 crc kubenswrapper[4811]: I0122 09:20:53.685248 4811 
scope.go:117] "RemoveContainer" containerID="bb35895d17c6891f722ea2fef3ca06fb73b41a98ad821b293f2bf45fdf3d3351" Jan 22 09:20:53 crc kubenswrapper[4811]: E0122 09:20:53.685514 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb35895d17c6891f722ea2fef3ca06fb73b41a98ad821b293f2bf45fdf3d3351\": container with ID starting with bb35895d17c6891f722ea2fef3ca06fb73b41a98ad821b293f2bf45fdf3d3351 not found: ID does not exist" containerID="bb35895d17c6891f722ea2fef3ca06fb73b41a98ad821b293f2bf45fdf3d3351" Jan 22 09:20:53 crc kubenswrapper[4811]: I0122 09:20:53.685538 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb35895d17c6891f722ea2fef3ca06fb73b41a98ad821b293f2bf45fdf3d3351"} err="failed to get container status \"bb35895d17c6891f722ea2fef3ca06fb73b41a98ad821b293f2bf45fdf3d3351\": rpc error: code = NotFound desc = could not find container \"bb35895d17c6891f722ea2fef3ca06fb73b41a98ad821b293f2bf45fdf3d3351\": container with ID starting with bb35895d17c6891f722ea2fef3ca06fb73b41a98ad821b293f2bf45fdf3d3351 not found: ID does not exist" Jan 22 09:20:53 crc kubenswrapper[4811]: I0122 09:20:53.824340 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-bllfc"] Jan 22 09:20:53 crc kubenswrapper[4811]: I0122 09:20:53.841980 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-bllfc"] Jan 22 09:20:53 crc kubenswrapper[4811]: I0122 09:20:53.979183 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-891d-account-create-update-5vdr7" Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.051158 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11f33cf8-5f50-416a-a6f0-f6647a331d40" path="/var/lib/kubelet/pods/11f33cf8-5f50-416a-a6f0-f6647a331d40/volumes" Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.148473 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9736e7ec-af7c-4413-96bd-60e84d89fc5b-operator-scripts\") pod \"9736e7ec-af7c-4413-96bd-60e84d89fc5b\" (UID: \"9736e7ec-af7c-4413-96bd-60e84d89fc5b\") " Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.148532 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9qsm\" (UniqueName: \"kubernetes.io/projected/9736e7ec-af7c-4413-96bd-60e84d89fc5b-kube-api-access-x9qsm\") pod \"9736e7ec-af7c-4413-96bd-60e84d89fc5b\" (UID: \"9736e7ec-af7c-4413-96bd-60e84d89fc5b\") " Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.149573 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9736e7ec-af7c-4413-96bd-60e84d89fc5b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9736e7ec-af7c-4413-96bd-60e84d89fc5b" (UID: "9736e7ec-af7c-4413-96bd-60e84d89fc5b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.158593 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9736e7ec-af7c-4413-96bd-60e84d89fc5b-kube-api-access-x9qsm" (OuterVolumeSpecName: "kube-api-access-x9qsm") pod "9736e7ec-af7c-4413-96bd-60e84d89fc5b" (UID: "9736e7ec-af7c-4413-96bd-60e84d89fc5b"). InnerVolumeSpecName "kube-api-access-x9qsm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.217860 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jxrlt" Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.228106 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-82d1-account-create-update-jkspk" Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.251644 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9736e7ec-af7c-4413-96bd-60e84d89fc5b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.251684 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9qsm\" (UniqueName: \"kubernetes.io/projected/9736e7ec-af7c-4413-96bd-60e84d89fc5b-kube-api-access-x9qsm\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.264115 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e374-account-create-update-4m5cv" Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.278802 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-xj5ql" Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.352584 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19d930c1-23b2-476a-b176-3ca4d8456549-operator-scripts\") pod \"19d930c1-23b2-476a-b176-3ca4d8456549\" (UID: \"19d930c1-23b2-476a-b176-3ca4d8456549\") " Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.352652 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/986a4686-46f9-4a8c-9c37-0ffadad37084-operator-scripts\") pod \"986a4686-46f9-4a8c-9c37-0ffadad37084\" (UID: \"986a4686-46f9-4a8c-9c37-0ffadad37084\") " Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.352674 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ea82e95-edcf-4f93-a288-8b4550842a28-operator-scripts\") pod \"1ea82e95-edcf-4f93-a288-8b4550842a28\" (UID: \"1ea82e95-edcf-4f93-a288-8b4550842a28\") " Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.353134 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/986a4686-46f9-4a8c-9c37-0ffadad37084-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "986a4686-46f9-4a8c-9c37-0ffadad37084" (UID: "986a4686-46f9-4a8c-9c37-0ffadad37084"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.353235 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19d930c1-23b2-476a-b176-3ca4d8456549-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "19d930c1-23b2-476a-b176-3ca4d8456549" (UID: "19d930c1-23b2-476a-b176-3ca4d8456549"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.353437 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ea82e95-edcf-4f93-a288-8b4550842a28-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1ea82e95-edcf-4f93-a288-8b4550842a28" (UID: "1ea82e95-edcf-4f93-a288-8b4550842a28"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.353495 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hk2k6\" (UniqueName: \"kubernetes.io/projected/19d930c1-23b2-476a-b176-3ca4d8456549-kube-api-access-hk2k6\") pod \"19d930c1-23b2-476a-b176-3ca4d8456549\" (UID: \"19d930c1-23b2-476a-b176-3ca4d8456549\") " Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.353527 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzpvw\" (UniqueName: \"kubernetes.io/projected/986a4686-46f9-4a8c-9c37-0ffadad37084-kube-api-access-xzpvw\") pod \"986a4686-46f9-4a8c-9c37-0ffadad37084\" (UID: \"986a4686-46f9-4a8c-9c37-0ffadad37084\") " Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.353979 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvdd4\" (UniqueName: \"kubernetes.io/projected/1ea82e95-edcf-4f93-a288-8b4550842a28-kube-api-access-bvdd4\") pod \"1ea82e95-edcf-4f93-a288-8b4550842a28\" (UID: \"1ea82e95-edcf-4f93-a288-8b4550842a28\") " Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.355008 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19d930c1-23b2-476a-b176-3ca4d8456549-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.355068 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/986a4686-46f9-4a8c-9c37-0ffadad37084-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.355117 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ea82e95-edcf-4f93-a288-8b4550842a28-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.356472 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19d930c1-23b2-476a-b176-3ca4d8456549-kube-api-access-hk2k6" (OuterVolumeSpecName: "kube-api-access-hk2k6") pod "19d930c1-23b2-476a-b176-3ca4d8456549" (UID: "19d930c1-23b2-476a-b176-3ca4d8456549"). InnerVolumeSpecName "kube-api-access-hk2k6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.357192 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ea82e95-edcf-4f93-a288-8b4550842a28-kube-api-access-bvdd4" (OuterVolumeSpecName: "kube-api-access-bvdd4") pod "1ea82e95-edcf-4f93-a288-8b4550842a28" (UID: "1ea82e95-edcf-4f93-a288-8b4550842a28"). InnerVolumeSpecName "kube-api-access-bvdd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.357362 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/986a4686-46f9-4a8c-9c37-0ffadad37084-kube-api-access-xzpvw" (OuterVolumeSpecName: "kube-api-access-xzpvw") pod "986a4686-46f9-4a8c-9c37-0ffadad37084" (UID: "986a4686-46f9-4a8c-9c37-0ffadad37084"). InnerVolumeSpecName "kube-api-access-xzpvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.456294 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvvd9\" (UniqueName: \"kubernetes.io/projected/866090ed-1206-4cb4-9b12-2450964dc455-kube-api-access-gvvd9\") pod \"866090ed-1206-4cb4-9b12-2450964dc455\" (UID: \"866090ed-1206-4cb4-9b12-2450964dc455\") " Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.456481 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/866090ed-1206-4cb4-9b12-2450964dc455-operator-scripts\") pod \"866090ed-1206-4cb4-9b12-2450964dc455\" (UID: \"866090ed-1206-4cb4-9b12-2450964dc455\") " Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.456905 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/866090ed-1206-4cb4-9b12-2450964dc455-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "866090ed-1206-4cb4-9b12-2450964dc455" (UID: "866090ed-1206-4cb4-9b12-2450964dc455"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.457300 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hk2k6\" (UniqueName: \"kubernetes.io/projected/19d930c1-23b2-476a-b176-3ca4d8456549-kube-api-access-hk2k6\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.457322 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzpvw\" (UniqueName: \"kubernetes.io/projected/986a4686-46f9-4a8c-9c37-0ffadad37084-kube-api-access-xzpvw\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.457332 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/866090ed-1206-4cb4-9b12-2450964dc455-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.457342 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvdd4\" (UniqueName: \"kubernetes.io/projected/1ea82e95-edcf-4f93-a288-8b4550842a28-kube-api-access-bvdd4\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.475105 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/866090ed-1206-4cb4-9b12-2450964dc455-kube-api-access-gvvd9" (OuterVolumeSpecName: "kube-api-access-gvvd9") pod "866090ed-1206-4cb4-9b12-2450964dc455" (UID: "866090ed-1206-4cb4-9b12-2450964dc455"). InnerVolumeSpecName "kube-api-access-gvvd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.477470 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-xj5ql" Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.477660 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xj5ql" event={"ID":"866090ed-1206-4cb4-9b12-2450964dc455","Type":"ContainerDied","Data":"349740f15c9bde1a4a4a05fcffc9a93decb6ebf6fd2e560dcb0ea87ddfb95c0b"} Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.477761 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="349740f15c9bde1a4a4a05fcffc9a93decb6ebf6fd2e560dcb0ea87ddfb95c0b" Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.489773 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-82d1-account-create-update-jkspk" event={"ID":"19d930c1-23b2-476a-b176-3ca4d8456549","Type":"ContainerDied","Data":"7f95bab22a1a54dba7688fb33e399b3fc81164931f5f359b99b567395cf80f03"} Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.489806 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f95bab22a1a54dba7688fb33e399b3fc81164931f5f359b99b567395cf80f03" Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.490147 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-82d1-account-create-update-jkspk" Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.517934 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e374-account-create-update-4m5cv" event={"ID":"1ea82e95-edcf-4f93-a288-8b4550842a28","Type":"ContainerDied","Data":"07dceee71bb709cc5b365ad65166eecf91f828648ba4dd05926e3866dcd7d618"} Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.517962 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07dceee71bb709cc5b365ad65166eecf91f828648ba4dd05926e3866dcd7d618" Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.518008 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e374-account-create-update-4m5cv" Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.520490 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-891d-account-create-update-5vdr7" event={"ID":"9736e7ec-af7c-4413-96bd-60e84d89fc5b","Type":"ContainerDied","Data":"9a5b599dd2f0be08f4f0c45dd41f185d44b33cd496524cb586458415e478830e"} Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.520884 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a5b599dd2f0be08f4f0c45dd41f185d44b33cd496524cb586458415e478830e" Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.520570 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-891d-account-create-update-5vdr7" Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.524500 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jxrlt" event={"ID":"986a4686-46f9-4a8c-9c37-0ffadad37084","Type":"ContainerDied","Data":"9bd658adbdad55ffc3c12107259f76e10190600a56bf259396ecadf1674af2e7"} Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.524522 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bd658adbdad55ffc3c12107259f76e10190600a56bf259396ecadf1674af2e7" Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.524590 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-jxrlt" Jan 22 09:20:54 crc kubenswrapper[4811]: I0122 09:20:54.559020 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvvd9\" (UniqueName: \"kubernetes.io/projected/866090ed-1206-4cb4-9b12-2450964dc455-kube-api-access-gvvd9\") on node \"crc\" DevicePath \"\"" Jan 22 09:20:57 crc kubenswrapper[4811]: I0122 09:20:57.554332 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-2bx9h" event={"ID":"dc9be5c8-f749-41fa-9c82-7378a3c84569","Type":"ContainerStarted","Data":"7eb1d67d8f5fb006e9f4ca0882d1403a9fb92920be629aa2e6f7628cf0d5107e"} Jan 22 09:20:57 crc kubenswrapper[4811]: I0122 09:20:57.569973 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-2bx9h" podStartSLOduration=1.814350359 podStartE2EDuration="7.56994427s" podCreationTimestamp="2026-01-22 09:20:50 +0000 UTC" firstStartedPulling="2026-01-22 09:20:51.297468298 +0000 UTC m=+895.619655422" lastFinishedPulling="2026-01-22 09:20:57.05306221 +0000 UTC m=+901.375249333" observedRunningTime="2026-01-22 09:20:57.566604644 +0000 UTC m=+901.888791767" watchObservedRunningTime="2026-01-22 09:20:57.56994427 +0000 UTC m=+901.892131393" Jan 22 09:20:59 crc kubenswrapper[4811]: I0122 09:20:59.573848 4811 generic.go:334] "Generic (PLEG): container finished" podID="dc9be5c8-f749-41fa-9c82-7378a3c84569" containerID="7eb1d67d8f5fb006e9f4ca0882d1403a9fb92920be629aa2e6f7628cf0d5107e" exitCode=0 Jan 22 09:20:59 crc kubenswrapper[4811]: I0122 09:20:59.573933 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-2bx9h" event={"ID":"dc9be5c8-f749-41fa-9c82-7378a3c84569","Type":"ContainerDied","Data":"7eb1d67d8f5fb006e9f4ca0882d1403a9fb92920be629aa2e6f7628cf0d5107e"} Jan 22 09:21:00 crc kubenswrapper[4811]: I0122 09:21:00.925125 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-2bx9h" Jan 22 09:21:01 crc kubenswrapper[4811]: I0122 09:21:01.093952 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc9be5c8-f749-41fa-9c82-7378a3c84569-combined-ca-bundle\") pod \"dc9be5c8-f749-41fa-9c82-7378a3c84569\" (UID: \"dc9be5c8-f749-41fa-9c82-7378a3c84569\") " Jan 22 09:21:01 crc kubenswrapper[4811]: I0122 09:21:01.094029 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vc8xh\" (UniqueName: \"kubernetes.io/projected/dc9be5c8-f749-41fa-9c82-7378a3c84569-kube-api-access-vc8xh\") pod \"dc9be5c8-f749-41fa-9c82-7378a3c84569\" (UID: \"dc9be5c8-f749-41fa-9c82-7378a3c84569\") " Jan 22 09:21:01 crc kubenswrapper[4811]: I0122 09:21:01.094087 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc9be5c8-f749-41fa-9c82-7378a3c84569-config-data\") pod \"dc9be5c8-f749-41fa-9c82-7378a3c84569\" (UID: \"dc9be5c8-f749-41fa-9c82-7378a3c84569\") " Jan 22 09:21:01 crc kubenswrapper[4811]: I0122 09:21:01.101688 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc9be5c8-f749-41fa-9c82-7378a3c84569-kube-api-access-vc8xh" (OuterVolumeSpecName: "kube-api-access-vc8xh") pod "dc9be5c8-f749-41fa-9c82-7378a3c84569" (UID: "dc9be5c8-f749-41fa-9c82-7378a3c84569"). InnerVolumeSpecName "kube-api-access-vc8xh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:21:01 crc kubenswrapper[4811]: I0122 09:21:01.117333 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc9be5c8-f749-41fa-9c82-7378a3c84569-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc9be5c8-f749-41fa-9c82-7378a3c84569" (UID: "dc9be5c8-f749-41fa-9c82-7378a3c84569"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:01 crc kubenswrapper[4811]: I0122 09:21:01.132672 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc9be5c8-f749-41fa-9c82-7378a3c84569-config-data" (OuterVolumeSpecName: "config-data") pod "dc9be5c8-f749-41fa-9c82-7378a3c84569" (UID: "dc9be5c8-f749-41fa-9c82-7378a3c84569"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:01 crc kubenswrapper[4811]: I0122 09:21:01.197674 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc9be5c8-f749-41fa-9c82-7378a3c84569-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:01 crc kubenswrapper[4811]: I0122 09:21:01.197703 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vc8xh\" (UniqueName: \"kubernetes.io/projected/dc9be5c8-f749-41fa-9c82-7378a3c84569-kube-api-access-vc8xh\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:01 crc kubenswrapper[4811]: I0122 09:21:01.197715 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc9be5c8-f749-41fa-9c82-7378a3c84569-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:01 crc kubenswrapper[4811]: I0122 09:21:01.586535 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-2bx9h" event={"ID":"dc9be5c8-f749-41fa-9c82-7378a3c84569","Type":"ContainerDied","Data":"235b7972006e65851999dc6d2b0faaa1d292ea608b09824a0fea15c7be541e82"} Jan 22 09:21:01 crc kubenswrapper[4811]: I0122 09:21:01.586576 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="235b7972006e65851999dc6d2b0faaa1d292ea608b09824a0fea15c7be541e82" Jan 22 09:21:01 crc kubenswrapper[4811]: I0122 09:21:01.586664 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-2bx9h" Jan 22 09:21:01 crc kubenswrapper[4811]: I0122 09:21:01.819754 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78fbc4bbf-nscvg"] Jan 22 09:21:01 crc kubenswrapper[4811]: E0122 09:21:01.820016 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="986a4686-46f9-4a8c-9c37-0ffadad37084" containerName="mariadb-database-create" Jan 22 09:21:01 crc kubenswrapper[4811]: I0122 09:21:01.820033 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="986a4686-46f9-4a8c-9c37-0ffadad37084" containerName="mariadb-database-create" Jan 22 09:21:01 crc kubenswrapper[4811]: E0122 09:21:01.820045 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="866090ed-1206-4cb4-9b12-2450964dc455" containerName="mariadb-database-create" Jan 22 09:21:01 crc kubenswrapper[4811]: I0122 09:21:01.820050 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="866090ed-1206-4cb4-9b12-2450964dc455" containerName="mariadb-database-create" Jan 22 09:21:01 crc kubenswrapper[4811]: E0122 09:21:01.820077 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19d930c1-23b2-476a-b176-3ca4d8456549" containerName="mariadb-account-create-update" Jan 22 09:21:01 crc kubenswrapper[4811]: I0122 09:21:01.820096 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="19d930c1-23b2-476a-b176-3ca4d8456549" containerName="mariadb-account-create-update" Jan 22 09:21:01 crc kubenswrapper[4811]: E0122 09:21:01.820106 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11f33cf8-5f50-416a-a6f0-f6647a331d40" containerName="init" Jan 22 09:21:01 crc kubenswrapper[4811]: I0122 09:21:01.820112 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="11f33cf8-5f50-416a-a6f0-f6647a331d40" containerName="init" Jan 22 09:21:01 crc kubenswrapper[4811]: E0122 09:21:01.820121 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11f33cf8-5f50-416a-a6f0-f6647a331d40" containerName="dnsmasq-dns" Jan 22 09:21:01 crc kubenswrapper[4811]: I0122 09:21:01.820126 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="11f33cf8-5f50-416a-a6f0-f6647a331d40" containerName="dnsmasq-dns" Jan 22 09:21:01 crc kubenswrapper[4811]: E0122 09:21:01.820134 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1160427d-3b71-44bc-886d-243991fd40e8" containerName="mariadb-database-create" Jan 22 09:21:01 crc kubenswrapper[4811]: I0122 09:21:01.820139 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="1160427d-3b71-44bc-886d-243991fd40e8" containerName="mariadb-database-create" Jan 22 09:21:01 crc kubenswrapper[4811]: E0122 09:21:01.820146 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea82e95-edcf-4f93-a288-8b4550842a28" containerName="mariadb-account-create-update" Jan 22 09:21:01 crc kubenswrapper[4811]: I0122 09:21:01.820151 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ea82e95-edcf-4f93-a288-8b4550842a28" containerName="mariadb-account-create-update" Jan 22 09:21:01 crc kubenswrapper[4811]: E0122 09:21:01.820161 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc9be5c8-f749-41fa-9c82-7378a3c84569" containerName="keystone-db-sync" Jan 22 09:21:01 crc kubenswrapper[4811]: I0122 09:21:01.820166 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc9be5c8-f749-41fa-9c82-7378a3c84569" containerName="keystone-db-sync" Jan 22 09:21:01 crc kubenswrapper[4811]: E0122 09:21:01.820177 4811 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9736e7ec-af7c-4413-96bd-60e84d89fc5b" containerName="mariadb-account-create-update" Jan 22 09:21:01 crc kubenswrapper[4811]: I0122 09:21:01.820183 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="9736e7ec-af7c-4413-96bd-60e84d89fc5b" containerName="mariadb-account-create-update" Jan 22 09:21:01 crc kubenswrapper[4811]: I0122 09:21:01.820307 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="19d930c1-23b2-476a-b176-3ca4d8456549" containerName="mariadb-account-create-update" Jan 22 09:21:01 crc kubenswrapper[4811]: I0122 09:21:01.820321 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc9be5c8-f749-41fa-9c82-7378a3c84569" containerName="keystone-db-sync" Jan 22 09:21:01 crc kubenswrapper[4811]: I0122 09:21:01.820330 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="11f33cf8-5f50-416a-a6f0-f6647a331d40" containerName="dnsmasq-dns" Jan 22 09:21:01 crc kubenswrapper[4811]: I0122 09:21:01.820350 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="9736e7ec-af7c-4413-96bd-60e84d89fc5b" containerName="mariadb-account-create-update" Jan 22 09:21:01 crc kubenswrapper[4811]: I0122 09:21:01.820358 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="1160427d-3b71-44bc-886d-243991fd40e8" containerName="mariadb-database-create" Jan 22 09:21:01 crc kubenswrapper[4811]: I0122 09:21:01.820365 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ea82e95-edcf-4f93-a288-8b4550842a28" containerName="mariadb-account-create-update" Jan 22 09:21:01 crc kubenswrapper[4811]: I0122 09:21:01.820372 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="866090ed-1206-4cb4-9b12-2450964dc455" containerName="mariadb-database-create" Jan 22 09:21:01 crc kubenswrapper[4811]: I0122 09:21:01.820381 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="986a4686-46f9-4a8c-9c37-0ffadad37084" containerName="mariadb-database-create" Jan 22 09:21:01 crc kubenswrapper[4811]: I0122 09:21:01.821049 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78fbc4bbf-nscvg" Jan 22 09:21:01 crc kubenswrapper[4811]: I0122 09:21:01.839836 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78fbc4bbf-nscvg"] Jan 22 09:21:01 crc kubenswrapper[4811]: I0122 09:21:01.884512 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-xzm8b"] Jan 22 09:21:01 crc kubenswrapper[4811]: I0122 09:21:01.885348 4811 util.go:30] "No sandbox for pod can be found. 
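Before admitting the new dnsmasq pod, the CPU and memory managers evict per-container accounting for pods that no longer exist; each eviction shows up as the E/I pair above (the cpu_manager RemoveStaleState line plus the state_mem CPUSet deletion, followed by the memory_manager sweep). A small tally over those lines, written against the exact klog wording shown here and assuming an exported plain-text journal:

```python
import re
from collections import Counter

# Tally RemoveStaleState entries per resource manager, pod UID prefix,
# and container name. Matches both message variants seen above:
# "RemoveStaleState: removing container" and "RemoveStaleState removing state".
STALE = re.compile(
    r'(cpu_manager|memory_manager)\.go:\d+\] "RemoveStaleState[^"]*"'
    r' podUID="([0-9a-f-]{36})" containerName="([^"]+)"')

def tally(lines):
    counts = Counter()
    for line in lines:
        for mgr, uid, name in STALE.findall(line):
            counts[(mgr, uid[:8], name)] += 1
    for (mgr, uid, name), n in sorted(counts.items()):
        print(f"{mgr:14} {uid}  {name}  x{n}")
```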
Need to start a new one" pod="openstack/keystone-bootstrap-xzm8b" Jan 22 09:21:01 crc kubenswrapper[4811]: I0122 09:21:01.887564 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 22 09:21:01 crc kubenswrapper[4811]: I0122 09:21:01.887804 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9nlpl" Jan 22 09:21:01 crc kubenswrapper[4811]: I0122 09:21:01.887964 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 22 09:21:01 crc kubenswrapper[4811]: I0122 09:21:01.890502 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 22 09:21:01 crc kubenswrapper[4811]: I0122 09:21:01.890880 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 22 09:21:01 crc kubenswrapper[4811]: I0122 09:21:01.901099 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xzm8b"] Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.007663 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a2d8e429-147e-426b-8310-85bb09cace99-fernet-keys\") pod \"keystone-bootstrap-xzm8b\" (UID: \"a2d8e429-147e-426b-8310-85bb09cace99\") " pod="openstack/keystone-bootstrap-xzm8b" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.008444 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b68e3ad6-883d-4a1f-af69-3a94e705afe8-ovsdbserver-nb\") pod \"dnsmasq-dns-78fbc4bbf-nscvg\" (UID: \"b68e3ad6-883d-4a1f-af69-3a94e705afe8\") " pod="openstack/dnsmasq-dns-78fbc4bbf-nscvg" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.008561 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b68e3ad6-883d-4a1f-af69-3a94e705afe8-ovsdbserver-sb\") pod \"dnsmasq-dns-78fbc4bbf-nscvg\" (UID: \"b68e3ad6-883d-4a1f-af69-3a94e705afe8\") " pod="openstack/dnsmasq-dns-78fbc4bbf-nscvg" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.008681 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b68e3ad6-883d-4a1f-af69-3a94e705afe8-dns-svc\") pod \"dnsmasq-dns-78fbc4bbf-nscvg\" (UID: \"b68e3ad6-883d-4a1f-af69-3a94e705afe8\") " pod="openstack/dnsmasq-dns-78fbc4bbf-nscvg" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.008786 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj296\" (UniqueName: \"kubernetes.io/projected/a2d8e429-147e-426b-8310-85bb09cace99-kube-api-access-kj296\") pod \"keystone-bootstrap-xzm8b\" (UID: \"a2d8e429-147e-426b-8310-85bb09cace99\") " pod="openstack/keystone-bootstrap-xzm8b" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.008883 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b68e3ad6-883d-4a1f-af69-3a94e705afe8-config\") pod \"dnsmasq-dns-78fbc4bbf-nscvg\" (UID: \"b68e3ad6-883d-4a1f-af69-3a94e705afe8\") " pod="openstack/dnsmasq-dns-78fbc4bbf-nscvg" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.008999 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2d8e429-147e-426b-8310-85bb09cace99-scripts\") pod \"keystone-bootstrap-xzm8b\" (UID: \"a2d8e429-147e-426b-8310-85bb09cace99\") " pod="openstack/keystone-bootstrap-xzm8b" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.009103 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2d8e429-147e-426b-8310-85bb09cace99-combined-ca-bundle\") pod \"keystone-bootstrap-xzm8b\" (UID: \"a2d8e429-147e-426b-8310-85bb09cace99\") " pod="openstack/keystone-bootstrap-xzm8b" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.009210 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9q2z\" (UniqueName: \"kubernetes.io/projected/b68e3ad6-883d-4a1f-af69-3a94e705afe8-kube-api-access-z9q2z\") pod \"dnsmasq-dns-78fbc4bbf-nscvg\" (UID: \"b68e3ad6-883d-4a1f-af69-3a94e705afe8\") " pod="openstack/dnsmasq-dns-78fbc4bbf-nscvg" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.009300 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a2d8e429-147e-426b-8310-85bb09cace99-credential-keys\") pod \"keystone-bootstrap-xzm8b\" (UID: \"a2d8e429-147e-426b-8310-85bb09cace99\") " pod="openstack/keystone-bootstrap-xzm8b" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.009384 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2d8e429-147e-426b-8310-85bb09cace99-config-data\") pod \"keystone-bootstrap-xzm8b\" (UID: \"a2d8e429-147e-426b-8310-85bb09cace99\") " pod="openstack/keystone-bootstrap-xzm8b" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.057786 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-bbl7b"] Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.058787 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-bbl7b" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.061848 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.062148 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-l9n7w" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.062445 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.090267 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-bbl7b"] Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.110455 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj296\" (UniqueName: \"kubernetes.io/projected/a2d8e429-147e-426b-8310-85bb09cace99-kube-api-access-kj296\") pod \"keystone-bootstrap-xzm8b\" (UID: \"a2d8e429-147e-426b-8310-85bb09cace99\") " pod="openstack/keystone-bootstrap-xzm8b" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.110591 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b68e3ad6-883d-4a1f-af69-3a94e705afe8-config\") pod \"dnsmasq-dns-78fbc4bbf-nscvg\" (UID: \"b68e3ad6-883d-4a1f-af69-3a94e705afe8\") " pod="openstack/dnsmasq-dns-78fbc4bbf-nscvg" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.110702 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2d8e429-147e-426b-8310-85bb09cace99-scripts\") pod \"keystone-bootstrap-xzm8b\" (UID: \"a2d8e429-147e-426b-8310-85bb09cace99\") " pod="openstack/keystone-bootstrap-xzm8b" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.110783 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2d8e429-147e-426b-8310-85bb09cace99-combined-ca-bundle\") pod \"keystone-bootstrap-xzm8b\" (UID: \"a2d8e429-147e-426b-8310-85bb09cace99\") " pod="openstack/keystone-bootstrap-xzm8b" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.110887 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9q2z\" (UniqueName: \"kubernetes.io/projected/b68e3ad6-883d-4a1f-af69-3a94e705afe8-kube-api-access-z9q2z\") pod \"dnsmasq-dns-78fbc4bbf-nscvg\" (UID: \"b68e3ad6-883d-4a1f-af69-3a94e705afe8\") " pod="openstack/dnsmasq-dns-78fbc4bbf-nscvg" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.111105 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a2d8e429-147e-426b-8310-85bb09cace99-credential-keys\") pod \"keystone-bootstrap-xzm8b\" (UID: \"a2d8e429-147e-426b-8310-85bb09cace99\") " pod="openstack/keystone-bootstrap-xzm8b" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.111190 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2d8e429-147e-426b-8310-85bb09cace99-config-data\") pod \"keystone-bootstrap-xzm8b\" (UID: \"a2d8e429-147e-426b-8310-85bb09cace99\") " pod="openstack/keystone-bootstrap-xzm8b" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.111351 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/a2d8e429-147e-426b-8310-85bb09cace99-fernet-keys\") pod \"keystone-bootstrap-xzm8b\" (UID: \"a2d8e429-147e-426b-8310-85bb09cace99\") " pod="openstack/keystone-bootstrap-xzm8b" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.111442 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b68e3ad6-883d-4a1f-af69-3a94e705afe8-ovsdbserver-nb\") pod \"dnsmasq-dns-78fbc4bbf-nscvg\" (UID: \"b68e3ad6-883d-4a1f-af69-3a94e705afe8\") " pod="openstack/dnsmasq-dns-78fbc4bbf-nscvg" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.111521 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b68e3ad6-883d-4a1f-af69-3a94e705afe8-ovsdbserver-sb\") pod \"dnsmasq-dns-78fbc4bbf-nscvg\" (UID: \"b68e3ad6-883d-4a1f-af69-3a94e705afe8\") " pod="openstack/dnsmasq-dns-78fbc4bbf-nscvg" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.111609 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b68e3ad6-883d-4a1f-af69-3a94e705afe8-dns-svc\") pod \"dnsmasq-dns-78fbc4bbf-nscvg\" (UID: \"b68e3ad6-883d-4a1f-af69-3a94e705afe8\") " pod="openstack/dnsmasq-dns-78fbc4bbf-nscvg" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.112413 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b68e3ad6-883d-4a1f-af69-3a94e705afe8-dns-svc\") pod \"dnsmasq-dns-78fbc4bbf-nscvg\" (UID: \"b68e3ad6-883d-4a1f-af69-3a94e705afe8\") " pod="openstack/dnsmasq-dns-78fbc4bbf-nscvg" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.113213 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b68e3ad6-883d-4a1f-af69-3a94e705afe8-config\") pod \"dnsmasq-dns-78fbc4bbf-nscvg\" (UID: \"b68e3ad6-883d-4a1f-af69-3a94e705afe8\") " pod="openstack/dnsmasq-dns-78fbc4bbf-nscvg" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.115122 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b68e3ad6-883d-4a1f-af69-3a94e705afe8-ovsdbserver-sb\") pod \"dnsmasq-dns-78fbc4bbf-nscvg\" (UID: \"b68e3ad6-883d-4a1f-af69-3a94e705afe8\") " pod="openstack/dnsmasq-dns-78fbc4bbf-nscvg" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.115682 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b68e3ad6-883d-4a1f-af69-3a94e705afe8-ovsdbserver-nb\") pod \"dnsmasq-dns-78fbc4bbf-nscvg\" (UID: \"b68e3ad6-883d-4a1f-af69-3a94e705afe8\") " pod="openstack/dnsmasq-dns-78fbc4bbf-nscvg" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.120541 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2d8e429-147e-426b-8310-85bb09cace99-combined-ca-bundle\") pod \"keystone-bootstrap-xzm8b\" (UID: \"a2d8e429-147e-426b-8310-85bb09cace99\") " pod="openstack/keystone-bootstrap-xzm8b" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.120967 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a2d8e429-147e-426b-8310-85bb09cace99-credential-keys\") pod \"keystone-bootstrap-xzm8b\" (UID: \"a2d8e429-147e-426b-8310-85bb09cace99\") " 
pod="openstack/keystone-bootstrap-xzm8b" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.121193 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2d8e429-147e-426b-8310-85bb09cace99-scripts\") pod \"keystone-bootstrap-xzm8b\" (UID: \"a2d8e429-147e-426b-8310-85bb09cace99\") " pod="openstack/keystone-bootstrap-xzm8b" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.127095 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a2d8e429-147e-426b-8310-85bb09cace99-fernet-keys\") pod \"keystone-bootstrap-xzm8b\" (UID: \"a2d8e429-147e-426b-8310-85bb09cace99\") " pod="openstack/keystone-bootstrap-xzm8b" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.132380 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-zztph"] Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.134981 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-zztph" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.136616 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2d8e429-147e-426b-8310-85bb09cace99-config-data\") pod \"keystone-bootstrap-xzm8b\" (UID: \"a2d8e429-147e-426b-8310-85bb09cace99\") " pod="openstack/keystone-bootstrap-xzm8b" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.144039 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.144208 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-j2wv7" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.144317 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.152172 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9q2z\" (UniqueName: \"kubernetes.io/projected/b68e3ad6-883d-4a1f-af69-3a94e705afe8-kube-api-access-z9q2z\") pod \"dnsmasq-dns-78fbc4bbf-nscvg\" (UID: \"b68e3ad6-883d-4a1f-af69-3a94e705afe8\") " pod="openstack/dnsmasq-dns-78fbc4bbf-nscvg" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.167241 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj296\" (UniqueName: \"kubernetes.io/projected/a2d8e429-147e-426b-8310-85bb09cace99-kube-api-access-kj296\") pod \"keystone-bootstrap-xzm8b\" (UID: \"a2d8e429-147e-426b-8310-85bb09cace99\") " pod="openstack/keystone-bootstrap-xzm8b" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.180681 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-zztph"] Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.198782 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-xzm8b" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.212984 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0847c2e9-9761-4a9a-96fa-a216884fc3dc-config\") pod \"neutron-db-sync-bbl7b\" (UID: \"0847c2e9-9761-4a9a-96fa-a216884fc3dc\") " pod="openstack/neutron-db-sync-bbl7b" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.213234 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0847c2e9-9761-4a9a-96fa-a216884fc3dc-combined-ca-bundle\") pod \"neutron-db-sync-bbl7b\" (UID: \"0847c2e9-9761-4a9a-96fa-a216884fc3dc\") " pod="openstack/neutron-db-sync-bbl7b" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.213412 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xvwv\" (UniqueName: \"kubernetes.io/projected/0847c2e9-9761-4a9a-96fa-a216884fc3dc-kube-api-access-4xvwv\") pod \"neutron-db-sync-bbl7b\" (UID: \"0847c2e9-9761-4a9a-96fa-a216884fc3dc\") " pod="openstack/neutron-db-sync-bbl7b" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.225429 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78fbc4bbf-nscvg"] Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.225973 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78fbc4bbf-nscvg" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.263822 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-rnmr4"] Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.264675 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-rnmr4" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.269224 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-hbjd7" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.269593 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.278013 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.287648 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d87b7c6dc-f7kdt"] Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.289562 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d87b7c6dc-f7kdt" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.302734 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-rnmr4"] Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.315170 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1cd5889f-63d3-47a0-8b17-e8ffac0011d3-db-sync-config-data\") pod \"cinder-db-sync-zztph\" (UID: \"1cd5889f-63d3-47a0-8b17-e8ffac0011d3\") " pod="openstack/cinder-db-sync-zztph" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.322704 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0847c2e9-9761-4a9a-96fa-a216884fc3dc-combined-ca-bundle\") pod \"neutron-db-sync-bbl7b\" (UID: \"0847c2e9-9761-4a9a-96fa-a216884fc3dc\") " pod="openstack/neutron-db-sync-bbl7b" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.322818 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cd5889f-63d3-47a0-8b17-e8ffac0011d3-config-data\") pod \"cinder-db-sync-zztph\" (UID: \"1cd5889f-63d3-47a0-8b17-e8ffac0011d3\") " pod="openstack/cinder-db-sync-zztph" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.322955 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cd5889f-63d3-47a0-8b17-e8ffac0011d3-scripts\") pod \"cinder-db-sync-zztph\" (UID: \"1cd5889f-63d3-47a0-8b17-e8ffac0011d3\") " pod="openstack/cinder-db-sync-zztph" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.323061 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6wfm\" (UniqueName: \"kubernetes.io/projected/1cd5889f-63d3-47a0-8b17-e8ffac0011d3-kube-api-access-q6wfm\") pod \"cinder-db-sync-zztph\" (UID: \"1cd5889f-63d3-47a0-8b17-e8ffac0011d3\") " pod="openstack/cinder-db-sync-zztph" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.323184 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xvwv\" (UniqueName: \"kubernetes.io/projected/0847c2e9-9761-4a9a-96fa-a216884fc3dc-kube-api-access-4xvwv\") pod \"neutron-db-sync-bbl7b\" (UID: \"0847c2e9-9761-4a9a-96fa-a216884fc3dc\") " pod="openstack/neutron-db-sync-bbl7b" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.323283 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cd5889f-63d3-47a0-8b17-e8ffac0011d3-combined-ca-bundle\") pod \"cinder-db-sync-zztph\" (UID: \"1cd5889f-63d3-47a0-8b17-e8ffac0011d3\") " pod="openstack/cinder-db-sync-zztph" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.323445 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1cd5889f-63d3-47a0-8b17-e8ffac0011d3-etc-machine-id\") pod \"cinder-db-sync-zztph\" (UID: \"1cd5889f-63d3-47a0-8b17-e8ffac0011d3\") " pod="openstack/cinder-db-sync-zztph" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.323550 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/0847c2e9-9761-4a9a-96fa-a216884fc3dc-config\") pod \"neutron-db-sync-bbl7b\" (UID: \"0847c2e9-9761-4a9a-96fa-a216884fc3dc\") " pod="openstack/neutron-db-sync-bbl7b" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.326195 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d87b7c6dc-f7kdt"] Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.327492 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0847c2e9-9761-4a9a-96fa-a216884fc3dc-config\") pod \"neutron-db-sync-bbl7b\" (UID: \"0847c2e9-9761-4a9a-96fa-a216884fc3dc\") " pod="openstack/neutron-db-sync-bbl7b" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.329721 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0847c2e9-9761-4a9a-96fa-a216884fc3dc-combined-ca-bundle\") pod \"neutron-db-sync-bbl7b\" (UID: \"0847c2e9-9761-4a9a-96fa-a216884fc3dc\") " pod="openstack/neutron-db-sync-bbl7b" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.373733 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xvwv\" (UniqueName: \"kubernetes.io/projected/0847c2e9-9761-4a9a-96fa-a216884fc3dc-kube-api-access-4xvwv\") pod \"neutron-db-sync-bbl7b\" (UID: \"0847c2e9-9761-4a9a-96fa-a216884fc3dc\") " pod="openstack/neutron-db-sync-bbl7b" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.376500 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.378605 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-bbl7b" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.378885 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.390863 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.391196 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.403486 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.425892 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b642751-b1e4-4488-b305-fed7f4fcd9fa-config-data\") pod \"placement-db-sync-rnmr4\" (UID: \"7b642751-b1e4-4488-b305-fed7f4fcd9fa\") " pod="openstack/placement-db-sync-rnmr4" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.425936 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cd5889f-63d3-47a0-8b17-e8ffac0011d3-scripts\") pod \"cinder-db-sync-zztph\" (UID: \"1cd5889f-63d3-47a0-8b17-e8ffac0011d3\") " pod="openstack/cinder-db-sync-zztph" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.425959 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6wfm\" (UniqueName: \"kubernetes.io/projected/1cd5889f-63d3-47a0-8b17-e8ffac0011d3-kube-api-access-q6wfm\") pod \"cinder-db-sync-zztph\" (UID: \"1cd5889f-63d3-47a0-8b17-e8ffac0011d3\") " pod="openstack/cinder-db-sync-zztph" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.425977 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dp2c\" (UniqueName: \"kubernetes.io/projected/7b642751-b1e4-4488-b305-fed7f4fcd9fa-kube-api-access-9dp2c\") pod \"placement-db-sync-rnmr4\" (UID: \"7b642751-b1e4-4488-b305-fed7f4fcd9fa\") " pod="openstack/placement-db-sync-rnmr4" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.425995 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8d7l\" (UniqueName: \"kubernetes.io/projected/36b87b12-1080-46b8-a342-b9e743377f23-kube-api-access-l8d7l\") pod \"dnsmasq-dns-5d87b7c6dc-f7kdt\" (UID: \"36b87b12-1080-46b8-a342-b9e743377f23\") " pod="openstack/dnsmasq-dns-5d87b7c6dc-f7kdt" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.426023 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b642751-b1e4-4488-b305-fed7f4fcd9fa-combined-ca-bundle\") pod \"placement-db-sync-rnmr4\" (UID: \"7b642751-b1e4-4488-b305-fed7f4fcd9fa\") " pod="openstack/placement-db-sync-rnmr4" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.426048 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8n8s\" (UniqueName: \"kubernetes.io/projected/173400f9-c99e-4737-b27c-cff0bdb5ee94-kube-api-access-s8n8s\") pod \"ceilometer-0\" (UID: \"173400f9-c99e-4737-b27c-cff0bdb5ee94\") " pod="openstack/ceilometer-0" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.426065 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cd5889f-63d3-47a0-8b17-e8ffac0011d3-combined-ca-bundle\") 
pod \"cinder-db-sync-zztph\" (UID: \"1cd5889f-63d3-47a0-8b17-e8ffac0011d3\") " pod="openstack/cinder-db-sync-zztph" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.426104 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b87b12-1080-46b8-a342-b9e743377f23-config\") pod \"dnsmasq-dns-5d87b7c6dc-f7kdt\" (UID: \"36b87b12-1080-46b8-a342-b9e743377f23\") " pod="openstack/dnsmasq-dns-5d87b7c6dc-f7kdt" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.426120 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36b87b12-1080-46b8-a342-b9e743377f23-dns-svc\") pod \"dnsmasq-dns-5d87b7c6dc-f7kdt\" (UID: \"36b87b12-1080-46b8-a342-b9e743377f23\") " pod="openstack/dnsmasq-dns-5d87b7c6dc-f7kdt" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.426146 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1cd5889f-63d3-47a0-8b17-e8ffac0011d3-etc-machine-id\") pod \"cinder-db-sync-zztph\" (UID: \"1cd5889f-63d3-47a0-8b17-e8ffac0011d3\") " pod="openstack/cinder-db-sync-zztph" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.426181 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/173400f9-c99e-4737-b27c-cff0bdb5ee94-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"173400f9-c99e-4737-b27c-cff0bdb5ee94\") " pod="openstack/ceilometer-0" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.426194 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/173400f9-c99e-4737-b27c-cff0bdb5ee94-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"173400f9-c99e-4737-b27c-cff0bdb5ee94\") " pod="openstack/ceilometer-0" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.426237 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/173400f9-c99e-4737-b27c-cff0bdb5ee94-log-httpd\") pod \"ceilometer-0\" (UID: \"173400f9-c99e-4737-b27c-cff0bdb5ee94\") " pod="openstack/ceilometer-0" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.426259 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1cd5889f-63d3-47a0-8b17-e8ffac0011d3-db-sync-config-data\") pod \"cinder-db-sync-zztph\" (UID: \"1cd5889f-63d3-47a0-8b17-e8ffac0011d3\") " pod="openstack/cinder-db-sync-zztph" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.426273 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/173400f9-c99e-4737-b27c-cff0bdb5ee94-scripts\") pod \"ceilometer-0\" (UID: \"173400f9-c99e-4737-b27c-cff0bdb5ee94\") " pod="openstack/ceilometer-0" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.426301 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36b87b12-1080-46b8-a342-b9e743377f23-ovsdbserver-nb\") pod \"dnsmasq-dns-5d87b7c6dc-f7kdt\" (UID: \"36b87b12-1080-46b8-a342-b9e743377f23\") " pod="openstack/dnsmasq-dns-5d87b7c6dc-f7kdt" Jan 22 
09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.426316 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36b87b12-1080-46b8-a342-b9e743377f23-ovsdbserver-sb\") pod \"dnsmasq-dns-5d87b7c6dc-f7kdt\" (UID: \"36b87b12-1080-46b8-a342-b9e743377f23\") " pod="openstack/dnsmasq-dns-5d87b7c6dc-f7kdt" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.426335 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/173400f9-c99e-4737-b27c-cff0bdb5ee94-run-httpd\") pod \"ceilometer-0\" (UID: \"173400f9-c99e-4737-b27c-cff0bdb5ee94\") " pod="openstack/ceilometer-0" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.426348 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b642751-b1e4-4488-b305-fed7f4fcd9fa-logs\") pod \"placement-db-sync-rnmr4\" (UID: \"7b642751-b1e4-4488-b305-fed7f4fcd9fa\") " pod="openstack/placement-db-sync-rnmr4" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.426374 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/173400f9-c99e-4737-b27c-cff0bdb5ee94-config-data\") pod \"ceilometer-0\" (UID: \"173400f9-c99e-4737-b27c-cff0bdb5ee94\") " pod="openstack/ceilometer-0" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.426393 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b642751-b1e4-4488-b305-fed7f4fcd9fa-scripts\") pod \"placement-db-sync-rnmr4\" (UID: \"7b642751-b1e4-4488-b305-fed7f4fcd9fa\") " pod="openstack/placement-db-sync-rnmr4" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.426406 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cd5889f-63d3-47a0-8b17-e8ffac0011d3-config-data\") pod \"cinder-db-sync-zztph\" (UID: \"1cd5889f-63d3-47a0-8b17-e8ffac0011d3\") " pod="openstack/cinder-db-sync-zztph" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.427439 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1cd5889f-63d3-47a0-8b17-e8ffac0011d3-etc-machine-id\") pod \"cinder-db-sync-zztph\" (UID: \"1cd5889f-63d3-47a0-8b17-e8ffac0011d3\") " pod="openstack/cinder-db-sync-zztph" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.431037 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cd5889f-63d3-47a0-8b17-e8ffac0011d3-config-data\") pod \"cinder-db-sync-zztph\" (UID: \"1cd5889f-63d3-47a0-8b17-e8ffac0011d3\") " pod="openstack/cinder-db-sync-zztph" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.431394 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1cd5889f-63d3-47a0-8b17-e8ffac0011d3-db-sync-config-data\") pod \"cinder-db-sync-zztph\" (UID: \"1cd5889f-63d3-47a0-8b17-e8ffac0011d3\") " pod="openstack/cinder-db-sync-zztph" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.445250 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1cd5889f-63d3-47a0-8b17-e8ffac0011d3-combined-ca-bundle\") pod \"cinder-db-sync-zztph\" (UID: \"1cd5889f-63d3-47a0-8b17-e8ffac0011d3\") " pod="openstack/cinder-db-sync-zztph" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.445540 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cd5889f-63d3-47a0-8b17-e8ffac0011d3-scripts\") pod \"cinder-db-sync-zztph\" (UID: \"1cd5889f-63d3-47a0-8b17-e8ffac0011d3\") " pod="openstack/cinder-db-sync-zztph" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.461060 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6wfm\" (UniqueName: \"kubernetes.io/projected/1cd5889f-63d3-47a0-8b17-e8ffac0011d3-kube-api-access-q6wfm\") pod \"cinder-db-sync-zztph\" (UID: \"1cd5889f-63d3-47a0-8b17-e8ffac0011d3\") " pod="openstack/cinder-db-sync-zztph" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.488248 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-zmjzd"] Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.493823 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-zmjzd" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.499086 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.499290 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-4drkf" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.527155 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b87b12-1080-46b8-a342-b9e743377f23-config\") pod \"dnsmasq-dns-5d87b7c6dc-f7kdt\" (UID: \"36b87b12-1080-46b8-a342-b9e743377f23\") " pod="openstack/dnsmasq-dns-5d87b7c6dc-f7kdt" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.527192 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36b87b12-1080-46b8-a342-b9e743377f23-dns-svc\") pod \"dnsmasq-dns-5d87b7c6dc-f7kdt\" (UID: \"36b87b12-1080-46b8-a342-b9e743377f23\") " pod="openstack/dnsmasq-dns-5d87b7c6dc-f7kdt" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.527228 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/173400f9-c99e-4737-b27c-cff0bdb5ee94-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"173400f9-c99e-4737-b27c-cff0bdb5ee94\") " pod="openstack/ceilometer-0" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.527241 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/173400f9-c99e-4737-b27c-cff0bdb5ee94-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"173400f9-c99e-4737-b27c-cff0bdb5ee94\") " pod="openstack/ceilometer-0" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.527275 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/173400f9-c99e-4737-b27c-cff0bdb5ee94-log-httpd\") pod \"ceilometer-0\" (UID: \"173400f9-c99e-4737-b27c-cff0bdb5ee94\") " pod="openstack/ceilometer-0" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.527295 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/173400f9-c99e-4737-b27c-cff0bdb5ee94-scripts\") pod \"ceilometer-0\" (UID: \"173400f9-c99e-4737-b27c-cff0bdb5ee94\") " pod="openstack/ceilometer-0" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.527313 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36b87b12-1080-46b8-a342-b9e743377f23-ovsdbserver-nb\") pod \"dnsmasq-dns-5d87b7c6dc-f7kdt\" (UID: \"36b87b12-1080-46b8-a342-b9e743377f23\") " pod="openstack/dnsmasq-dns-5d87b7c6dc-f7kdt" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.527326 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36b87b12-1080-46b8-a342-b9e743377f23-ovsdbserver-sb\") pod \"dnsmasq-dns-5d87b7c6dc-f7kdt\" (UID: \"36b87b12-1080-46b8-a342-b9e743377f23\") " pod="openstack/dnsmasq-dns-5d87b7c6dc-f7kdt" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.527342 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/173400f9-c99e-4737-b27c-cff0bdb5ee94-run-httpd\") pod \"ceilometer-0\" (UID: \"173400f9-c99e-4737-b27c-cff0bdb5ee94\") " pod="openstack/ceilometer-0" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.527354 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b642751-b1e4-4488-b305-fed7f4fcd9fa-logs\") pod \"placement-db-sync-rnmr4\" (UID: \"7b642751-b1e4-4488-b305-fed7f4fcd9fa\") " pod="openstack/placement-db-sync-rnmr4" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.527377 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/173400f9-c99e-4737-b27c-cff0bdb5ee94-config-data\") pod \"ceilometer-0\" (UID: \"173400f9-c99e-4737-b27c-cff0bdb5ee94\") " pod="openstack/ceilometer-0" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.527393 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b642751-b1e4-4488-b305-fed7f4fcd9fa-scripts\") pod \"placement-db-sync-rnmr4\" (UID: \"7b642751-b1e4-4488-b305-fed7f4fcd9fa\") " pod="openstack/placement-db-sync-rnmr4" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.527414 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e22a065b-3b3c-41a9-ad35-b1c1e594af9b-combined-ca-bundle\") pod \"barbican-db-sync-zmjzd\" (UID: \"e22a065b-3b3c-41a9-ad35-b1c1e594af9b\") " pod="openstack/barbican-db-sync-zmjzd" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.527436 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b642751-b1e4-4488-b305-fed7f4fcd9fa-config-data\") pod \"placement-db-sync-rnmr4\" (UID: \"7b642751-b1e4-4488-b305-fed7f4fcd9fa\") " pod="openstack/placement-db-sync-rnmr4" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.527455 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dp2c\" (UniqueName: \"kubernetes.io/projected/7b642751-b1e4-4488-b305-fed7f4fcd9fa-kube-api-access-9dp2c\") pod \"placement-db-sync-rnmr4\" (UID: \"7b642751-b1e4-4488-b305-fed7f4fcd9fa\") " 
pod="openstack/placement-db-sync-rnmr4" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.527469 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8d7l\" (UniqueName: \"kubernetes.io/projected/36b87b12-1080-46b8-a342-b9e743377f23-kube-api-access-l8d7l\") pod \"dnsmasq-dns-5d87b7c6dc-f7kdt\" (UID: \"36b87b12-1080-46b8-a342-b9e743377f23\") " pod="openstack/dnsmasq-dns-5d87b7c6dc-f7kdt" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.527490 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b642751-b1e4-4488-b305-fed7f4fcd9fa-combined-ca-bundle\") pod \"placement-db-sync-rnmr4\" (UID: \"7b642751-b1e4-4488-b305-fed7f4fcd9fa\") " pod="openstack/placement-db-sync-rnmr4" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.527504 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmqq7\" (UniqueName: \"kubernetes.io/projected/e22a065b-3b3c-41a9-ad35-b1c1e594af9b-kube-api-access-tmqq7\") pod \"barbican-db-sync-zmjzd\" (UID: \"e22a065b-3b3c-41a9-ad35-b1c1e594af9b\") " pod="openstack/barbican-db-sync-zmjzd" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.527525 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e22a065b-3b3c-41a9-ad35-b1c1e594af9b-db-sync-config-data\") pod \"barbican-db-sync-zmjzd\" (UID: \"e22a065b-3b3c-41a9-ad35-b1c1e594af9b\") " pod="openstack/barbican-db-sync-zmjzd" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.527542 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8n8s\" (UniqueName: \"kubernetes.io/projected/173400f9-c99e-4737-b27c-cff0bdb5ee94-kube-api-access-s8n8s\") pod \"ceilometer-0\" (UID: \"173400f9-c99e-4737-b27c-cff0bdb5ee94\") " pod="openstack/ceilometer-0" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.528211 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b642751-b1e4-4488-b305-fed7f4fcd9fa-logs\") pod \"placement-db-sync-rnmr4\" (UID: \"7b642751-b1e4-4488-b305-fed7f4fcd9fa\") " pod="openstack/placement-db-sync-rnmr4" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.529200 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b87b12-1080-46b8-a342-b9e743377f23-config\") pod \"dnsmasq-dns-5d87b7c6dc-f7kdt\" (UID: \"36b87b12-1080-46b8-a342-b9e743377f23\") " pod="openstack/dnsmasq-dns-5d87b7c6dc-f7kdt" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.532788 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36b87b12-1080-46b8-a342-b9e743377f23-dns-svc\") pod \"dnsmasq-dns-5d87b7c6dc-f7kdt\" (UID: \"36b87b12-1080-46b8-a342-b9e743377f23\") " pod="openstack/dnsmasq-dns-5d87b7c6dc-f7kdt" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.534662 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/173400f9-c99e-4737-b27c-cff0bdb5ee94-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"173400f9-c99e-4737-b27c-cff0bdb5ee94\") " pod="openstack/ceilometer-0" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.535382 4811 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36b87b12-1080-46b8-a342-b9e743377f23-ovsdbserver-sb\") pod \"dnsmasq-dns-5d87b7c6dc-f7kdt\" (UID: \"36b87b12-1080-46b8-a342-b9e743377f23\") " pod="openstack/dnsmasq-dns-5d87b7c6dc-f7kdt" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.535739 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/173400f9-c99e-4737-b27c-cff0bdb5ee94-run-httpd\") pod \"ceilometer-0\" (UID: \"173400f9-c99e-4737-b27c-cff0bdb5ee94\") " pod="openstack/ceilometer-0" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.536020 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/173400f9-c99e-4737-b27c-cff0bdb5ee94-log-httpd\") pod \"ceilometer-0\" (UID: \"173400f9-c99e-4737-b27c-cff0bdb5ee94\") " pod="openstack/ceilometer-0" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.547499 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-zmjzd"] Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.551353 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36b87b12-1080-46b8-a342-b9e743377f23-ovsdbserver-nb\") pod \"dnsmasq-dns-5d87b7c6dc-f7kdt\" (UID: \"36b87b12-1080-46b8-a342-b9e743377f23\") " pod="openstack/dnsmasq-dns-5d87b7c6dc-f7kdt" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.552695 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8n8s\" (UniqueName: \"kubernetes.io/projected/173400f9-c99e-4737-b27c-cff0bdb5ee94-kube-api-access-s8n8s\") pod \"ceilometer-0\" (UID: \"173400f9-c99e-4737-b27c-cff0bdb5ee94\") " pod="openstack/ceilometer-0" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.553568 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b642751-b1e4-4488-b305-fed7f4fcd9fa-scripts\") pod \"placement-db-sync-rnmr4\" (UID: \"7b642751-b1e4-4488-b305-fed7f4fcd9fa\") " pod="openstack/placement-db-sync-rnmr4" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.556018 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/173400f9-c99e-4737-b27c-cff0bdb5ee94-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"173400f9-c99e-4737-b27c-cff0bdb5ee94\") " pod="openstack/ceilometer-0" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.556315 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/173400f9-c99e-4737-b27c-cff0bdb5ee94-scripts\") pod \"ceilometer-0\" (UID: \"173400f9-c99e-4737-b27c-cff0bdb5ee94\") " pod="openstack/ceilometer-0" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.559426 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b642751-b1e4-4488-b305-fed7f4fcd9fa-combined-ca-bundle\") pod \"placement-db-sync-rnmr4\" (UID: \"7b642751-b1e4-4488-b305-fed7f4fcd9fa\") " pod="openstack/placement-db-sync-rnmr4" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.563763 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/173400f9-c99e-4737-b27c-cff0bdb5ee94-config-data\") pod \"ceilometer-0\" (UID: 
\"173400f9-c99e-4737-b27c-cff0bdb5ee94\") " pod="openstack/ceilometer-0" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.568415 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dp2c\" (UniqueName: \"kubernetes.io/projected/7b642751-b1e4-4488-b305-fed7f4fcd9fa-kube-api-access-9dp2c\") pod \"placement-db-sync-rnmr4\" (UID: \"7b642751-b1e4-4488-b305-fed7f4fcd9fa\") " pod="openstack/placement-db-sync-rnmr4" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.580115 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-zztph" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.595341 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b642751-b1e4-4488-b305-fed7f4fcd9fa-config-data\") pod \"placement-db-sync-rnmr4\" (UID: \"7b642751-b1e4-4488-b305-fed7f4fcd9fa\") " pod="openstack/placement-db-sync-rnmr4" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.597098 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-rnmr4" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.601078 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8d7l\" (UniqueName: \"kubernetes.io/projected/36b87b12-1080-46b8-a342-b9e743377f23-kube-api-access-l8d7l\") pod \"dnsmasq-dns-5d87b7c6dc-f7kdt\" (UID: \"36b87b12-1080-46b8-a342-b9e743377f23\") " pod="openstack/dnsmasq-dns-5d87b7c6dc-f7kdt" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.620881 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d87b7c6dc-f7kdt" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.639497 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e22a065b-3b3c-41a9-ad35-b1c1e594af9b-combined-ca-bundle\") pod \"barbican-db-sync-zmjzd\" (UID: \"e22a065b-3b3c-41a9-ad35-b1c1e594af9b\") " pod="openstack/barbican-db-sync-zmjzd" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.641749 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmqq7\" (UniqueName: \"kubernetes.io/projected/e22a065b-3b3c-41a9-ad35-b1c1e594af9b-kube-api-access-tmqq7\") pod \"barbican-db-sync-zmjzd\" (UID: \"e22a065b-3b3c-41a9-ad35-b1c1e594af9b\") " pod="openstack/barbican-db-sync-zmjzd" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.641818 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e22a065b-3b3c-41a9-ad35-b1c1e594af9b-db-sync-config-data\") pod \"barbican-db-sync-zmjzd\" (UID: \"e22a065b-3b3c-41a9-ad35-b1c1e594af9b\") " pod="openstack/barbican-db-sync-zmjzd" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.651853 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e22a065b-3b3c-41a9-ad35-b1c1e594af9b-db-sync-config-data\") pod \"barbican-db-sync-zmjzd\" (UID: \"e22a065b-3b3c-41a9-ad35-b1c1e594af9b\") " pod="openstack/barbican-db-sync-zmjzd" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.660400 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e22a065b-3b3c-41a9-ad35-b1c1e594af9b-combined-ca-bundle\") pod 
\"barbican-db-sync-zmjzd\" (UID: \"e22a065b-3b3c-41a9-ad35-b1c1e594af9b\") " pod="openstack/barbican-db-sync-zmjzd" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.668528 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmqq7\" (UniqueName: \"kubernetes.io/projected/e22a065b-3b3c-41a9-ad35-b1c1e594af9b-kube-api-access-tmqq7\") pod \"barbican-db-sync-zmjzd\" (UID: \"e22a065b-3b3c-41a9-ad35-b1c1e594af9b\") " pod="openstack/barbican-db-sync-zmjzd" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.720918 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.822960 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-zmjzd" Jan 22 09:21:02 crc kubenswrapper[4811]: I0122 09:21:02.961723 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xzm8b"] Jan 22 09:21:03 crc kubenswrapper[4811]: I0122 09:21:03.027226 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-bbl7b"] Jan 22 09:21:03 crc kubenswrapper[4811]: I0122 09:21:03.056782 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78fbc4bbf-nscvg"] Jan 22 09:21:03 crc kubenswrapper[4811]: I0122 09:21:03.269491 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-rnmr4"] Jan 22 09:21:03 crc kubenswrapper[4811]: W0122 09:21:03.296608 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b642751_b1e4_4488_b305_fed7f4fcd9fa.slice/crio-c8e2ffb6e740654e0fafdd010d459fc712427c7a78b0a1d873c07ceb7cebd5dc WatchSource:0}: Error finding container c8e2ffb6e740654e0fafdd010d459fc712427c7a78b0a1d873c07ceb7cebd5dc: Status 404 returned error can't find the container with id c8e2ffb6e740654e0fafdd010d459fc712427c7a78b0a1d873c07ceb7cebd5dc Jan 22 09:21:03 crc kubenswrapper[4811]: I0122 09:21:03.625691 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rnmr4" event={"ID":"7b642751-b1e4-4488-b305-fed7f4fcd9fa","Type":"ContainerStarted","Data":"c8e2ffb6e740654e0fafdd010d459fc712427c7a78b0a1d873c07ceb7cebd5dc"} Jan 22 09:21:03 crc kubenswrapper[4811]: I0122 09:21:03.627143 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-bbl7b" event={"ID":"0847c2e9-9761-4a9a-96fa-a216884fc3dc","Type":"ContainerStarted","Data":"ad14d3f990ab1077c0a5c3d8ce964721a1571b7e78fb9cc881b77a70d8d71444"} Jan 22 09:21:03 crc kubenswrapper[4811]: I0122 09:21:03.627178 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-bbl7b" event={"ID":"0847c2e9-9761-4a9a-96fa-a216884fc3dc","Type":"ContainerStarted","Data":"1986fedd4520d83a4716a0a28813a60723dd41a7b2db1eb3b873afcc53597253"} Jan 22 09:21:03 crc kubenswrapper[4811]: I0122 09:21:03.629323 4811 generic.go:334] "Generic (PLEG): container finished" podID="b68e3ad6-883d-4a1f-af69-3a94e705afe8" containerID="c5c11f84a968fb193c0b3c64870df170056c389340c7c4be1530af84a225c7d0" exitCode=0 Jan 22 09:21:03 crc kubenswrapper[4811]: I0122 09:21:03.629397 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78fbc4bbf-nscvg" event={"ID":"b68e3ad6-883d-4a1f-af69-3a94e705afe8","Type":"ContainerDied","Data":"c5c11f84a968fb193c0b3c64870df170056c389340c7c4be1530af84a225c7d0"} Jan 22 09:21:03 crc 
kubenswrapper[4811]: I0122 09:21:03.629414 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78fbc4bbf-nscvg" event={"ID":"b68e3ad6-883d-4a1f-af69-3a94e705afe8","Type":"ContainerStarted","Data":"a2c8a6d280d507ddc6b4eaf586842cbebabd9690630d1ccef136ae51207e4a27"} Jan 22 09:21:03 crc kubenswrapper[4811]: I0122 09:21:03.633697 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xzm8b" event={"ID":"a2d8e429-147e-426b-8310-85bb09cace99","Type":"ContainerStarted","Data":"ae3bf1fa80785b6d4df74303472c557a6ac0718284d5eb541b8118e80750e927"} Jan 22 09:21:03 crc kubenswrapper[4811]: I0122 09:21:03.633724 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xzm8b" event={"ID":"a2d8e429-147e-426b-8310-85bb09cace99","Type":"ContainerStarted","Data":"7753407831f30cf9db485d463c7c4afdd0b6c02c77f0cc35c464f929f6c097df"} Jan 22 09:21:03 crc kubenswrapper[4811]: I0122 09:21:03.646533 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-bbl7b" podStartSLOduration=1.646521616 podStartE2EDuration="1.646521616s" podCreationTimestamp="2026-01-22 09:21:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:21:03.641202075 +0000 UTC m=+907.963389198" watchObservedRunningTime="2026-01-22 09:21:03.646521616 +0000 UTC m=+907.968708738" Jan 22 09:21:03 crc kubenswrapper[4811]: I0122 09:21:03.667550 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-xzm8b" podStartSLOduration=2.667537951 podStartE2EDuration="2.667537951s" podCreationTimestamp="2026-01-22 09:21:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:21:03.665137495 +0000 UTC m=+907.987324619" watchObservedRunningTime="2026-01-22 09:21:03.667537951 +0000 UTC m=+907.989725074" Jan 22 09:21:03 crc kubenswrapper[4811]: I0122 09:21:03.968154 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:21:03 crc kubenswrapper[4811]: I0122 09:21:03.985546 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-zmjzd"] Jan 22 09:21:04 crc kubenswrapper[4811]: I0122 09:21:04.001501 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d87b7c6dc-f7kdt"] Jan 22 09:21:04 crc kubenswrapper[4811]: I0122 09:21:04.013120 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78fbc4bbf-nscvg" Jan 22 09:21:04 crc kubenswrapper[4811]: I0122 09:21:04.061521 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9q2z\" (UniqueName: \"kubernetes.io/projected/b68e3ad6-883d-4a1f-af69-3a94e705afe8-kube-api-access-z9q2z\") pod \"b68e3ad6-883d-4a1f-af69-3a94e705afe8\" (UID: \"b68e3ad6-883d-4a1f-af69-3a94e705afe8\") " Jan 22 09:21:04 crc kubenswrapper[4811]: I0122 09:21:04.061560 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b68e3ad6-883d-4a1f-af69-3a94e705afe8-config\") pod \"b68e3ad6-883d-4a1f-af69-3a94e705afe8\" (UID: \"b68e3ad6-883d-4a1f-af69-3a94e705afe8\") " Jan 22 09:21:04 crc kubenswrapper[4811]: I0122 09:21:04.061589 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b68e3ad6-883d-4a1f-af69-3a94e705afe8-dns-svc\") pod \"b68e3ad6-883d-4a1f-af69-3a94e705afe8\" (UID: \"b68e3ad6-883d-4a1f-af69-3a94e705afe8\") " Jan 22 09:21:04 crc kubenswrapper[4811]: I0122 09:21:04.061706 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b68e3ad6-883d-4a1f-af69-3a94e705afe8-ovsdbserver-nb\") pod \"b68e3ad6-883d-4a1f-af69-3a94e705afe8\" (UID: \"b68e3ad6-883d-4a1f-af69-3a94e705afe8\") " Jan 22 09:21:04 crc kubenswrapper[4811]: I0122 09:21:04.061735 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b68e3ad6-883d-4a1f-af69-3a94e705afe8-ovsdbserver-sb\") pod \"b68e3ad6-883d-4a1f-af69-3a94e705afe8\" (UID: \"b68e3ad6-883d-4a1f-af69-3a94e705afe8\") " Jan 22 09:21:04 crc kubenswrapper[4811]: I0122 09:21:04.072314 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b68e3ad6-883d-4a1f-af69-3a94e705afe8-kube-api-access-z9q2z" (OuterVolumeSpecName: "kube-api-access-z9q2z") pod "b68e3ad6-883d-4a1f-af69-3a94e705afe8" (UID: "b68e3ad6-883d-4a1f-af69-3a94e705afe8"). InnerVolumeSpecName "kube-api-access-z9q2z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:21:04 crc kubenswrapper[4811]: I0122 09:21:04.080036 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-zztph"] Jan 22 09:21:04 crc kubenswrapper[4811]: I0122 09:21:04.084111 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b68e3ad6-883d-4a1f-af69-3a94e705afe8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b68e3ad6-883d-4a1f-af69-3a94e705afe8" (UID: "b68e3ad6-883d-4a1f-af69-3a94e705afe8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:21:04 crc kubenswrapper[4811]: I0122 09:21:04.090233 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b68e3ad6-883d-4a1f-af69-3a94e705afe8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b68e3ad6-883d-4a1f-af69-3a94e705afe8" (UID: "b68e3ad6-883d-4a1f-af69-3a94e705afe8"). InnerVolumeSpecName "dns-svc". 
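
The UnmountVolume/TearDown entries above show the kubelet's volume manager reacting to the SyncLoop DELETE for dnsmasq-dns-78fbc4bbf-nscvg: once the pod leaves the desired state, every volume still mounted for its UID is torn down, and each line records both the name from the pod spec (OuterVolumeSpecName) and the plugin-resolved name (InnerVolumeSpecName). A minimal sketch of that desired-versus-actual diff, in Go since that is what the kubelet is written in; this is an illustrative model with hypothetical names, not the actual volume-manager code:

    package main

    import "fmt"

    // volumeKey identifies one mounted volume of one pod.
    type volumeKey struct {
        podUID string
        volume string
    }

    // volumesToUnmount returns every mounted volume whose pod/volume pair
    // is no longer in the desired state (pod deleted, or volume dropped).
    func volumesToUnmount(desired, mounted map[volumeKey]bool) []volumeKey {
        var out []volumeKey
        for k := range mounted {
            if !desired[k] {
                out = append(out, k)
            }
        }
        return out
    }

    func main() {
        uid := "b68e3ad6-883d-4a1f-af69-3a94e705afe8"
        mounted := map[volumeKey]bool{
            {uid, "dns-svc"}:               true,
            {uid, "kube-api-access-z9q2z"}: true,
        }
        desired := map[volumeKey]bool{} // pod was deleted from the API
        for _, k := range volumesToUnmount(desired, mounted) {
            fmt.Printf("UnmountVolume started for %q (pod %s)\n", k.volume, k.podUID)
        }
    }

After the diff selects a volume, TearDown runs, and the reconciler finally logs "Volume detached" for each one, as the reconciler_common.go:293 entries that follow show.
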
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:21:04 crc kubenswrapper[4811]: I0122 09:21:04.096165 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b68e3ad6-883d-4a1f-af69-3a94e705afe8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b68e3ad6-883d-4a1f-af69-3a94e705afe8" (UID: "b68e3ad6-883d-4a1f-af69-3a94e705afe8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:21:04 crc kubenswrapper[4811]: I0122 09:21:04.096179 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b68e3ad6-883d-4a1f-af69-3a94e705afe8-config" (OuterVolumeSpecName: "config") pod "b68e3ad6-883d-4a1f-af69-3a94e705afe8" (UID: "b68e3ad6-883d-4a1f-af69-3a94e705afe8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:21:04 crc kubenswrapper[4811]: I0122 09:21:04.163146 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b68e3ad6-883d-4a1f-af69-3a94e705afe8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:04 crc kubenswrapper[4811]: I0122 09:21:04.163177 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b68e3ad6-883d-4a1f-af69-3a94e705afe8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:04 crc kubenswrapper[4811]: I0122 09:21:04.163187 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9q2z\" (UniqueName: \"kubernetes.io/projected/b68e3ad6-883d-4a1f-af69-3a94e705afe8-kube-api-access-z9q2z\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:04 crc kubenswrapper[4811]: I0122 09:21:04.163197 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b68e3ad6-883d-4a1f-af69-3a94e705afe8-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:04 crc kubenswrapper[4811]: I0122 09:21:04.163209 4811 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b68e3ad6-883d-4a1f-af69-3a94e705afe8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:04 crc kubenswrapper[4811]: I0122 09:21:04.238411 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:21:04 crc kubenswrapper[4811]: I0122 09:21:04.649560 4811 generic.go:334] "Generic (PLEG): container finished" podID="36b87b12-1080-46b8-a342-b9e743377f23" containerID="2a5d01c985f836de9107beeb271bdbe89b6d281af17d2eeca1f2d783a8f922bf" exitCode=0 Jan 22 09:21:04 crc kubenswrapper[4811]: I0122 09:21:04.649832 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d87b7c6dc-f7kdt" event={"ID":"36b87b12-1080-46b8-a342-b9e743377f23","Type":"ContainerDied","Data":"2a5d01c985f836de9107beeb271bdbe89b6d281af17d2eeca1f2d783a8f922bf"} Jan 22 09:21:04 crc kubenswrapper[4811]: I0122 09:21:04.650283 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d87b7c6dc-f7kdt" event={"ID":"36b87b12-1080-46b8-a342-b9e743377f23","Type":"ContainerStarted","Data":"88b2d28ad766be5a779a026c7dade30d407225debb9822dd56bc8ee4167560e5"} Jan 22 09:21:04 crc kubenswrapper[4811]: I0122 09:21:04.655208 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"173400f9-c99e-4737-b27c-cff0bdb5ee94","Type":"ContainerStarted","Data":"098423c2a3605e4c619446d724d48d3aff151ca985eacec29cf4afa50b997982"} Jan 22 09:21:04 crc kubenswrapper[4811]: I0122 09:21:04.656970 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-zztph" event={"ID":"1cd5889f-63d3-47a0-8b17-e8ffac0011d3","Type":"ContainerStarted","Data":"aa4cca89ee2fff064a09ad2133ada22854189dd6d3ab4a5226a2b4f72716c1e5"} Jan 22 09:21:04 crc kubenswrapper[4811]: I0122 09:21:04.662171 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-zmjzd" event={"ID":"e22a065b-3b3c-41a9-ad35-b1c1e594af9b","Type":"ContainerStarted","Data":"7ce59424be3a3698d875724ffc129fe1009eab96be4aa85155246668e6676809"} Jan 22 09:21:04 crc kubenswrapper[4811]: I0122 09:21:04.675025 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78fbc4bbf-nscvg" Jan 22 09:21:04 crc kubenswrapper[4811]: I0122 09:21:04.679859 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78fbc4bbf-nscvg" event={"ID":"b68e3ad6-883d-4a1f-af69-3a94e705afe8","Type":"ContainerDied","Data":"a2c8a6d280d507ddc6b4eaf586842cbebabd9690630d1ccef136ae51207e4a27"} Jan 22 09:21:04 crc kubenswrapper[4811]: I0122 09:21:04.679945 4811 scope.go:117] "RemoveContainer" containerID="c5c11f84a968fb193c0b3c64870df170056c389340c7c4be1530af84a225c7d0" Jan 22 09:21:04 crc kubenswrapper[4811]: I0122 09:21:04.838960 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78fbc4bbf-nscvg"] Jan 22 09:21:04 crc kubenswrapper[4811]: I0122 09:21:04.849835 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78fbc4bbf-nscvg"] Jan 22 09:21:05 crc kubenswrapper[4811]: I0122 09:21:05.502139 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:21:05 crc kubenswrapper[4811]: I0122 09:21:05.502507 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:21:05 crc kubenswrapper[4811]: I0122 09:21:05.690337 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d87b7c6dc-f7kdt" event={"ID":"36b87b12-1080-46b8-a342-b9e743377f23","Type":"ContainerStarted","Data":"70a86b019d8d490a8376bf72bdea67e48c1425e64b7851debe6637a69c5e0ecc"} Jan 22 09:21:05 crc kubenswrapper[4811]: I0122 09:21:05.690798 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d87b7c6dc-f7kdt" Jan 22 09:21:05 crc kubenswrapper[4811]: I0122 09:21:05.710652 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d87b7c6dc-f7kdt" podStartSLOduration=3.71063843 podStartE2EDuration="3.71063843s" podCreationTimestamp="2026-01-22 09:21:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:21:05.703192792 +0000 UTC m=+910.025379915" watchObservedRunningTime="2026-01-22 09:21:05.71063843 +0000 
UTC m=+910.032825553" Jan 22 09:21:06 crc kubenswrapper[4811]: I0122 09:21:06.008778 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b68e3ad6-883d-4a1f-af69-3a94e705afe8" path="/var/lib/kubelet/pods/b68e3ad6-883d-4a1f-af69-3a94e705afe8/volumes" Jan 22 09:21:07 crc kubenswrapper[4811]: I0122 09:21:07.712834 4811 generic.go:334] "Generic (PLEG): container finished" podID="a2d8e429-147e-426b-8310-85bb09cace99" containerID="ae3bf1fa80785b6d4df74303472c557a6ac0718284d5eb541b8118e80750e927" exitCode=0 Jan 22 09:21:07 crc kubenswrapper[4811]: I0122 09:21:07.712875 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xzm8b" event={"ID":"a2d8e429-147e-426b-8310-85bb09cace99","Type":"ContainerDied","Data":"ae3bf1fa80785b6d4df74303472c557a6ac0718284d5eb541b8118e80750e927"} Jan 22 09:21:11 crc kubenswrapper[4811]: I0122 09:21:11.744949 4811 generic.go:334] "Generic (PLEG): container finished" podID="0847c2e9-9761-4a9a-96fa-a216884fc3dc" containerID="ad14d3f990ab1077c0a5c3d8ce964721a1571b7e78fb9cc881b77a70d8d71444" exitCode=0 Jan 22 09:21:11 crc kubenswrapper[4811]: I0122 09:21:11.745027 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-bbl7b" event={"ID":"0847c2e9-9761-4a9a-96fa-a216884fc3dc","Type":"ContainerDied","Data":"ad14d3f990ab1077c0a5c3d8ce964721a1571b7e78fb9cc881b77a70d8d71444"} Jan 22 09:21:12 crc kubenswrapper[4811]: I0122 09:21:12.623034 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d87b7c6dc-f7kdt" Jan 22 09:21:12 crc kubenswrapper[4811]: I0122 09:21:12.678792 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57768dd7b5-pqcms"] Jan 22 09:21:12 crc kubenswrapper[4811]: I0122 09:21:12.679015 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57768dd7b5-pqcms" podUID="3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2" containerName="dnsmasq-dns" containerID="cri-o://f91f41c59e3ce5c3c4b0b61b117f817be9ba096ecfdab23959d3ef3fa95d5302" gracePeriod=10 Jan 22 09:21:12 crc kubenswrapper[4811]: I0122 09:21:12.919150 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57768dd7b5-pqcms" podUID="3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.118:5353: connect: connection refused" Jan 22 09:21:13 crc kubenswrapper[4811]: I0122 09:21:13.759558 4811 generic.go:334] "Generic (PLEG): container finished" podID="3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2" containerID="f91f41c59e3ce5c3c4b0b61b117f817be9ba096ecfdab23959d3ef3fa95d5302" exitCode=0 Jan 22 09:21:13 crc kubenswrapper[4811]: I0122 09:21:13.759602 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57768dd7b5-pqcms" event={"ID":"3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2","Type":"ContainerDied","Data":"f91f41c59e3ce5c3c4b0b61b117f817be9ba096ecfdab23959d3ef3fa95d5302"} Jan 22 09:21:14 crc kubenswrapper[4811]: I0122 09:21:14.378969 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-xzm8b" Jan 22 09:21:14 crc kubenswrapper[4811]: I0122 09:21:14.420249 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2d8e429-147e-426b-8310-85bb09cace99-config-data\") pod \"a2d8e429-147e-426b-8310-85bb09cace99\" (UID: \"a2d8e429-147e-426b-8310-85bb09cace99\") " Jan 22 09:21:14 crc kubenswrapper[4811]: I0122 09:21:14.420283 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2d8e429-147e-426b-8310-85bb09cace99-combined-ca-bundle\") pod \"a2d8e429-147e-426b-8310-85bb09cace99\" (UID: \"a2d8e429-147e-426b-8310-85bb09cace99\") " Jan 22 09:21:14 crc kubenswrapper[4811]: I0122 09:21:14.420324 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a2d8e429-147e-426b-8310-85bb09cace99-credential-keys\") pod \"a2d8e429-147e-426b-8310-85bb09cace99\" (UID: \"a2d8e429-147e-426b-8310-85bb09cace99\") " Jan 22 09:21:14 crc kubenswrapper[4811]: I0122 09:21:14.420343 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2d8e429-147e-426b-8310-85bb09cace99-scripts\") pod \"a2d8e429-147e-426b-8310-85bb09cace99\" (UID: \"a2d8e429-147e-426b-8310-85bb09cace99\") " Jan 22 09:21:14 crc kubenswrapper[4811]: I0122 09:21:14.430792 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2d8e429-147e-426b-8310-85bb09cace99-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "a2d8e429-147e-426b-8310-85bb09cace99" (UID: "a2d8e429-147e-426b-8310-85bb09cace99"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:14 crc kubenswrapper[4811]: I0122 09:21:14.431269 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2d8e429-147e-426b-8310-85bb09cace99-scripts" (OuterVolumeSpecName: "scripts") pod "a2d8e429-147e-426b-8310-85bb09cace99" (UID: "a2d8e429-147e-426b-8310-85bb09cace99"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:14 crc kubenswrapper[4811]: I0122 09:21:14.438236 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2d8e429-147e-426b-8310-85bb09cace99-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2d8e429-147e-426b-8310-85bb09cace99" (UID: "a2d8e429-147e-426b-8310-85bb09cace99"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:14 crc kubenswrapper[4811]: I0122 09:21:14.440665 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2d8e429-147e-426b-8310-85bb09cace99-config-data" (OuterVolumeSpecName: "config-data") pod "a2d8e429-147e-426b-8310-85bb09cace99" (UID: "a2d8e429-147e-426b-8310-85bb09cace99"). InnerVolumeSpecName "config-data". 
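
The grace-period kill of dnsmasq-dns-57768dd7b5-pqcms earlier in this window (gracePeriod=10) is followed by readiness probe failures against 10.217.0.118:5353 until the container actually exits; a TCP readiness probe is essentially a timed dial, so "connect: connection refused" is the expected result once dnsmasq stops listening. A stand-alone sketch of such a check, with the address copied from the log (the kubelet's real prober is considerably more involved):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // tcpReady performs the moral equivalent of a TCP readiness probe:
    // dial the address within the timeout, then close immediately.
    func tcpReady(addr string, timeout time.Duration) error {
        conn, err := net.DialTimeout("tcp", addr, timeout)
        if err != nil {
            return err // "connect: connection refused" once nothing listens
        }
        return conn.Close()
    }

    func main() {
        if err := tcpReady("10.217.0.118:5353", time.Second); err != nil {
            fmt.Println("Probe failed:", err)
        }
    }
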
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:14 crc kubenswrapper[4811]: I0122 09:21:14.521509 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a2d8e429-147e-426b-8310-85bb09cace99-fernet-keys\") pod \"a2d8e429-147e-426b-8310-85bb09cace99\" (UID: \"a2d8e429-147e-426b-8310-85bb09cace99\") " Jan 22 09:21:14 crc kubenswrapper[4811]: I0122 09:21:14.521584 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kj296\" (UniqueName: \"kubernetes.io/projected/a2d8e429-147e-426b-8310-85bb09cace99-kube-api-access-kj296\") pod \"a2d8e429-147e-426b-8310-85bb09cace99\" (UID: \"a2d8e429-147e-426b-8310-85bb09cace99\") " Jan 22 09:21:14 crc kubenswrapper[4811]: I0122 09:21:14.521901 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2d8e429-147e-426b-8310-85bb09cace99-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:14 crc kubenswrapper[4811]: I0122 09:21:14.521920 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2d8e429-147e-426b-8310-85bb09cace99-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:14 crc kubenswrapper[4811]: I0122 09:21:14.521938 4811 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a2d8e429-147e-426b-8310-85bb09cace99-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:14 crc kubenswrapper[4811]: I0122 09:21:14.521946 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2d8e429-147e-426b-8310-85bb09cace99-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:14 crc kubenswrapper[4811]: I0122 09:21:14.524197 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2d8e429-147e-426b-8310-85bb09cace99-kube-api-access-kj296" (OuterVolumeSpecName: "kube-api-access-kj296") pod "a2d8e429-147e-426b-8310-85bb09cace99" (UID: "a2d8e429-147e-426b-8310-85bb09cace99"). InnerVolumeSpecName "kube-api-access-kj296". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:21:14 crc kubenswrapper[4811]: I0122 09:21:14.525357 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2d8e429-147e-426b-8310-85bb09cace99-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a2d8e429-147e-426b-8310-85bb09cace99" (UID: "a2d8e429-147e-426b-8310-85bb09cace99"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:14 crc kubenswrapper[4811]: I0122 09:21:14.623264 4811 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a2d8e429-147e-426b-8310-85bb09cace99-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:14 crc kubenswrapper[4811]: I0122 09:21:14.623286 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kj296\" (UniqueName: \"kubernetes.io/projected/a2d8e429-147e-426b-8310-85bb09cace99-kube-api-access-kj296\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:14 crc kubenswrapper[4811]: I0122 09:21:14.766827 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xzm8b" event={"ID":"a2d8e429-147e-426b-8310-85bb09cace99","Type":"ContainerDied","Data":"7753407831f30cf9db485d463c7c4afdd0b6c02c77f0cc35c464f929f6c097df"} Jan 22 09:21:14 crc kubenswrapper[4811]: I0122 09:21:14.766854 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xzm8b" Jan 22 09:21:14 crc kubenswrapper[4811]: I0122 09:21:14.766865 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7753407831f30cf9db485d463c7c4afdd0b6c02c77f0cc35c464f929f6c097df" Jan 22 09:21:15 crc kubenswrapper[4811]: I0122 09:21:15.439331 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-xzm8b"] Jan 22 09:21:15 crc kubenswrapper[4811]: I0122 09:21:15.445848 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-xzm8b"] Jan 22 09:21:15 crc kubenswrapper[4811]: I0122 09:21:15.539912 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-zt2qg"] Jan 22 09:21:15 crc kubenswrapper[4811]: E0122 09:21:15.540185 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b68e3ad6-883d-4a1f-af69-3a94e705afe8" containerName="init" Jan 22 09:21:15 crc kubenswrapper[4811]: I0122 09:21:15.540203 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="b68e3ad6-883d-4a1f-af69-3a94e705afe8" containerName="init" Jan 22 09:21:15 crc kubenswrapper[4811]: E0122 09:21:15.540222 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2d8e429-147e-426b-8310-85bb09cace99" containerName="keystone-bootstrap" Jan 22 09:21:15 crc kubenswrapper[4811]: I0122 09:21:15.540229 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2d8e429-147e-426b-8310-85bb09cace99" containerName="keystone-bootstrap" Jan 22 09:21:15 crc kubenswrapper[4811]: I0122 09:21:15.540395 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="b68e3ad6-883d-4a1f-af69-3a94e705afe8" containerName="init" Jan 22 09:21:15 crc kubenswrapper[4811]: I0122 09:21:15.540403 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2d8e429-147e-426b-8310-85bb09cace99" containerName="keystone-bootstrap" Jan 22 09:21:15 crc kubenswrapper[4811]: I0122 09:21:15.540836 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-zt2qg" Jan 22 09:21:15 crc kubenswrapper[4811]: I0122 09:21:15.543299 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 22 09:21:15 crc kubenswrapper[4811]: I0122 09:21:15.543334 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 22 09:21:15 crc kubenswrapper[4811]: I0122 09:21:15.543733 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 22 09:21:15 crc kubenswrapper[4811]: I0122 09:21:15.543815 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 22 09:21:15 crc kubenswrapper[4811]: I0122 09:21:15.543920 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9nlpl" Jan 22 09:21:15 crc kubenswrapper[4811]: I0122 09:21:15.555076 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-zt2qg"] Jan 22 09:21:15 crc kubenswrapper[4811]: I0122 09:21:15.738682 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14a1e4fa-5a60-47c5-a2da-e57110ca0b57-config-data\") pod \"keystone-bootstrap-zt2qg\" (UID: \"14a1e4fa-5a60-47c5-a2da-e57110ca0b57\") " pod="openstack/keystone-bootstrap-zt2qg" Jan 22 09:21:15 crc kubenswrapper[4811]: I0122 09:21:15.738741 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14a1e4fa-5a60-47c5-a2da-e57110ca0b57-scripts\") pod \"keystone-bootstrap-zt2qg\" (UID: \"14a1e4fa-5a60-47c5-a2da-e57110ca0b57\") " pod="openstack/keystone-bootstrap-zt2qg" Jan 22 09:21:15 crc kubenswrapper[4811]: I0122 09:21:15.738775 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/14a1e4fa-5a60-47c5-a2da-e57110ca0b57-credential-keys\") pod \"keystone-bootstrap-zt2qg\" (UID: \"14a1e4fa-5a60-47c5-a2da-e57110ca0b57\") " pod="openstack/keystone-bootstrap-zt2qg" Jan 22 09:21:15 crc kubenswrapper[4811]: I0122 09:21:15.738815 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14a1e4fa-5a60-47c5-a2da-e57110ca0b57-combined-ca-bundle\") pod \"keystone-bootstrap-zt2qg\" (UID: \"14a1e4fa-5a60-47c5-a2da-e57110ca0b57\") " pod="openstack/keystone-bootstrap-zt2qg" Jan 22 09:21:15 crc kubenswrapper[4811]: I0122 09:21:15.738834 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmtqw\" (UniqueName: \"kubernetes.io/projected/14a1e4fa-5a60-47c5-a2da-e57110ca0b57-kube-api-access-cmtqw\") pod \"keystone-bootstrap-zt2qg\" (UID: \"14a1e4fa-5a60-47c5-a2da-e57110ca0b57\") " pod="openstack/keystone-bootstrap-zt2qg" Jan 22 09:21:15 crc kubenswrapper[4811]: I0122 09:21:15.738906 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/14a1e4fa-5a60-47c5-a2da-e57110ca0b57-fernet-keys\") pod \"keystone-bootstrap-zt2qg\" (UID: \"14a1e4fa-5a60-47c5-a2da-e57110ca0b57\") " pod="openstack/keystone-bootstrap-zt2qg" Jan 22 09:21:15 crc kubenswrapper[4811]: I0122 09:21:15.840279 4811 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14a1e4fa-5a60-47c5-a2da-e57110ca0b57-config-data\") pod \"keystone-bootstrap-zt2qg\" (UID: \"14a1e4fa-5a60-47c5-a2da-e57110ca0b57\") " pod="openstack/keystone-bootstrap-zt2qg" Jan 22 09:21:15 crc kubenswrapper[4811]: I0122 09:21:15.840330 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14a1e4fa-5a60-47c5-a2da-e57110ca0b57-scripts\") pod \"keystone-bootstrap-zt2qg\" (UID: \"14a1e4fa-5a60-47c5-a2da-e57110ca0b57\") " pod="openstack/keystone-bootstrap-zt2qg" Jan 22 09:21:15 crc kubenswrapper[4811]: I0122 09:21:15.840368 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/14a1e4fa-5a60-47c5-a2da-e57110ca0b57-credential-keys\") pod \"keystone-bootstrap-zt2qg\" (UID: \"14a1e4fa-5a60-47c5-a2da-e57110ca0b57\") " pod="openstack/keystone-bootstrap-zt2qg" Jan 22 09:21:15 crc kubenswrapper[4811]: I0122 09:21:15.840391 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14a1e4fa-5a60-47c5-a2da-e57110ca0b57-combined-ca-bundle\") pod \"keystone-bootstrap-zt2qg\" (UID: \"14a1e4fa-5a60-47c5-a2da-e57110ca0b57\") " pod="openstack/keystone-bootstrap-zt2qg" Jan 22 09:21:15 crc kubenswrapper[4811]: I0122 09:21:15.840408 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmtqw\" (UniqueName: \"kubernetes.io/projected/14a1e4fa-5a60-47c5-a2da-e57110ca0b57-kube-api-access-cmtqw\") pod \"keystone-bootstrap-zt2qg\" (UID: \"14a1e4fa-5a60-47c5-a2da-e57110ca0b57\") " pod="openstack/keystone-bootstrap-zt2qg" Jan 22 09:21:15 crc kubenswrapper[4811]: I0122 09:21:15.840479 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/14a1e4fa-5a60-47c5-a2da-e57110ca0b57-fernet-keys\") pod \"keystone-bootstrap-zt2qg\" (UID: \"14a1e4fa-5a60-47c5-a2da-e57110ca0b57\") " pod="openstack/keystone-bootstrap-zt2qg" Jan 22 09:21:15 crc kubenswrapper[4811]: I0122 09:21:15.844879 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/14a1e4fa-5a60-47c5-a2da-e57110ca0b57-fernet-keys\") pod \"keystone-bootstrap-zt2qg\" (UID: \"14a1e4fa-5a60-47c5-a2da-e57110ca0b57\") " pod="openstack/keystone-bootstrap-zt2qg" Jan 22 09:21:15 crc kubenswrapper[4811]: I0122 09:21:15.846059 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14a1e4fa-5a60-47c5-a2da-e57110ca0b57-combined-ca-bundle\") pod \"keystone-bootstrap-zt2qg\" (UID: \"14a1e4fa-5a60-47c5-a2da-e57110ca0b57\") " pod="openstack/keystone-bootstrap-zt2qg" Jan 22 09:21:15 crc kubenswrapper[4811]: I0122 09:21:15.847236 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14a1e4fa-5a60-47c5-a2da-e57110ca0b57-config-data\") pod \"keystone-bootstrap-zt2qg\" (UID: \"14a1e4fa-5a60-47c5-a2da-e57110ca0b57\") " pod="openstack/keystone-bootstrap-zt2qg" Jan 22 09:21:15 crc kubenswrapper[4811]: I0122 09:21:15.848232 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/14a1e4fa-5a60-47c5-a2da-e57110ca0b57-credential-keys\") pod \"keystone-bootstrap-zt2qg\" (UID: 
\"14a1e4fa-5a60-47c5-a2da-e57110ca0b57\") " pod="openstack/keystone-bootstrap-zt2qg" Jan 22 09:21:15 crc kubenswrapper[4811]: I0122 09:21:15.850568 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14a1e4fa-5a60-47c5-a2da-e57110ca0b57-scripts\") pod \"keystone-bootstrap-zt2qg\" (UID: \"14a1e4fa-5a60-47c5-a2da-e57110ca0b57\") " pod="openstack/keystone-bootstrap-zt2qg" Jan 22 09:21:15 crc kubenswrapper[4811]: I0122 09:21:15.856944 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmtqw\" (UniqueName: \"kubernetes.io/projected/14a1e4fa-5a60-47c5-a2da-e57110ca0b57-kube-api-access-cmtqw\") pod \"keystone-bootstrap-zt2qg\" (UID: \"14a1e4fa-5a60-47c5-a2da-e57110ca0b57\") " pod="openstack/keystone-bootstrap-zt2qg" Jan 22 09:21:15 crc kubenswrapper[4811]: I0122 09:21:15.867748 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zt2qg" Jan 22 09:21:16 crc kubenswrapper[4811]: I0122 09:21:16.000939 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2d8e429-147e-426b-8310-85bb09cace99" path="/var/lib/kubelet/pods/a2d8e429-147e-426b-8310-85bb09cace99/volumes" Jan 22 09:21:17 crc kubenswrapper[4811]: I0122 09:21:17.919888 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57768dd7b5-pqcms" podUID="3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.118:5353: connect: connection refused" Jan 22 09:21:21 crc kubenswrapper[4811]: E0122 09:21:21.349539 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:5a548c25fe3d02f7a042cb0a6d28fc8039a34c4a3b3d07aadda4aba3a926e777" Jan 22 09:21:21 crc kubenswrapper[4811]: E0122 09:21:21.349836 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:5a548c25fe3d02f7a042cb0a6d28fc8039a34c4a3b3d07aadda4aba3a926e777,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nb9h98h69h57bh66fh556h589h67bh687h64bh65dh66ch65ch59bh677h89h54fh577h68dh5c8h688h689h675h657h5ffhffh646h89hc9h545h8bhb7q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s8n8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(173400f9-c99e-4737-b27c-cff0bdb5ee94): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 09:21:21 crc kubenswrapper[4811]: I0122 09:21:21.403997 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-bbl7b" Jan 22 09:21:21 crc kubenswrapper[4811]: I0122 09:21:21.429297 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xvwv\" (UniqueName: \"kubernetes.io/projected/0847c2e9-9761-4a9a-96fa-a216884fc3dc-kube-api-access-4xvwv\") pod \"0847c2e9-9761-4a9a-96fa-a216884fc3dc\" (UID: \"0847c2e9-9761-4a9a-96fa-a216884fc3dc\") " Jan 22 09:21:21 crc kubenswrapper[4811]: I0122 09:21:21.429459 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0847c2e9-9761-4a9a-96fa-a216884fc3dc-combined-ca-bundle\") pod \"0847c2e9-9761-4a9a-96fa-a216884fc3dc\" (UID: \"0847c2e9-9761-4a9a-96fa-a216884fc3dc\") " Jan 22 09:21:21 crc kubenswrapper[4811]: I0122 09:21:21.429737 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0847c2e9-9761-4a9a-96fa-a216884fc3dc-config\") pod \"0847c2e9-9761-4a9a-96fa-a216884fc3dc\" (UID: \"0847c2e9-9761-4a9a-96fa-a216884fc3dc\") " Jan 22 09:21:21 crc kubenswrapper[4811]: I0122 09:21:21.451719 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0847c2e9-9761-4a9a-96fa-a216884fc3dc-kube-api-access-4xvwv" (OuterVolumeSpecName: "kube-api-access-4xvwv") pod "0847c2e9-9761-4a9a-96fa-a216884fc3dc" (UID: "0847c2e9-9761-4a9a-96fa-a216884fc3dc"). InnerVolumeSpecName "kube-api-access-4xvwv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:21:21 crc kubenswrapper[4811]: I0122 09:21:21.456206 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0847c2e9-9761-4a9a-96fa-a216884fc3dc-config" (OuterVolumeSpecName: "config") pod "0847c2e9-9761-4a9a-96fa-a216884fc3dc" (UID: "0847c2e9-9761-4a9a-96fa-a216884fc3dc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:21 crc kubenswrapper[4811]: I0122 09:21:21.457524 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0847c2e9-9761-4a9a-96fa-a216884fc3dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0847c2e9-9761-4a9a-96fa-a216884fc3dc" (UID: "0847c2e9-9761-4a9a-96fa-a216884fc3dc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:21 crc kubenswrapper[4811]: I0122 09:21:21.533366 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xvwv\" (UniqueName: \"kubernetes.io/projected/0847c2e9-9761-4a9a-96fa-a216884fc3dc-kube-api-access-4xvwv\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:21 crc kubenswrapper[4811]: I0122 09:21:21.533618 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0847c2e9-9761-4a9a-96fa-a216884fc3dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:21 crc kubenswrapper[4811]: I0122 09:21:21.533645 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0847c2e9-9761-4a9a-96fa-a216884fc3dc-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:21 crc kubenswrapper[4811]: I0122 09:21:21.811647 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-bbl7b" event={"ID":"0847c2e9-9761-4a9a-96fa-a216884fc3dc","Type":"ContainerDied","Data":"1986fedd4520d83a4716a0a28813a60723dd41a7b2db1eb3b873afcc53597253"} Jan 22 09:21:21 crc kubenswrapper[4811]: I0122 09:21:21.811694 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1986fedd4520d83a4716a0a28813a60723dd41a7b2db1eb3b873afcc53597253" Jan 22 09:21:21 crc kubenswrapper[4811]: I0122 09:21:21.811718 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-bbl7b" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.331975 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57768dd7b5-pqcms" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.345879 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2-ovsdbserver-sb\") pod \"3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2\" (UID: \"3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2\") " Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.346012 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2-ovsdbserver-nb\") pod \"3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2\" (UID: \"3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2\") " Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.346077 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sp2cl\" (UniqueName: \"kubernetes.io/projected/3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2-kube-api-access-sp2cl\") pod \"3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2\" (UID: \"3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2\") " Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.346221 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2-dns-svc\") pod \"3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2\" (UID: \"3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2\") " Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.346329 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2-config\") pod \"3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2\" (UID: \"3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2\") " Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 
09:21:22.365191 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2-kube-api-access-sp2cl" (OuterVolumeSpecName: "kube-api-access-sp2cl") pod "3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2" (UID: "3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2"). InnerVolumeSpecName "kube-api-access-sp2cl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.439395 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2" (UID: "3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.454183 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.454288 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sp2cl\" (UniqueName: \"kubernetes.io/projected/3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2-kube-api-access-sp2cl\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.463151 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2-config" (OuterVolumeSpecName: "config") pod "3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2" (UID: "3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.472224 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2" (UID: "3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.489763 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2" (UID: "3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.556542 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.556580 4811 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.556593 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.603092 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-548894858c-nn2fm"] Jan 22 09:21:22 crc kubenswrapper[4811]: E0122 09:21:22.603564 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0847c2e9-9761-4a9a-96fa-a216884fc3dc" containerName="neutron-db-sync" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.603580 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="0847c2e9-9761-4a9a-96fa-a216884fc3dc" containerName="neutron-db-sync" Jan 22 09:21:22 crc kubenswrapper[4811]: E0122 09:21:22.603591 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2" containerName="init" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.603598 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2" containerName="init" Jan 22 09:21:22 crc kubenswrapper[4811]: E0122 09:21:22.603616 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2" containerName="dnsmasq-dns" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.603670 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2" containerName="dnsmasq-dns" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.604614 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="0847c2e9-9761-4a9a-96fa-a216884fc3dc" containerName="neutron-db-sync" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.604654 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2" containerName="dnsmasq-dns" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.606796 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-548894858c-nn2fm" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.634525 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-548894858c-nn2fm"] Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.659713 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z4fm\" (UniqueName: \"kubernetes.io/projected/91f1af8b-22e5-45b9-ac16-6696840485e4-kube-api-access-7z4fm\") pod \"dnsmasq-dns-548894858c-nn2fm\" (UID: \"91f1af8b-22e5-45b9-ac16-6696840485e4\") " pod="openstack/dnsmasq-dns-548894858c-nn2fm" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.659868 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91f1af8b-22e5-45b9-ac16-6696840485e4-dns-svc\") pod \"dnsmasq-dns-548894858c-nn2fm\" (UID: \"91f1af8b-22e5-45b9-ac16-6696840485e4\") " pod="openstack/dnsmasq-dns-548894858c-nn2fm" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.660138 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91f1af8b-22e5-45b9-ac16-6696840485e4-ovsdbserver-sb\") pod \"dnsmasq-dns-548894858c-nn2fm\" (UID: \"91f1af8b-22e5-45b9-ac16-6696840485e4\") " pod="openstack/dnsmasq-dns-548894858c-nn2fm" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.660256 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91f1af8b-22e5-45b9-ac16-6696840485e4-config\") pod \"dnsmasq-dns-548894858c-nn2fm\" (UID: \"91f1af8b-22e5-45b9-ac16-6696840485e4\") " pod="openstack/dnsmasq-dns-548894858c-nn2fm" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.660364 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91f1af8b-22e5-45b9-ac16-6696840485e4-ovsdbserver-nb\") pod \"dnsmasq-dns-548894858c-nn2fm\" (UID: \"91f1af8b-22e5-45b9-ac16-6696840485e4\") " pod="openstack/dnsmasq-dns-548894858c-nn2fm" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.751203 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8b9759fbd-gggfs"] Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.753590 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8b9759fbd-gggfs" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.755840 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.760057 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.760784 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.761666 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z4fm\" (UniqueName: \"kubernetes.io/projected/91f1af8b-22e5-45b9-ac16-6696840485e4-kube-api-access-7z4fm\") pod \"dnsmasq-dns-548894858c-nn2fm\" (UID: \"91f1af8b-22e5-45b9-ac16-6696840485e4\") " pod="openstack/dnsmasq-dns-548894858c-nn2fm" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.761759 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91f1af8b-22e5-45b9-ac16-6696840485e4-dns-svc\") pod \"dnsmasq-dns-548894858c-nn2fm\" (UID: \"91f1af8b-22e5-45b9-ac16-6696840485e4\") " pod="openstack/dnsmasq-dns-548894858c-nn2fm" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.761897 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91f1af8b-22e5-45b9-ac16-6696840485e4-ovsdbserver-sb\") pod \"dnsmasq-dns-548894858c-nn2fm\" (UID: \"91f1af8b-22e5-45b9-ac16-6696840485e4\") " pod="openstack/dnsmasq-dns-548894858c-nn2fm" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.762007 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91f1af8b-22e5-45b9-ac16-6696840485e4-config\") pod \"dnsmasq-dns-548894858c-nn2fm\" (UID: \"91f1af8b-22e5-45b9-ac16-6696840485e4\") " pod="openstack/dnsmasq-dns-548894858c-nn2fm" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.765685 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91f1af8b-22e5-45b9-ac16-6696840485e4-ovsdbserver-nb\") pod \"dnsmasq-dns-548894858c-nn2fm\" (UID: \"91f1af8b-22e5-45b9-ac16-6696840485e4\") " pod="openstack/dnsmasq-dns-548894858c-nn2fm" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.762718 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8b9759fbd-gggfs"] Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.762988 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91f1af8b-22e5-45b9-ac16-6696840485e4-ovsdbserver-sb\") pod \"dnsmasq-dns-548894858c-nn2fm\" (UID: \"91f1af8b-22e5-45b9-ac16-6696840485e4\") " pod="openstack/dnsmasq-dns-548894858c-nn2fm" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.763396 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91f1af8b-22e5-45b9-ac16-6696840485e4-config\") pod \"dnsmasq-dns-548894858c-nn2fm\" (UID: \"91f1af8b-22e5-45b9-ac16-6696840485e4\") " pod="openstack/dnsmasq-dns-548894858c-nn2fm" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.762883 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/91f1af8b-22e5-45b9-ac16-6696840485e4-dns-svc\") pod \"dnsmasq-dns-548894858c-nn2fm\" (UID: \"91f1af8b-22e5-45b9-ac16-6696840485e4\") " pod="openstack/dnsmasq-dns-548894858c-nn2fm" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.766234 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-l9n7w" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.766604 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91f1af8b-22e5-45b9-ac16-6696840485e4-ovsdbserver-nb\") pod \"dnsmasq-dns-548894858c-nn2fm\" (UID: \"91f1af8b-22e5-45b9-ac16-6696840485e4\") " pod="openstack/dnsmasq-dns-548894858c-nn2fm" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.796130 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z4fm\" (UniqueName: \"kubernetes.io/projected/91f1af8b-22e5-45b9-ac16-6696840485e4-kube-api-access-7z4fm\") pod \"dnsmasq-dns-548894858c-nn2fm\" (UID: \"91f1af8b-22e5-45b9-ac16-6696840485e4\") " pod="openstack/dnsmasq-dns-548894858c-nn2fm" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.824733 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-zt2qg"] Jan 22 09:21:22 crc kubenswrapper[4811]: W0122 09:21:22.837820 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14a1e4fa_5a60_47c5_a2da_e57110ca0b57.slice/crio-6d34ea78b75b67d1f13bc858222a74c8c08d7dbfca8f109c4a6373a297604a88 WatchSource:0}: Error finding container 6d34ea78b75b67d1f13bc858222a74c8c08d7dbfca8f109c4a6373a297604a88: Status 404 returned error can't find the container with id 6d34ea78b75b67d1f13bc858222a74c8c08d7dbfca8f109c4a6373a297604a88 Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.839213 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rnmr4" event={"ID":"7b642751-b1e4-4488-b305-fed7f4fcd9fa","Type":"ContainerStarted","Data":"ce01439c4f3c6a661151f644ce887beeb972d579495b1e7c28a458e92c77e19c"} Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.854550 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57768dd7b5-pqcms" event={"ID":"3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2","Type":"ContainerDied","Data":"02a473aa9b3ccc58e68aee51e2af249215847f7af96b6b095c0ef26fe78e289b"} Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.854601 4811 scope.go:117] "RemoveContainer" containerID="f91f41c59e3ce5c3c4b0b61b117f817be9ba096ecfdab23959d3ef3fa95d5302" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.854769 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57768dd7b5-pqcms" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.859016 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-rnmr4" podStartSLOduration=1.8878889829999999 podStartE2EDuration="20.859001578s" podCreationTimestamp="2026-01-22 09:21:02 +0000 UTC" firstStartedPulling="2026-01-22 09:21:03.300114245 +0000 UTC m=+907.622301369" lastFinishedPulling="2026-01-22 09:21:22.271226841 +0000 UTC m=+926.593413964" observedRunningTime="2026-01-22 09:21:22.854150722 +0000 UTC m=+927.176337845" watchObservedRunningTime="2026-01-22 09:21:22.859001578 +0000 UTC m=+927.181188701" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.867575 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3ccb4838-2855-4017-9c00-e8765846d47e-config\") pod \"neutron-8b9759fbd-gggfs\" (UID: \"3ccb4838-2855-4017-9c00-e8765846d47e\") " pod="openstack/neutron-8b9759fbd-gggfs" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.867661 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3ccb4838-2855-4017-9c00-e8765846d47e-httpd-config\") pod \"neutron-8b9759fbd-gggfs\" (UID: \"3ccb4838-2855-4017-9c00-e8765846d47e\") " pod="openstack/neutron-8b9759fbd-gggfs" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.867695 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lw5l\" (UniqueName: \"kubernetes.io/projected/3ccb4838-2855-4017-9c00-e8765846d47e-kube-api-access-6lw5l\") pod \"neutron-8b9759fbd-gggfs\" (UID: \"3ccb4838-2855-4017-9c00-e8765846d47e\") " pod="openstack/neutron-8b9759fbd-gggfs" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.867816 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ccb4838-2855-4017-9c00-e8765846d47e-combined-ca-bundle\") pod \"neutron-8b9759fbd-gggfs\" (UID: \"3ccb4838-2855-4017-9c00-e8765846d47e\") " pod="openstack/neutron-8b9759fbd-gggfs" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.870048 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ccb4838-2855-4017-9c00-e8765846d47e-ovndb-tls-certs\") pod \"neutron-8b9759fbd-gggfs\" (UID: \"3ccb4838-2855-4017-9c00-e8765846d47e\") " pod="openstack/neutron-8b9759fbd-gggfs" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.878578 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-zmjzd" event={"ID":"e22a065b-3b3c-41a9-ad35-b1c1e594af9b","Type":"ContainerStarted","Data":"65a67a4c6384c24670b32b62b88f9d49ab432208740d4873bc567b7af7673bae"} Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.914023 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57768dd7b5-pqcms"] Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.914444 4811 scope.go:117] "RemoveContainer" containerID="189453dde239ba0028f7d9ee028b4f1313c9e267186c2ea4045d88727c29b923" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.935206 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57768dd7b5-pqcms"] Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 
09:21:22.943445 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-zmjzd" podStartSLOduration=2.651419508 podStartE2EDuration="20.943421432s" podCreationTimestamp="2026-01-22 09:21:02 +0000 UTC" firstStartedPulling="2026-01-22 09:21:03.984761936 +0000 UTC m=+908.306949060" lastFinishedPulling="2026-01-22 09:21:22.276763861 +0000 UTC m=+926.598950984" observedRunningTime="2026-01-22 09:21:22.901110975 +0000 UTC m=+927.223298098" watchObservedRunningTime="2026-01-22 09:21:22.943421432 +0000 UTC m=+927.265608555" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.963299 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-548894858c-nn2fm" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.973147 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ccb4838-2855-4017-9c00-e8765846d47e-ovndb-tls-certs\") pod \"neutron-8b9759fbd-gggfs\" (UID: \"3ccb4838-2855-4017-9c00-e8765846d47e\") " pod="openstack/neutron-8b9759fbd-gggfs" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.973273 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3ccb4838-2855-4017-9c00-e8765846d47e-config\") pod \"neutron-8b9759fbd-gggfs\" (UID: \"3ccb4838-2855-4017-9c00-e8765846d47e\") " pod="openstack/neutron-8b9759fbd-gggfs" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.973325 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3ccb4838-2855-4017-9c00-e8765846d47e-httpd-config\") pod \"neutron-8b9759fbd-gggfs\" (UID: \"3ccb4838-2855-4017-9c00-e8765846d47e\") " pod="openstack/neutron-8b9759fbd-gggfs" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.973385 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lw5l\" (UniqueName: \"kubernetes.io/projected/3ccb4838-2855-4017-9c00-e8765846d47e-kube-api-access-6lw5l\") pod \"neutron-8b9759fbd-gggfs\" (UID: \"3ccb4838-2855-4017-9c00-e8765846d47e\") " pod="openstack/neutron-8b9759fbd-gggfs" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.973484 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ccb4838-2855-4017-9c00-e8765846d47e-combined-ca-bundle\") pod \"neutron-8b9759fbd-gggfs\" (UID: \"3ccb4838-2855-4017-9c00-e8765846d47e\") " pod="openstack/neutron-8b9759fbd-gggfs" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.989254 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3ccb4838-2855-4017-9c00-e8765846d47e-config\") pod \"neutron-8b9759fbd-gggfs\" (UID: \"3ccb4838-2855-4017-9c00-e8765846d47e\") " pod="openstack/neutron-8b9759fbd-gggfs" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.991821 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ccb4838-2855-4017-9c00-e8765846d47e-combined-ca-bundle\") pod \"neutron-8b9759fbd-gggfs\" (UID: \"3ccb4838-2855-4017-9c00-e8765846d47e\") " pod="openstack/neutron-8b9759fbd-gggfs" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.992295 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/3ccb4838-2855-4017-9c00-e8765846d47e-httpd-config\") pod \"neutron-8b9759fbd-gggfs\" (UID: \"3ccb4838-2855-4017-9c00-e8765846d47e\") " pod="openstack/neutron-8b9759fbd-gggfs" Jan 22 09:21:22 crc kubenswrapper[4811]: I0122 09:21:22.993402 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lw5l\" (UniqueName: \"kubernetes.io/projected/3ccb4838-2855-4017-9c00-e8765846d47e-kube-api-access-6lw5l\") pod \"neutron-8b9759fbd-gggfs\" (UID: \"3ccb4838-2855-4017-9c00-e8765846d47e\") " pod="openstack/neutron-8b9759fbd-gggfs" Jan 22 09:21:23 crc kubenswrapper[4811]: I0122 09:21:23.000948 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ccb4838-2855-4017-9c00-e8765846d47e-ovndb-tls-certs\") pod \"neutron-8b9759fbd-gggfs\" (UID: \"3ccb4838-2855-4017-9c00-e8765846d47e\") " pod="openstack/neutron-8b9759fbd-gggfs" Jan 22 09:21:23 crc kubenswrapper[4811]: I0122 09:21:23.091269 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8b9759fbd-gggfs" Jan 22 09:21:23 crc kubenswrapper[4811]: I0122 09:21:23.522606 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-548894858c-nn2fm"] Jan 22 09:21:23 crc kubenswrapper[4811]: I0122 09:21:23.827780 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8b9759fbd-gggfs"] Jan 22 09:21:23 crc kubenswrapper[4811]: I0122 09:21:23.886518 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-zztph" event={"ID":"1cd5889f-63d3-47a0-8b17-e8ffac0011d3","Type":"ContainerStarted","Data":"355d945e984c35fa416eaab7c76e47e82435510a56e52acccadfc3695b1fe0a3"} Jan 22 09:21:23 crc kubenswrapper[4811]: I0122 09:21:23.891359 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-548894858c-nn2fm" event={"ID":"91f1af8b-22e5-45b9-ac16-6696840485e4","Type":"ContainerStarted","Data":"e7c2939caf45e2d57c2f42a8edb1656eba34a5c5e8c51aa5067a157b50954f2f"} Jan 22 09:21:23 crc kubenswrapper[4811]: I0122 09:21:23.894565 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zt2qg" event={"ID":"14a1e4fa-5a60-47c5-a2da-e57110ca0b57","Type":"ContainerStarted","Data":"726e9b017aa22b710235e164587d5c31326c132ae65bebec4355aa245026cab8"} Jan 22 09:21:23 crc kubenswrapper[4811]: I0122 09:21:23.894685 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zt2qg" event={"ID":"14a1e4fa-5a60-47c5-a2da-e57110ca0b57","Type":"ContainerStarted","Data":"6d34ea78b75b67d1f13bc858222a74c8c08d7dbfca8f109c4a6373a297604a88"} Jan 22 09:21:23 crc kubenswrapper[4811]: I0122 09:21:23.901769 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-zztph" podStartSLOduration=3.573914419 podStartE2EDuration="21.901755628s" podCreationTimestamp="2026-01-22 09:21:02 +0000 UTC" firstStartedPulling="2026-01-22 09:21:04.098316062 +0000 UTC m=+908.420503185" lastFinishedPulling="2026-01-22 09:21:22.426157272 +0000 UTC m=+926.748344394" observedRunningTime="2026-01-22 09:21:23.899371773 +0000 UTC m=+928.221558895" watchObservedRunningTime="2026-01-22 09:21:23.901755628 +0000 UTC m=+928.223942750" Jan 22 09:21:23 crc kubenswrapper[4811]: I0122 09:21:23.920438 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-zt2qg" podStartSLOduration=8.920423005 
podStartE2EDuration="8.920423005s" podCreationTimestamp="2026-01-22 09:21:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:21:23.914578815 +0000 UTC m=+928.236765939" watchObservedRunningTime="2026-01-22 09:21:23.920423005 +0000 UTC m=+928.242610128" Jan 22 09:21:24 crc kubenswrapper[4811]: I0122 09:21:24.000419 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2" path="/var/lib/kubelet/pods/3c7ecfd5-ce1b-487a-a11e-82f13e1fb1e2/volumes" Jan 22 09:21:24 crc kubenswrapper[4811]: I0122 09:21:24.713557 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-b5fd9ff5-td6xf"] Jan 22 09:21:24 crc kubenswrapper[4811]: I0122 09:21:24.715231 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b5fd9ff5-td6xf" Jan 22 09:21:24 crc kubenswrapper[4811]: I0122 09:21:24.717187 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 22 09:21:24 crc kubenswrapper[4811]: I0122 09:21:24.718179 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 22 09:21:24 crc kubenswrapper[4811]: I0122 09:21:24.736865 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b5fd9ff5-td6xf"] Jan 22 09:21:24 crc kubenswrapper[4811]: I0122 09:21:24.825791 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d8ace218-8390-49f1-950a-7162f7bce032-httpd-config\") pod \"neutron-b5fd9ff5-td6xf\" (UID: \"d8ace218-8390-49f1-950a-7162f7bce032\") " pod="openstack/neutron-b5fd9ff5-td6xf" Jan 22 09:21:24 crc kubenswrapper[4811]: I0122 09:21:24.825899 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8ace218-8390-49f1-950a-7162f7bce032-public-tls-certs\") pod \"neutron-b5fd9ff5-td6xf\" (UID: \"d8ace218-8390-49f1-950a-7162f7bce032\") " pod="openstack/neutron-b5fd9ff5-td6xf" Jan 22 09:21:24 crc kubenswrapper[4811]: I0122 09:21:24.825962 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8ace218-8390-49f1-950a-7162f7bce032-internal-tls-certs\") pod \"neutron-b5fd9ff5-td6xf\" (UID: \"d8ace218-8390-49f1-950a-7162f7bce032\") " pod="openstack/neutron-b5fd9ff5-td6xf" Jan 22 09:21:24 crc kubenswrapper[4811]: I0122 09:21:24.826007 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d8ace218-8390-49f1-950a-7162f7bce032-config\") pod \"neutron-b5fd9ff5-td6xf\" (UID: \"d8ace218-8390-49f1-950a-7162f7bce032\") " pod="openstack/neutron-b5fd9ff5-td6xf" Jan 22 09:21:24 crc kubenswrapper[4811]: I0122 09:21:24.826073 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8ace218-8390-49f1-950a-7162f7bce032-ovndb-tls-certs\") pod \"neutron-b5fd9ff5-td6xf\" (UID: \"d8ace218-8390-49f1-950a-7162f7bce032\") " pod="openstack/neutron-b5fd9ff5-td6xf" Jan 22 09:21:24 crc kubenswrapper[4811]: I0122 09:21:24.826120 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-qsjgv\" (UniqueName: \"kubernetes.io/projected/d8ace218-8390-49f1-950a-7162f7bce032-kube-api-access-qsjgv\") pod \"neutron-b5fd9ff5-td6xf\" (UID: \"d8ace218-8390-49f1-950a-7162f7bce032\") " pod="openstack/neutron-b5fd9ff5-td6xf" Jan 22 09:21:24 crc kubenswrapper[4811]: I0122 09:21:24.826276 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ace218-8390-49f1-950a-7162f7bce032-combined-ca-bundle\") pod \"neutron-b5fd9ff5-td6xf\" (UID: \"d8ace218-8390-49f1-950a-7162f7bce032\") " pod="openstack/neutron-b5fd9ff5-td6xf" Jan 22 09:21:24 crc kubenswrapper[4811]: I0122 09:21:24.904974 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"173400f9-c99e-4737-b27c-cff0bdb5ee94","Type":"ContainerStarted","Data":"1ddbd4da0737b0e060323ce04e1e1091447ceeddb176ab83d32f3f37cfb10b20"} Jan 22 09:21:24 crc kubenswrapper[4811]: I0122 09:21:24.908182 4811 generic.go:334] "Generic (PLEG): container finished" podID="91f1af8b-22e5-45b9-ac16-6696840485e4" containerID="d2cddbbc31452a4aa28c7399cc73e16401277ab8a9e9fe9ba00888f4798a59d9" exitCode=0 Jan 22 09:21:24 crc kubenswrapper[4811]: I0122 09:21:24.908222 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-548894858c-nn2fm" event={"ID":"91f1af8b-22e5-45b9-ac16-6696840485e4","Type":"ContainerDied","Data":"d2cddbbc31452a4aa28c7399cc73e16401277ab8a9e9fe9ba00888f4798a59d9"} Jan 22 09:21:24 crc kubenswrapper[4811]: I0122 09:21:24.926204 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8b9759fbd-gggfs" event={"ID":"3ccb4838-2855-4017-9c00-e8765846d47e","Type":"ContainerStarted","Data":"391d864b3a391cb1cd483f20345fe978b43e8dff672200e2da4b9b0d1d331b6e"} Jan 22 09:21:24 crc kubenswrapper[4811]: I0122 09:21:24.926239 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-8b9759fbd-gggfs" Jan 22 09:21:24 crc kubenswrapper[4811]: I0122 09:21:24.926249 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8b9759fbd-gggfs" event={"ID":"3ccb4838-2855-4017-9c00-e8765846d47e","Type":"ContainerStarted","Data":"b5c695e389fdee90e681326b0f6901391f4404a9ccb1ebc6e28255e7dfc021b6"} Jan 22 09:21:24 crc kubenswrapper[4811]: I0122 09:21:24.926257 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8b9759fbd-gggfs" event={"ID":"3ccb4838-2855-4017-9c00-e8765846d47e","Type":"ContainerStarted","Data":"db5c72b99798c0998f8f31e711dcaa1ba168fbadaef7902707f3d0325f8b399b"} Jan 22 09:21:24 crc kubenswrapper[4811]: I0122 09:21:24.927602 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ace218-8390-49f1-950a-7162f7bce032-combined-ca-bundle\") pod \"neutron-b5fd9ff5-td6xf\" (UID: \"d8ace218-8390-49f1-950a-7162f7bce032\") " pod="openstack/neutron-b5fd9ff5-td6xf" Jan 22 09:21:24 crc kubenswrapper[4811]: I0122 09:21:24.927861 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d8ace218-8390-49f1-950a-7162f7bce032-httpd-config\") pod \"neutron-b5fd9ff5-td6xf\" (UID: \"d8ace218-8390-49f1-950a-7162f7bce032\") " pod="openstack/neutron-b5fd9ff5-td6xf" Jan 22 09:21:24 crc kubenswrapper[4811]: I0122 09:21:24.927992 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d8ace218-8390-49f1-950a-7162f7bce032-public-tls-certs\") pod \"neutron-b5fd9ff5-td6xf\" (UID: \"d8ace218-8390-49f1-950a-7162f7bce032\") " pod="openstack/neutron-b5fd9ff5-td6xf" Jan 22 09:21:24 crc kubenswrapper[4811]: I0122 09:21:24.928080 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8ace218-8390-49f1-950a-7162f7bce032-internal-tls-certs\") pod \"neutron-b5fd9ff5-td6xf\" (UID: \"d8ace218-8390-49f1-950a-7162f7bce032\") " pod="openstack/neutron-b5fd9ff5-td6xf" Jan 22 09:21:24 crc kubenswrapper[4811]: I0122 09:21:24.928175 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d8ace218-8390-49f1-950a-7162f7bce032-config\") pod \"neutron-b5fd9ff5-td6xf\" (UID: \"d8ace218-8390-49f1-950a-7162f7bce032\") " pod="openstack/neutron-b5fd9ff5-td6xf" Jan 22 09:21:24 crc kubenswrapper[4811]: I0122 09:21:24.928247 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8ace218-8390-49f1-950a-7162f7bce032-ovndb-tls-certs\") pod \"neutron-b5fd9ff5-td6xf\" (UID: \"d8ace218-8390-49f1-950a-7162f7bce032\") " pod="openstack/neutron-b5fd9ff5-td6xf" Jan 22 09:21:24 crc kubenswrapper[4811]: I0122 09:21:24.928319 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsjgv\" (UniqueName: \"kubernetes.io/projected/d8ace218-8390-49f1-950a-7162f7bce032-kube-api-access-qsjgv\") pod \"neutron-b5fd9ff5-td6xf\" (UID: \"d8ace218-8390-49f1-950a-7162f7bce032\") " pod="openstack/neutron-b5fd9ff5-td6xf" Jan 22 09:21:24 crc kubenswrapper[4811]: I0122 09:21:24.934262 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d8ace218-8390-49f1-950a-7162f7bce032-httpd-config\") pod \"neutron-b5fd9ff5-td6xf\" (UID: \"d8ace218-8390-49f1-950a-7162f7bce032\") " pod="openstack/neutron-b5fd9ff5-td6xf" Jan 22 09:21:24 crc kubenswrapper[4811]: I0122 09:21:24.934492 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8ace218-8390-49f1-950a-7162f7bce032-internal-tls-certs\") pod \"neutron-b5fd9ff5-td6xf\" (UID: \"d8ace218-8390-49f1-950a-7162f7bce032\") " pod="openstack/neutron-b5fd9ff5-td6xf" Jan 22 09:21:24 crc kubenswrapper[4811]: I0122 09:21:24.936337 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ace218-8390-49f1-950a-7162f7bce032-combined-ca-bundle\") pod \"neutron-b5fd9ff5-td6xf\" (UID: \"d8ace218-8390-49f1-950a-7162f7bce032\") " pod="openstack/neutron-b5fd9ff5-td6xf" Jan 22 09:21:24 crc kubenswrapper[4811]: I0122 09:21:24.938611 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d8ace218-8390-49f1-950a-7162f7bce032-config\") pod \"neutron-b5fd9ff5-td6xf\" (UID: \"d8ace218-8390-49f1-950a-7162f7bce032\") " pod="openstack/neutron-b5fd9ff5-td6xf" Jan 22 09:21:24 crc kubenswrapper[4811]: I0122 09:21:24.942289 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8ace218-8390-49f1-950a-7162f7bce032-public-tls-certs\") pod \"neutron-b5fd9ff5-td6xf\" (UID: \"d8ace218-8390-49f1-950a-7162f7bce032\") " pod="openstack/neutron-b5fd9ff5-td6xf" Jan 22 09:21:24 
crc kubenswrapper[4811]: I0122 09:21:24.945427 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsjgv\" (UniqueName: \"kubernetes.io/projected/d8ace218-8390-49f1-950a-7162f7bce032-kube-api-access-qsjgv\") pod \"neutron-b5fd9ff5-td6xf\" (UID: \"d8ace218-8390-49f1-950a-7162f7bce032\") " pod="openstack/neutron-b5fd9ff5-td6xf" Jan 22 09:21:24 crc kubenswrapper[4811]: I0122 09:21:24.950231 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8ace218-8390-49f1-950a-7162f7bce032-ovndb-tls-certs\") pod \"neutron-b5fd9ff5-td6xf\" (UID: \"d8ace218-8390-49f1-950a-7162f7bce032\") " pod="openstack/neutron-b5fd9ff5-td6xf" Jan 22 09:21:24 crc kubenswrapper[4811]: I0122 09:21:24.983885 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-8b9759fbd-gggfs" podStartSLOduration=2.983873157 podStartE2EDuration="2.983873157s" podCreationTimestamp="2026-01-22 09:21:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:21:24.974494002 +0000 UTC m=+929.296681125" watchObservedRunningTime="2026-01-22 09:21:24.983873157 +0000 UTC m=+929.306060279" Jan 22 09:21:25 crc kubenswrapper[4811]: I0122 09:21:25.030089 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b5fd9ff5-td6xf" Jan 22 09:21:25 crc kubenswrapper[4811]: I0122 09:21:25.617540 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b5fd9ff5-td6xf"] Jan 22 09:21:25 crc kubenswrapper[4811]: I0122 09:21:25.967220 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-548894858c-nn2fm" event={"ID":"91f1af8b-22e5-45b9-ac16-6696840485e4","Type":"ContainerStarted","Data":"acb1305ea593365b2079387184ef6e638eb0b7028c698ac2fc7d8e1e5e90ee44"} Jan 22 09:21:25 crc kubenswrapper[4811]: I0122 09:21:25.968951 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-548894858c-nn2fm" Jan 22 09:21:25 crc kubenswrapper[4811]: I0122 09:21:25.997336 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-548894858c-nn2fm" podStartSLOduration=3.997321259 podStartE2EDuration="3.997321259s" podCreationTimestamp="2026-01-22 09:21:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:21:25.988735159 +0000 UTC m=+930.310922283" watchObservedRunningTime="2026-01-22 09:21:25.997321259 +0000 UTC m=+930.319508382" Jan 22 09:21:26 crc kubenswrapper[4811]: I0122 09:21:26.000719 4811 generic.go:334] "Generic (PLEG): container finished" podID="e22a065b-3b3c-41a9-ad35-b1c1e594af9b" containerID="65a67a4c6384c24670b32b62b88f9d49ab432208740d4873bc567b7af7673bae" exitCode=0 Jan 22 09:21:26 crc kubenswrapper[4811]: I0122 09:21:26.009400 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-zmjzd" event={"ID":"e22a065b-3b3c-41a9-ad35-b1c1e594af9b","Type":"ContainerDied","Data":"65a67a4c6384c24670b32b62b88f9d49ab432208740d4873bc567b7af7673bae"} Jan 22 09:21:26 crc kubenswrapper[4811]: I0122 09:21:26.009485 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b5fd9ff5-td6xf" 
event={"ID":"d8ace218-8390-49f1-950a-7162f7bce032","Type":"ContainerStarted","Data":"fa6201b52d14e99f6cea3de3c8c89efb500fedf9bd7c3d79ef1de96fd0c4ddcf"} Jan 22 09:21:26 crc kubenswrapper[4811]: I0122 09:21:26.009501 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b5fd9ff5-td6xf" event={"ID":"d8ace218-8390-49f1-950a-7162f7bce032","Type":"ContainerStarted","Data":"cf52d87bfede37c38a852f5c0f9615fc219d79e983f911a01d8548db7cd3108c"} Jan 22 09:21:26 crc kubenswrapper[4811]: I0122 09:21:26.015868 4811 generic.go:334] "Generic (PLEG): container finished" podID="7b642751-b1e4-4488-b305-fed7f4fcd9fa" containerID="ce01439c4f3c6a661151f644ce887beeb972d579495b1e7c28a458e92c77e19c" exitCode=0 Jan 22 09:21:26 crc kubenswrapper[4811]: I0122 09:21:26.015965 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rnmr4" event={"ID":"7b642751-b1e4-4488-b305-fed7f4fcd9fa","Type":"ContainerDied","Data":"ce01439c4f3c6a661151f644ce887beeb972d579495b1e7c28a458e92c77e19c"} Jan 22 09:21:27 crc kubenswrapper[4811]: I0122 09:21:27.023229 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b5fd9ff5-td6xf" event={"ID":"d8ace218-8390-49f1-950a-7162f7bce032","Type":"ContainerStarted","Data":"9391132e8d380c4c3f1fd0d0b7dfdabe021b6149cf92e443862c0b800bb9c893"} Jan 22 09:21:27 crc kubenswrapper[4811]: I0122 09:21:27.025280 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-b5fd9ff5-td6xf" Jan 22 09:21:27 crc kubenswrapper[4811]: I0122 09:21:27.028208 4811 generic.go:334] "Generic (PLEG): container finished" podID="14a1e4fa-5a60-47c5-a2da-e57110ca0b57" containerID="726e9b017aa22b710235e164587d5c31326c132ae65bebec4355aa245026cab8" exitCode=0 Jan 22 09:21:27 crc kubenswrapper[4811]: I0122 09:21:27.028361 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zt2qg" event={"ID":"14a1e4fa-5a60-47c5-a2da-e57110ca0b57","Type":"ContainerDied","Data":"726e9b017aa22b710235e164587d5c31326c132ae65bebec4355aa245026cab8"} Jan 22 09:21:27 crc kubenswrapper[4811]: I0122 09:21:27.075404 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-b5fd9ff5-td6xf" podStartSLOduration=3.075375727 podStartE2EDuration="3.075375727s" podCreationTimestamp="2026-01-22 09:21:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:21:27.042386064 +0000 UTC m=+931.364573188" watchObservedRunningTime="2026-01-22 09:21:27.075375727 +0000 UTC m=+931.397562850" Jan 22 09:21:27 crc kubenswrapper[4811]: I0122 09:21:27.388207 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-zmjzd" Jan 22 09:21:27 crc kubenswrapper[4811]: I0122 09:21:27.480152 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e22a065b-3b3c-41a9-ad35-b1c1e594af9b-combined-ca-bundle\") pod \"e22a065b-3b3c-41a9-ad35-b1c1e594af9b\" (UID: \"e22a065b-3b3c-41a9-ad35-b1c1e594af9b\") " Jan 22 09:21:27 crc kubenswrapper[4811]: I0122 09:21:27.480284 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e22a065b-3b3c-41a9-ad35-b1c1e594af9b-db-sync-config-data\") pod \"e22a065b-3b3c-41a9-ad35-b1c1e594af9b\" (UID: \"e22a065b-3b3c-41a9-ad35-b1c1e594af9b\") " Jan 22 09:21:27 crc kubenswrapper[4811]: I0122 09:21:27.480336 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmqq7\" (UniqueName: \"kubernetes.io/projected/e22a065b-3b3c-41a9-ad35-b1c1e594af9b-kube-api-access-tmqq7\") pod \"e22a065b-3b3c-41a9-ad35-b1c1e594af9b\" (UID: \"e22a065b-3b3c-41a9-ad35-b1c1e594af9b\") " Jan 22 09:21:27 crc kubenswrapper[4811]: I0122 09:21:27.492724 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e22a065b-3b3c-41a9-ad35-b1c1e594af9b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e22a065b-3b3c-41a9-ad35-b1c1e594af9b" (UID: "e22a065b-3b3c-41a9-ad35-b1c1e594af9b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:27 crc kubenswrapper[4811]: I0122 09:21:27.506810 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e22a065b-3b3c-41a9-ad35-b1c1e594af9b-kube-api-access-tmqq7" (OuterVolumeSpecName: "kube-api-access-tmqq7") pod "e22a065b-3b3c-41a9-ad35-b1c1e594af9b" (UID: "e22a065b-3b3c-41a9-ad35-b1c1e594af9b"). InnerVolumeSpecName "kube-api-access-tmqq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:21:27 crc kubenswrapper[4811]: I0122 09:21:27.517779 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e22a065b-3b3c-41a9-ad35-b1c1e594af9b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e22a065b-3b3c-41a9-ad35-b1c1e594af9b" (UID: "e22a065b-3b3c-41a9-ad35-b1c1e594af9b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:27 crc kubenswrapper[4811]: I0122 09:21:27.582606 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmqq7\" (UniqueName: \"kubernetes.io/projected/e22a065b-3b3c-41a9-ad35-b1c1e594af9b-kube-api-access-tmqq7\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:27 crc kubenswrapper[4811]: I0122 09:21:27.582652 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e22a065b-3b3c-41a9-ad35-b1c1e594af9b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:27 crc kubenswrapper[4811]: I0122 09:21:27.582662 4811 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e22a065b-3b3c-41a9-ad35-b1c1e594af9b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.035747 4811 generic.go:334] "Generic (PLEG): container finished" podID="1cd5889f-63d3-47a0-8b17-e8ffac0011d3" containerID="355d945e984c35fa416eaab7c76e47e82435510a56e52acccadfc3695b1fe0a3" exitCode=0 Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.035891 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-zztph" event={"ID":"1cd5889f-63d3-47a0-8b17-e8ffac0011d3","Type":"ContainerDied","Data":"355d945e984c35fa416eaab7c76e47e82435510a56e52acccadfc3695b1fe0a3"} Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.038449 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-zmjzd" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.038733 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-zmjzd" event={"ID":"e22a065b-3b3c-41a9-ad35-b1c1e594af9b","Type":"ContainerDied","Data":"7ce59424be3a3698d875724ffc129fe1009eab96be4aa85155246668e6676809"} Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.038763 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ce59424be3a3698d875724ffc129fe1009eab96be4aa85155246668e6676809" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.301551 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-866648ff8f-r22cc"] Jan 22 09:21:28 crc kubenswrapper[4811]: E0122 09:21:28.301910 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e22a065b-3b3c-41a9-ad35-b1c1e594af9b" containerName="barbican-db-sync" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.301925 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e22a065b-3b3c-41a9-ad35-b1c1e594af9b" containerName="barbican-db-sync" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.302102 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="e22a065b-3b3c-41a9-ad35-b1c1e594af9b" containerName="barbican-db-sync" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.302892 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-866648ff8f-r22cc" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.311992 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.312281 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-4drkf" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.314221 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.362372 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-866648ff8f-r22cc"] Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.426243 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5c6c94454b-v7bc2"] Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.439094 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5c6c94454b-v7bc2" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.445245 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/01a5613f-ca39-400f-83e5-8c2e04474ce3-config-data-custom\") pod \"barbican-worker-866648ff8f-r22cc\" (UID: \"01a5613f-ca39-400f-83e5-8c2e04474ce3\") " pod="openstack/barbican-worker-866648ff8f-r22cc" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.445333 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01a5613f-ca39-400f-83e5-8c2e04474ce3-config-data\") pod \"barbican-worker-866648ff8f-r22cc\" (UID: \"01a5613f-ca39-400f-83e5-8c2e04474ce3\") " pod="openstack/barbican-worker-866648ff8f-r22cc" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.445416 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlf7k\" (UniqueName: \"kubernetes.io/projected/01a5613f-ca39-400f-83e5-8c2e04474ce3-kube-api-access-hlf7k\") pod \"barbican-worker-866648ff8f-r22cc\" (UID: \"01a5613f-ca39-400f-83e5-8c2e04474ce3\") " pod="openstack/barbican-worker-866648ff8f-r22cc" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.445693 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01a5613f-ca39-400f-83e5-8c2e04474ce3-logs\") pod \"barbican-worker-866648ff8f-r22cc\" (UID: \"01a5613f-ca39-400f-83e5-8c2e04474ce3\") " pod="openstack/barbican-worker-866648ff8f-r22cc" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.445826 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01a5613f-ca39-400f-83e5-8c2e04474ce3-combined-ca-bundle\") pod \"barbican-worker-866648ff8f-r22cc\" (UID: \"01a5613f-ca39-400f-83e5-8c2e04474ce3\") " pod="openstack/barbican-worker-866648ff8f-r22cc" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.447897 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.448167 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/barbican-keystone-listener-5c6c94454b-v7bc2"] Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.551708 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1bf59625-a642-4155-9e83-46cd7d874f50-config-data-custom\") pod \"barbican-keystone-listener-5c6c94454b-v7bc2\" (UID: \"1bf59625-a642-4155-9e83-46cd7d874f50\") " pod="openstack/barbican-keystone-listener-5c6c94454b-v7bc2" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.551807 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlf7k\" (UniqueName: \"kubernetes.io/projected/01a5613f-ca39-400f-83e5-8c2e04474ce3-kube-api-access-hlf7k\") pod \"barbican-worker-866648ff8f-r22cc\" (UID: \"01a5613f-ca39-400f-83e5-8c2e04474ce3\") " pod="openstack/barbican-worker-866648ff8f-r22cc" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.552049 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01a5613f-ca39-400f-83e5-8c2e04474ce3-logs\") pod \"barbican-worker-866648ff8f-r22cc\" (UID: \"01a5613f-ca39-400f-83e5-8c2e04474ce3\") " pod="openstack/barbican-worker-866648ff8f-r22cc" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.552109 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01a5613f-ca39-400f-83e5-8c2e04474ce3-combined-ca-bundle\") pod \"barbican-worker-866648ff8f-r22cc\" (UID: \"01a5613f-ca39-400f-83e5-8c2e04474ce3\") " pod="openstack/barbican-worker-866648ff8f-r22cc" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.552235 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmvl4\" (UniqueName: \"kubernetes.io/projected/1bf59625-a642-4155-9e83-46cd7d874f50-kube-api-access-kmvl4\") pod \"barbican-keystone-listener-5c6c94454b-v7bc2\" (UID: \"1bf59625-a642-4155-9e83-46cd7d874f50\") " pod="openstack/barbican-keystone-listener-5c6c94454b-v7bc2" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.552284 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/01a5613f-ca39-400f-83e5-8c2e04474ce3-config-data-custom\") pod \"barbican-worker-866648ff8f-r22cc\" (UID: \"01a5613f-ca39-400f-83e5-8c2e04474ce3\") " pod="openstack/barbican-worker-866648ff8f-r22cc" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.552405 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bf59625-a642-4155-9e83-46cd7d874f50-logs\") pod \"barbican-keystone-listener-5c6c94454b-v7bc2\" (UID: \"1bf59625-a642-4155-9e83-46cd7d874f50\") " pod="openstack/barbican-keystone-listener-5c6c94454b-v7bc2" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.552470 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01a5613f-ca39-400f-83e5-8c2e04474ce3-config-data\") pod \"barbican-worker-866648ff8f-r22cc\" (UID: \"01a5613f-ca39-400f-83e5-8c2e04474ce3\") " pod="openstack/barbican-worker-866648ff8f-r22cc" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.552499 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1bf59625-a642-4155-9e83-46cd7d874f50-combined-ca-bundle\") pod \"barbican-keystone-listener-5c6c94454b-v7bc2\" (UID: \"1bf59625-a642-4155-9e83-46cd7d874f50\") " pod="openstack/barbican-keystone-listener-5c6c94454b-v7bc2" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.552527 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bf59625-a642-4155-9e83-46cd7d874f50-config-data\") pod \"barbican-keystone-listener-5c6c94454b-v7bc2\" (UID: \"1bf59625-a642-4155-9e83-46cd7d874f50\") " pod="openstack/barbican-keystone-listener-5c6c94454b-v7bc2" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.553159 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01a5613f-ca39-400f-83e5-8c2e04474ce3-logs\") pod \"barbican-worker-866648ff8f-r22cc\" (UID: \"01a5613f-ca39-400f-83e5-8c2e04474ce3\") " pod="openstack/barbican-worker-866648ff8f-r22cc" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.585255 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-548894858c-nn2fm"] Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.589287 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01a5613f-ca39-400f-83e5-8c2e04474ce3-combined-ca-bundle\") pod \"barbican-worker-866648ff8f-r22cc\" (UID: \"01a5613f-ca39-400f-83e5-8c2e04474ce3\") " pod="openstack/barbican-worker-866648ff8f-r22cc" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.594998 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01a5613f-ca39-400f-83e5-8c2e04474ce3-config-data\") pod \"barbican-worker-866648ff8f-r22cc\" (UID: \"01a5613f-ca39-400f-83e5-8c2e04474ce3\") " pod="openstack/barbican-worker-866648ff8f-r22cc" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.609159 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/01a5613f-ca39-400f-83e5-8c2e04474ce3-config-data-custom\") pod \"barbican-worker-866648ff8f-r22cc\" (UID: \"01a5613f-ca39-400f-83e5-8c2e04474ce3\") " pod="openstack/barbican-worker-866648ff8f-r22cc" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.649497 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlf7k\" (UniqueName: \"kubernetes.io/projected/01a5613f-ca39-400f-83e5-8c2e04474ce3-kube-api-access-hlf7k\") pod \"barbican-worker-866648ff8f-r22cc\" (UID: \"01a5613f-ca39-400f-83e5-8c2e04474ce3\") " pod="openstack/barbican-worker-866648ff8f-r22cc" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.654752 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmvl4\" (UniqueName: \"kubernetes.io/projected/1bf59625-a642-4155-9e83-46cd7d874f50-kube-api-access-kmvl4\") pod \"barbican-keystone-listener-5c6c94454b-v7bc2\" (UID: \"1bf59625-a642-4155-9e83-46cd7d874f50\") " pod="openstack/barbican-keystone-listener-5c6c94454b-v7bc2" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.654848 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bf59625-a642-4155-9e83-46cd7d874f50-logs\") pod \"barbican-keystone-listener-5c6c94454b-v7bc2\" (UID: \"1bf59625-a642-4155-9e83-46cd7d874f50\") " 
pod="openstack/barbican-keystone-listener-5c6c94454b-v7bc2" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.654889 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bf59625-a642-4155-9e83-46cd7d874f50-combined-ca-bundle\") pod \"barbican-keystone-listener-5c6c94454b-v7bc2\" (UID: \"1bf59625-a642-4155-9e83-46cd7d874f50\") " pod="openstack/barbican-keystone-listener-5c6c94454b-v7bc2" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.654915 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bf59625-a642-4155-9e83-46cd7d874f50-config-data\") pod \"barbican-keystone-listener-5c6c94454b-v7bc2\" (UID: \"1bf59625-a642-4155-9e83-46cd7d874f50\") " pod="openstack/barbican-keystone-listener-5c6c94454b-v7bc2" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.654985 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1bf59625-a642-4155-9e83-46cd7d874f50-config-data-custom\") pod \"barbican-keystone-listener-5c6c94454b-v7bc2\" (UID: \"1bf59625-a642-4155-9e83-46cd7d874f50\") " pod="openstack/barbican-keystone-listener-5c6c94454b-v7bc2" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.656935 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bf59625-a642-4155-9e83-46cd7d874f50-logs\") pod \"barbican-keystone-listener-5c6c94454b-v7bc2\" (UID: \"1bf59625-a642-4155-9e83-46cd7d874f50\") " pod="openstack/barbican-keystone-listener-5c6c94454b-v7bc2" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.663124 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bf59625-a642-4155-9e83-46cd7d874f50-combined-ca-bundle\") pod \"barbican-keystone-listener-5c6c94454b-v7bc2\" (UID: \"1bf59625-a642-4155-9e83-46cd7d874f50\") " pod="openstack/barbican-keystone-listener-5c6c94454b-v7bc2" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.669510 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bf59625-a642-4155-9e83-46cd7d874f50-config-data\") pod \"barbican-keystone-listener-5c6c94454b-v7bc2\" (UID: \"1bf59625-a642-4155-9e83-46cd7d874f50\") " pod="openstack/barbican-keystone-listener-5c6c94454b-v7bc2" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.672154 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1bf59625-a642-4155-9e83-46cd7d874f50-config-data-custom\") pod \"barbican-keystone-listener-5c6c94454b-v7bc2\" (UID: \"1bf59625-a642-4155-9e83-46cd7d874f50\") " pod="openstack/barbican-keystone-listener-5c6c94454b-v7bc2" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.687843 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-744f57c8cc-f74wj"] Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.689143 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744f57c8cc-f74wj" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.690146 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-866648ff8f-r22cc" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.707079 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-744f57c8cc-f74wj"] Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.747096 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmvl4\" (UniqueName: \"kubernetes.io/projected/1bf59625-a642-4155-9e83-46cd7d874f50-kube-api-access-kmvl4\") pod \"barbican-keystone-listener-5c6c94454b-v7bc2\" (UID: \"1bf59625-a642-4155-9e83-46cd7d874f50\") " pod="openstack/barbican-keystone-listener-5c6c94454b-v7bc2" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.761001 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxgpr\" (UniqueName: \"kubernetes.io/projected/e3f473da-b7d9-45c0-9380-199a07972c9c-kube-api-access-cxgpr\") pod \"dnsmasq-dns-744f57c8cc-f74wj\" (UID: \"e3f473da-b7d9-45c0-9380-199a07972c9c\") " pod="openstack/dnsmasq-dns-744f57c8cc-f74wj" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.761115 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e3f473da-b7d9-45c0-9380-199a07972c9c-dns-svc\") pod \"dnsmasq-dns-744f57c8cc-f74wj\" (UID: \"e3f473da-b7d9-45c0-9380-199a07972c9c\") " pod="openstack/dnsmasq-dns-744f57c8cc-f74wj" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.761142 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e3f473da-b7d9-45c0-9380-199a07972c9c-ovsdbserver-nb\") pod \"dnsmasq-dns-744f57c8cc-f74wj\" (UID: \"e3f473da-b7d9-45c0-9380-199a07972c9c\") " pod="openstack/dnsmasq-dns-744f57c8cc-f74wj" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.761167 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e3f473da-b7d9-45c0-9380-199a07972c9c-ovsdbserver-sb\") pod \"dnsmasq-dns-744f57c8cc-f74wj\" (UID: \"e3f473da-b7d9-45c0-9380-199a07972c9c\") " pod="openstack/dnsmasq-dns-744f57c8cc-f74wj" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.761229 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3f473da-b7d9-45c0-9380-199a07972c9c-config\") pod \"dnsmasq-dns-744f57c8cc-f74wj\" (UID: \"e3f473da-b7d9-45c0-9380-199a07972c9c\") " pod="openstack/dnsmasq-dns-744f57c8cc-f74wj" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.795843 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5c6c94454b-v7bc2" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.847564 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7757dc6854-tnspq"] Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.856270 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7757dc6854-tnspq" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.856269 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7757dc6854-tnspq"] Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.860735 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.863662 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e3f473da-b7d9-45c0-9380-199a07972c9c-ovsdbserver-sb\") pod \"dnsmasq-dns-744f57c8cc-f74wj\" (UID: \"e3f473da-b7d9-45c0-9380-199a07972c9c\") " pod="openstack/dnsmasq-dns-744f57c8cc-f74wj" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.863777 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3f473da-b7d9-45c0-9380-199a07972c9c-config\") pod \"dnsmasq-dns-744f57c8cc-f74wj\" (UID: \"e3f473da-b7d9-45c0-9380-199a07972c9c\") " pod="openstack/dnsmasq-dns-744f57c8cc-f74wj" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.863849 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxgpr\" (UniqueName: \"kubernetes.io/projected/e3f473da-b7d9-45c0-9380-199a07972c9c-kube-api-access-cxgpr\") pod \"dnsmasq-dns-744f57c8cc-f74wj\" (UID: \"e3f473da-b7d9-45c0-9380-199a07972c9c\") " pod="openstack/dnsmasq-dns-744f57c8cc-f74wj" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.864978 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e3f473da-b7d9-45c0-9380-199a07972c9c-ovsdbserver-sb\") pod \"dnsmasq-dns-744f57c8cc-f74wj\" (UID: \"e3f473da-b7d9-45c0-9380-199a07972c9c\") " pod="openstack/dnsmasq-dns-744f57c8cc-f74wj" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.866475 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e3f473da-b7d9-45c0-9380-199a07972c9c-dns-svc\") pod \"dnsmasq-dns-744f57c8cc-f74wj\" (UID: \"e3f473da-b7d9-45c0-9380-199a07972c9c\") " pod="openstack/dnsmasq-dns-744f57c8cc-f74wj" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.866518 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e3f473da-b7d9-45c0-9380-199a07972c9c-ovsdbserver-nb\") pod \"dnsmasq-dns-744f57c8cc-f74wj\" (UID: \"e3f473da-b7d9-45c0-9380-199a07972c9c\") " pod="openstack/dnsmasq-dns-744f57c8cc-f74wj" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.866855 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3f473da-b7d9-45c0-9380-199a07972c9c-config\") pod \"dnsmasq-dns-744f57c8cc-f74wj\" (UID: \"e3f473da-b7d9-45c0-9380-199a07972c9c\") " pod="openstack/dnsmasq-dns-744f57c8cc-f74wj" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.867180 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e3f473da-b7d9-45c0-9380-199a07972c9c-ovsdbserver-nb\") pod \"dnsmasq-dns-744f57c8cc-f74wj\" (UID: \"e3f473da-b7d9-45c0-9380-199a07972c9c\") " pod="openstack/dnsmasq-dns-744f57c8cc-f74wj" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.867658 4811 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e3f473da-b7d9-45c0-9380-199a07972c9c-dns-svc\") pod \"dnsmasq-dns-744f57c8cc-f74wj\" (UID: \"e3f473da-b7d9-45c0-9380-199a07972c9c\") " pod="openstack/dnsmasq-dns-744f57c8cc-f74wj" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.918198 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxgpr\" (UniqueName: \"kubernetes.io/projected/e3f473da-b7d9-45c0-9380-199a07972c9c-kube-api-access-cxgpr\") pod \"dnsmasq-dns-744f57c8cc-f74wj\" (UID: \"e3f473da-b7d9-45c0-9380-199a07972c9c\") " pod="openstack/dnsmasq-dns-744f57c8cc-f74wj" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.963857 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xs5vv"] Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.965413 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xs5vv" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.967783 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49ce92f5-a20a-4429-89a1-764b3db3e28a-config-data-custom\") pod \"barbican-api-7757dc6854-tnspq\" (UID: \"49ce92f5-a20a-4429-89a1-764b3db3e28a\") " pod="openstack/barbican-api-7757dc6854-tnspq" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.967871 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49ce92f5-a20a-4429-89a1-764b3db3e28a-combined-ca-bundle\") pod \"barbican-api-7757dc6854-tnspq\" (UID: \"49ce92f5-a20a-4429-89a1-764b3db3e28a\") " pod="openstack/barbican-api-7757dc6854-tnspq" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.967955 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49ce92f5-a20a-4429-89a1-764b3db3e28a-config-data\") pod \"barbican-api-7757dc6854-tnspq\" (UID: \"49ce92f5-a20a-4429-89a1-764b3db3e28a\") " pod="openstack/barbican-api-7757dc6854-tnspq" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.967989 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49ce92f5-a20a-4429-89a1-764b3db3e28a-logs\") pod \"barbican-api-7757dc6854-tnspq\" (UID: \"49ce92f5-a20a-4429-89a1-764b3db3e28a\") " pod="openstack/barbican-api-7757dc6854-tnspq" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.968147 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj2fl\" (UniqueName: \"kubernetes.io/projected/49ce92f5-a20a-4429-89a1-764b3db3e28a-kube-api-access-fj2fl\") pod \"barbican-api-7757dc6854-tnspq\" (UID: \"49ce92f5-a20a-4429-89a1-764b3db3e28a\") " pod="openstack/barbican-api-7757dc6854-tnspq" Jan 22 09:21:28 crc kubenswrapper[4811]: I0122 09:21:28.989565 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xs5vv"] Jan 22 09:21:29 crc kubenswrapper[4811]: I0122 09:21:29.060406 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-744f57c8cc-f74wj" Jan 22 09:21:29 crc kubenswrapper[4811]: I0122 09:21:29.061483 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-548894858c-nn2fm" podUID="91f1af8b-22e5-45b9-ac16-6696840485e4" containerName="dnsmasq-dns" containerID="cri-o://acb1305ea593365b2079387184ef6e638eb0b7028c698ac2fc7d8e1e5e90ee44" gracePeriod=10 Jan 22 09:21:29 crc kubenswrapper[4811]: I0122 09:21:29.072506 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49ce92f5-a20a-4429-89a1-764b3db3e28a-config-data-custom\") pod \"barbican-api-7757dc6854-tnspq\" (UID: \"49ce92f5-a20a-4429-89a1-764b3db3e28a\") " pod="openstack/barbican-api-7757dc6854-tnspq" Jan 22 09:21:29 crc kubenswrapper[4811]: I0122 09:21:29.072560 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e75e74d-a6bf-48c5-bedb-8fb30806d29e-utilities\") pod \"redhat-operators-xs5vv\" (UID: \"8e75e74d-a6bf-48c5-bedb-8fb30806d29e\") " pod="openshift-marketplace/redhat-operators-xs5vv" Jan 22 09:21:29 crc kubenswrapper[4811]: I0122 09:21:29.072609 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49ce92f5-a20a-4429-89a1-764b3db3e28a-combined-ca-bundle\") pod \"barbican-api-7757dc6854-tnspq\" (UID: \"49ce92f5-a20a-4429-89a1-764b3db3e28a\") " pod="openstack/barbican-api-7757dc6854-tnspq" Jan 22 09:21:29 crc kubenswrapper[4811]: I0122 09:21:29.072637 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e75e74d-a6bf-48c5-bedb-8fb30806d29e-catalog-content\") pod \"redhat-operators-xs5vv\" (UID: \"8e75e74d-a6bf-48c5-bedb-8fb30806d29e\") " pod="openshift-marketplace/redhat-operators-xs5vv" Jan 22 09:21:29 crc kubenswrapper[4811]: I0122 09:21:29.072693 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49ce92f5-a20a-4429-89a1-764b3db3e28a-config-data\") pod \"barbican-api-7757dc6854-tnspq\" (UID: \"49ce92f5-a20a-4429-89a1-764b3db3e28a\") " pod="openstack/barbican-api-7757dc6854-tnspq" Jan 22 09:21:29 crc kubenswrapper[4811]: I0122 09:21:29.072724 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49ce92f5-a20a-4429-89a1-764b3db3e28a-logs\") pod \"barbican-api-7757dc6854-tnspq\" (UID: \"49ce92f5-a20a-4429-89a1-764b3db3e28a\") " pod="openstack/barbican-api-7757dc6854-tnspq" Jan 22 09:21:29 crc kubenswrapper[4811]: I0122 09:21:29.072842 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x72c\" (UniqueName: \"kubernetes.io/projected/8e75e74d-a6bf-48c5-bedb-8fb30806d29e-kube-api-access-9x72c\") pod \"redhat-operators-xs5vv\" (UID: \"8e75e74d-a6bf-48c5-bedb-8fb30806d29e\") " pod="openshift-marketplace/redhat-operators-xs5vv" Jan 22 09:21:29 crc kubenswrapper[4811]: I0122 09:21:29.072872 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fj2fl\" (UniqueName: \"kubernetes.io/projected/49ce92f5-a20a-4429-89a1-764b3db3e28a-kube-api-access-fj2fl\") pod \"barbican-api-7757dc6854-tnspq\" (UID: 
\"49ce92f5-a20a-4429-89a1-764b3db3e28a\") " pod="openstack/barbican-api-7757dc6854-tnspq" Jan 22 09:21:29 crc kubenswrapper[4811]: I0122 09:21:29.076940 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49ce92f5-a20a-4429-89a1-764b3db3e28a-config-data-custom\") pod \"barbican-api-7757dc6854-tnspq\" (UID: \"49ce92f5-a20a-4429-89a1-764b3db3e28a\") " pod="openstack/barbican-api-7757dc6854-tnspq" Jan 22 09:21:29 crc kubenswrapper[4811]: I0122 09:21:29.077344 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49ce92f5-a20a-4429-89a1-764b3db3e28a-logs\") pod \"barbican-api-7757dc6854-tnspq\" (UID: \"49ce92f5-a20a-4429-89a1-764b3db3e28a\") " pod="openstack/barbican-api-7757dc6854-tnspq" Jan 22 09:21:29 crc kubenswrapper[4811]: I0122 09:21:29.082194 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49ce92f5-a20a-4429-89a1-764b3db3e28a-combined-ca-bundle\") pod \"barbican-api-7757dc6854-tnspq\" (UID: \"49ce92f5-a20a-4429-89a1-764b3db3e28a\") " pod="openstack/barbican-api-7757dc6854-tnspq" Jan 22 09:21:29 crc kubenswrapper[4811]: I0122 09:21:29.086454 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49ce92f5-a20a-4429-89a1-764b3db3e28a-config-data\") pod \"barbican-api-7757dc6854-tnspq\" (UID: \"49ce92f5-a20a-4429-89a1-764b3db3e28a\") " pod="openstack/barbican-api-7757dc6854-tnspq" Jan 22 09:21:29 crc kubenswrapper[4811]: I0122 09:21:29.098841 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fj2fl\" (UniqueName: \"kubernetes.io/projected/49ce92f5-a20a-4429-89a1-764b3db3e28a-kube-api-access-fj2fl\") pod \"barbican-api-7757dc6854-tnspq\" (UID: \"49ce92f5-a20a-4429-89a1-764b3db3e28a\") " pod="openstack/barbican-api-7757dc6854-tnspq" Jan 22 09:21:29 crc kubenswrapper[4811]: I0122 09:21:29.174528 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x72c\" (UniqueName: \"kubernetes.io/projected/8e75e74d-a6bf-48c5-bedb-8fb30806d29e-kube-api-access-9x72c\") pod \"redhat-operators-xs5vv\" (UID: \"8e75e74d-a6bf-48c5-bedb-8fb30806d29e\") " pod="openshift-marketplace/redhat-operators-xs5vv" Jan 22 09:21:29 crc kubenswrapper[4811]: I0122 09:21:29.174998 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e75e74d-a6bf-48c5-bedb-8fb30806d29e-utilities\") pod \"redhat-operators-xs5vv\" (UID: \"8e75e74d-a6bf-48c5-bedb-8fb30806d29e\") " pod="openshift-marketplace/redhat-operators-xs5vv" Jan 22 09:21:29 crc kubenswrapper[4811]: I0122 09:21:29.175064 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e75e74d-a6bf-48c5-bedb-8fb30806d29e-catalog-content\") pod \"redhat-operators-xs5vv\" (UID: \"8e75e74d-a6bf-48c5-bedb-8fb30806d29e\") " pod="openshift-marketplace/redhat-operators-xs5vv" Jan 22 09:21:29 crc kubenswrapper[4811]: I0122 09:21:29.175470 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e75e74d-a6bf-48c5-bedb-8fb30806d29e-utilities\") pod \"redhat-operators-xs5vv\" (UID: \"8e75e74d-a6bf-48c5-bedb-8fb30806d29e\") " pod="openshift-marketplace/redhat-operators-xs5vv" Jan 22 
09:21:29 crc kubenswrapper[4811]: I0122 09:21:29.176354 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e75e74d-a6bf-48c5-bedb-8fb30806d29e-catalog-content\") pod \"redhat-operators-xs5vv\" (UID: \"8e75e74d-a6bf-48c5-bedb-8fb30806d29e\") " pod="openshift-marketplace/redhat-operators-xs5vv" Jan 22 09:21:29 crc kubenswrapper[4811]: I0122 09:21:29.178794 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7757dc6854-tnspq" Jan 22 09:21:29 crc kubenswrapper[4811]: I0122 09:21:29.195646 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x72c\" (UniqueName: \"kubernetes.io/projected/8e75e74d-a6bf-48c5-bedb-8fb30806d29e-kube-api-access-9x72c\") pod \"redhat-operators-xs5vv\" (UID: \"8e75e74d-a6bf-48c5-bedb-8fb30806d29e\") " pod="openshift-marketplace/redhat-operators-xs5vv" Jan 22 09:21:29 crc kubenswrapper[4811]: I0122 09:21:29.296270 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xs5vv" Jan 22 09:21:29 crc kubenswrapper[4811]: I0122 09:21:29.818879 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-zztph" Jan 22 09:21:29 crc kubenswrapper[4811]: I0122 09:21:29.833226 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zt2qg" Jan 22 09:21:29 crc kubenswrapper[4811]: I0122 09:21:29.900096 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cd5889f-63d3-47a0-8b17-e8ffac0011d3-config-data\") pod \"1cd5889f-63d3-47a0-8b17-e8ffac0011d3\" (UID: \"1cd5889f-63d3-47a0-8b17-e8ffac0011d3\") " Jan 22 09:21:29 crc kubenswrapper[4811]: I0122 09:21:29.900154 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cd5889f-63d3-47a0-8b17-e8ffac0011d3-combined-ca-bundle\") pod \"1cd5889f-63d3-47a0-8b17-e8ffac0011d3\" (UID: \"1cd5889f-63d3-47a0-8b17-e8ffac0011d3\") " Jan 22 09:21:29 crc kubenswrapper[4811]: I0122 09:21:29.900172 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6wfm\" (UniqueName: \"kubernetes.io/projected/1cd5889f-63d3-47a0-8b17-e8ffac0011d3-kube-api-access-q6wfm\") pod \"1cd5889f-63d3-47a0-8b17-e8ffac0011d3\" (UID: \"1cd5889f-63d3-47a0-8b17-e8ffac0011d3\") " Jan 22 09:21:29 crc kubenswrapper[4811]: I0122 09:21:29.900221 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1cd5889f-63d3-47a0-8b17-e8ffac0011d3-etc-machine-id\") pod \"1cd5889f-63d3-47a0-8b17-e8ffac0011d3\" (UID: \"1cd5889f-63d3-47a0-8b17-e8ffac0011d3\") " Jan 22 09:21:29 crc kubenswrapper[4811]: I0122 09:21:29.900260 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cd5889f-63d3-47a0-8b17-e8ffac0011d3-scripts\") pod \"1cd5889f-63d3-47a0-8b17-e8ffac0011d3\" (UID: \"1cd5889f-63d3-47a0-8b17-e8ffac0011d3\") " Jan 22 09:21:29 crc kubenswrapper[4811]: I0122 09:21:29.900286 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1cd5889f-63d3-47a0-8b17-e8ffac0011d3-db-sync-config-data\") pod 
\"1cd5889f-63d3-47a0-8b17-e8ffac0011d3\" (UID: \"1cd5889f-63d3-47a0-8b17-e8ffac0011d3\") " Jan 22 09:21:29 crc kubenswrapper[4811]: I0122 09:21:29.901025 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cd5889f-63d3-47a0-8b17-e8ffac0011d3-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "1cd5889f-63d3-47a0-8b17-e8ffac0011d3" (UID: "1cd5889f-63d3-47a0-8b17-e8ffac0011d3"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:21:29 crc kubenswrapper[4811]: I0122 09:21:29.903818 4811 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1cd5889f-63d3-47a0-8b17-e8ffac0011d3-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:29 crc kubenswrapper[4811]: I0122 09:21:29.914265 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cd5889f-63d3-47a0-8b17-e8ffac0011d3-scripts" (OuterVolumeSpecName: "scripts") pod "1cd5889f-63d3-47a0-8b17-e8ffac0011d3" (UID: "1cd5889f-63d3-47a0-8b17-e8ffac0011d3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:29 crc kubenswrapper[4811]: I0122 09:21:29.921569 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cd5889f-63d3-47a0-8b17-e8ffac0011d3-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1cd5889f-63d3-47a0-8b17-e8ffac0011d3" (UID: "1cd5889f-63d3-47a0-8b17-e8ffac0011d3"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:29 crc kubenswrapper[4811]: I0122 09:21:29.943157 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cd5889f-63d3-47a0-8b17-e8ffac0011d3-kube-api-access-q6wfm" (OuterVolumeSpecName: "kube-api-access-q6wfm") pod "1cd5889f-63d3-47a0-8b17-e8ffac0011d3" (UID: "1cd5889f-63d3-47a0-8b17-e8ffac0011d3"). InnerVolumeSpecName "kube-api-access-q6wfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:21:29 crc kubenswrapper[4811]: I0122 09:21:29.976676 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cd5889f-63d3-47a0-8b17-e8ffac0011d3-config-data" (OuterVolumeSpecName: "config-data") pod "1cd5889f-63d3-47a0-8b17-e8ffac0011d3" (UID: "1cd5889f-63d3-47a0-8b17-e8ffac0011d3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:29 crc kubenswrapper[4811]: I0122 09:21:29.978287 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cd5889f-63d3-47a0-8b17-e8ffac0011d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1cd5889f-63d3-47a0-8b17-e8ffac0011d3" (UID: "1cd5889f-63d3-47a0-8b17-e8ffac0011d3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.006541 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14a1e4fa-5a60-47c5-a2da-e57110ca0b57-config-data\") pod \"14a1e4fa-5a60-47c5-a2da-e57110ca0b57\" (UID: \"14a1e4fa-5a60-47c5-a2da-e57110ca0b57\") " Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.006778 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/14a1e4fa-5a60-47c5-a2da-e57110ca0b57-fernet-keys\") pod \"14a1e4fa-5a60-47c5-a2da-e57110ca0b57\" (UID: \"14a1e4fa-5a60-47c5-a2da-e57110ca0b57\") " Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.006868 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/14a1e4fa-5a60-47c5-a2da-e57110ca0b57-credential-keys\") pod \"14a1e4fa-5a60-47c5-a2da-e57110ca0b57\" (UID: \"14a1e4fa-5a60-47c5-a2da-e57110ca0b57\") " Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.006977 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmtqw\" (UniqueName: \"kubernetes.io/projected/14a1e4fa-5a60-47c5-a2da-e57110ca0b57-kube-api-access-cmtqw\") pod \"14a1e4fa-5a60-47c5-a2da-e57110ca0b57\" (UID: \"14a1e4fa-5a60-47c5-a2da-e57110ca0b57\") " Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.007061 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14a1e4fa-5a60-47c5-a2da-e57110ca0b57-combined-ca-bundle\") pod \"14a1e4fa-5a60-47c5-a2da-e57110ca0b57\" (UID: \"14a1e4fa-5a60-47c5-a2da-e57110ca0b57\") " Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.007360 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14a1e4fa-5a60-47c5-a2da-e57110ca0b57-scripts\") pod \"14a1e4fa-5a60-47c5-a2da-e57110ca0b57\" (UID: \"14a1e4fa-5a60-47c5-a2da-e57110ca0b57\") " Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.007992 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cd5889f-63d3-47a0-8b17-e8ffac0011d3-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.008066 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cd5889f-63d3-47a0-8b17-e8ffac0011d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.008122 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6wfm\" (UniqueName: \"kubernetes.io/projected/1cd5889f-63d3-47a0-8b17-e8ffac0011d3-kube-api-access-q6wfm\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.008171 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cd5889f-63d3-47a0-8b17-e8ffac0011d3-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.008219 4811 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1cd5889f-63d3-47a0-8b17-e8ffac0011d3-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 
09:21:30.014603 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14a1e4fa-5a60-47c5-a2da-e57110ca0b57-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "14a1e4fa-5a60-47c5-a2da-e57110ca0b57" (UID: "14a1e4fa-5a60-47c5-a2da-e57110ca0b57"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.014998 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14a1e4fa-5a60-47c5-a2da-e57110ca0b57-kube-api-access-cmtqw" (OuterVolumeSpecName: "kube-api-access-cmtqw") pod "14a1e4fa-5a60-47c5-a2da-e57110ca0b57" (UID: "14a1e4fa-5a60-47c5-a2da-e57110ca0b57"). InnerVolumeSpecName "kube-api-access-cmtqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.020012 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14a1e4fa-5a60-47c5-a2da-e57110ca0b57-scripts" (OuterVolumeSpecName: "scripts") pod "14a1e4fa-5a60-47c5-a2da-e57110ca0b57" (UID: "14a1e4fa-5a60-47c5-a2da-e57110ca0b57"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.034299 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14a1e4fa-5a60-47c5-a2da-e57110ca0b57-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "14a1e4fa-5a60-47c5-a2da-e57110ca0b57" (UID: "14a1e4fa-5a60-47c5-a2da-e57110ca0b57"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.048765 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14a1e4fa-5a60-47c5-a2da-e57110ca0b57-config-data" (OuterVolumeSpecName: "config-data") pod "14a1e4fa-5a60-47c5-a2da-e57110ca0b57" (UID: "14a1e4fa-5a60-47c5-a2da-e57110ca0b57"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.064825 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14a1e4fa-5a60-47c5-a2da-e57110ca0b57-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "14a1e4fa-5a60-47c5-a2da-e57110ca0b57" (UID: "14a1e4fa-5a60-47c5-a2da-e57110ca0b57"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.078930 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zt2qg" event={"ID":"14a1e4fa-5a60-47c5-a2da-e57110ca0b57","Type":"ContainerDied","Data":"6d34ea78b75b67d1f13bc858222a74c8c08d7dbfca8f109c4a6373a297604a88"} Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.078980 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d34ea78b75b67d1f13bc858222a74c8c08d7dbfca8f109c4a6373a297604a88" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.079043 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-zt2qg" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.084990 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-zztph" event={"ID":"1cd5889f-63d3-47a0-8b17-e8ffac0011d3","Type":"ContainerDied","Data":"aa4cca89ee2fff064a09ad2133ada22854189dd6d3ab4a5226a2b4f72716c1e5"} Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.085022 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa4cca89ee2fff064a09ad2133ada22854189dd6d3ab4a5226a2b4f72716c1e5" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.085065 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-zztph" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.102957 4811 generic.go:334] "Generic (PLEG): container finished" podID="91f1af8b-22e5-45b9-ac16-6696840485e4" containerID="acb1305ea593365b2079387184ef6e638eb0b7028c698ac2fc7d8e1e5e90ee44" exitCode=0 Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.103205 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-548894858c-nn2fm" event={"ID":"91f1af8b-22e5-45b9-ac16-6696840485e4","Type":"ContainerDied","Data":"acb1305ea593365b2079387184ef6e638eb0b7028c698ac2fc7d8e1e5e90ee44"} Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.110280 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14a1e4fa-5a60-47c5-a2da-e57110ca0b57-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.110309 4811 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/14a1e4fa-5a60-47c5-a2da-e57110ca0b57-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.110319 4811 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/14a1e4fa-5a60-47c5-a2da-e57110ca0b57-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.110327 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmtqw\" (UniqueName: \"kubernetes.io/projected/14a1e4fa-5a60-47c5-a2da-e57110ca0b57-kube-api-access-cmtqw\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.110337 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14a1e4fa-5a60-47c5-a2da-e57110ca0b57-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.110346 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14a1e4fa-5a60-47c5-a2da-e57110ca0b57-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.309336 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 09:21:30 crc kubenswrapper[4811]: E0122 09:21:30.309597 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14a1e4fa-5a60-47c5-a2da-e57110ca0b57" containerName="keystone-bootstrap" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.309614 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="14a1e4fa-5a60-47c5-a2da-e57110ca0b57" containerName="keystone-bootstrap" Jan 22 09:21:30 crc kubenswrapper[4811]: E0122 09:21:30.309644 4811 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cd5889f-63d3-47a0-8b17-e8ffac0011d3" containerName="cinder-db-sync" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.309650 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cd5889f-63d3-47a0-8b17-e8ffac0011d3" containerName="cinder-db-sync" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.309782 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cd5889f-63d3-47a0-8b17-e8ffac0011d3" containerName="cinder-db-sync" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.309799 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="14a1e4fa-5a60-47c5-a2da-e57110ca0b57" containerName="keystone-bootstrap" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.310456 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.324449 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.324734 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-j2wv7" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.324867 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.325034 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.366788 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.429676 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/80f3e64e-0e9f-49b8-9c7e-78e08634ba92-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"80f3e64e-0e9f-49b8-9c7e-78e08634ba92\") " pod="openstack/cinder-scheduler-0" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.429752 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80f3e64e-0e9f-49b8-9c7e-78e08634ba92-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"80f3e64e-0e9f-49b8-9c7e-78e08634ba92\") " pod="openstack/cinder-scheduler-0" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.429768 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvk5z\" (UniqueName: \"kubernetes.io/projected/80f3e64e-0e9f-49b8-9c7e-78e08634ba92-kube-api-access-xvk5z\") pod \"cinder-scheduler-0\" (UID: \"80f3e64e-0e9f-49b8-9c7e-78e08634ba92\") " pod="openstack/cinder-scheduler-0" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.429865 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80f3e64e-0e9f-49b8-9c7e-78e08634ba92-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"80f3e64e-0e9f-49b8-9c7e-78e08634ba92\") " pod="openstack/cinder-scheduler-0" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.429889 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/80f3e64e-0e9f-49b8-9c7e-78e08634ba92-config-data\") pod \"cinder-scheduler-0\" (UID: \"80f3e64e-0e9f-49b8-9c7e-78e08634ba92\") " pod="openstack/cinder-scheduler-0" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.430140 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80f3e64e-0e9f-49b8-9c7e-78e08634ba92-scripts\") pod \"cinder-scheduler-0\" (UID: \"80f3e64e-0e9f-49b8-9c7e-78e08634ba92\") " pod="openstack/cinder-scheduler-0" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.451675 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-744f57c8cc-f74wj"] Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.459095 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7474d577dc-bfmkd"] Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.462877 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7474d577dc-bfmkd" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.533459 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/80f3e64e-0e9f-49b8-9c7e-78e08634ba92-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"80f3e64e-0e9f-49b8-9c7e-78e08634ba92\") " pod="openstack/cinder-scheduler-0" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.533670 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80f3e64e-0e9f-49b8-9c7e-78e08634ba92-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"80f3e64e-0e9f-49b8-9c7e-78e08634ba92\") " pod="openstack/cinder-scheduler-0" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.533751 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvk5z\" (UniqueName: \"kubernetes.io/projected/80f3e64e-0e9f-49b8-9c7e-78e08634ba92-kube-api-access-xvk5z\") pod \"cinder-scheduler-0\" (UID: \"80f3e64e-0e9f-49b8-9c7e-78e08634ba92\") " pod="openstack/cinder-scheduler-0" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.533902 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80f3e64e-0e9f-49b8-9c7e-78e08634ba92-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"80f3e64e-0e9f-49b8-9c7e-78e08634ba92\") " pod="openstack/cinder-scheduler-0" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.534002 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80f3e64e-0e9f-49b8-9c7e-78e08634ba92-config-data\") pod \"cinder-scheduler-0\" (UID: \"80f3e64e-0e9f-49b8-9c7e-78e08634ba92\") " pod="openstack/cinder-scheduler-0" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.534123 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80f3e64e-0e9f-49b8-9c7e-78e08634ba92-scripts\") pod \"cinder-scheduler-0\" (UID: \"80f3e64e-0e9f-49b8-9c7e-78e08634ba92\") " pod="openstack/cinder-scheduler-0" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.535122 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/80f3e64e-0e9f-49b8-9c7e-78e08634ba92-etc-machine-id\") pod \"cinder-scheduler-0\" 
(UID: \"80f3e64e-0e9f-49b8-9c7e-78e08634ba92\") " pod="openstack/cinder-scheduler-0" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.540576 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7474d577dc-bfmkd"] Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.545910 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80f3e64e-0e9f-49b8-9c7e-78e08634ba92-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"80f3e64e-0e9f-49b8-9c7e-78e08634ba92\") " pod="openstack/cinder-scheduler-0" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.546453 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80f3e64e-0e9f-49b8-9c7e-78e08634ba92-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"80f3e64e-0e9f-49b8-9c7e-78e08634ba92\") " pod="openstack/cinder-scheduler-0" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.546592 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80f3e64e-0e9f-49b8-9c7e-78e08634ba92-config-data\") pod \"cinder-scheduler-0\" (UID: \"80f3e64e-0e9f-49b8-9c7e-78e08634ba92\") " pod="openstack/cinder-scheduler-0" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.548027 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80f3e64e-0e9f-49b8-9c7e-78e08634ba92-scripts\") pod \"cinder-scheduler-0\" (UID: \"80f3e64e-0e9f-49b8-9c7e-78e08634ba92\") " pod="openstack/cinder-scheduler-0" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.553446 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvk5z\" (UniqueName: \"kubernetes.io/projected/80f3e64e-0e9f-49b8-9c7e-78e08634ba92-kube-api-access-xvk5z\") pod \"cinder-scheduler-0\" (UID: \"80f3e64e-0e9f-49b8-9c7e-78e08634ba92\") " pod="openstack/cinder-scheduler-0" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.596823 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.600468 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.604363 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.616838 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.644068 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1831d8da-ec75-458b-b289-5119052f2216-dns-svc\") pod \"dnsmasq-dns-7474d577dc-bfmkd\" (UID: \"1831d8da-ec75-458b-b289-5119052f2216\") " pod="openstack/dnsmasq-dns-7474d577dc-bfmkd" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.644354 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1831d8da-ec75-458b-b289-5119052f2216-config\") pod \"dnsmasq-dns-7474d577dc-bfmkd\" (UID: \"1831d8da-ec75-458b-b289-5119052f2216\") " pod="openstack/dnsmasq-dns-7474d577dc-bfmkd" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.644488 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1831d8da-ec75-458b-b289-5119052f2216-ovsdbserver-nb\") pod \"dnsmasq-dns-7474d577dc-bfmkd\" (UID: \"1831d8da-ec75-458b-b289-5119052f2216\") " pod="openstack/dnsmasq-dns-7474d577dc-bfmkd" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.644779 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1831d8da-ec75-458b-b289-5119052f2216-ovsdbserver-sb\") pod \"dnsmasq-dns-7474d577dc-bfmkd\" (UID: \"1831d8da-ec75-458b-b289-5119052f2216\") " pod="openstack/dnsmasq-dns-7474d577dc-bfmkd" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.644855 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wrz6\" (UniqueName: \"kubernetes.io/projected/1831d8da-ec75-458b-b289-5119052f2216-kube-api-access-9wrz6\") pod \"dnsmasq-dns-7474d577dc-bfmkd\" (UID: \"1831d8da-ec75-458b-b289-5119052f2216\") " pod="openstack/dnsmasq-dns-7474d577dc-bfmkd" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.689702 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.745968 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9d0235ec-7dc3-44f6-ba77-171db91d88b3-config-data-custom\") pod \"cinder-api-0\" (UID: \"9d0235ec-7dc3-44f6-ba77-171db91d88b3\") " pod="openstack/cinder-api-0" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.746038 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d0235ec-7dc3-44f6-ba77-171db91d88b3-logs\") pod \"cinder-api-0\" (UID: \"9d0235ec-7dc3-44f6-ba77-171db91d88b3\") " pod="openstack/cinder-api-0" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.746068 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d0235ec-7dc3-44f6-ba77-171db91d88b3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9d0235ec-7dc3-44f6-ba77-171db91d88b3\") " pod="openstack/cinder-api-0" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.746115 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d0235ec-7dc3-44f6-ba77-171db91d88b3-config-data\") pod \"cinder-api-0\" (UID: \"9d0235ec-7dc3-44f6-ba77-171db91d88b3\") " pod="openstack/cinder-api-0" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.746179 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wrz6\" (UniqueName: \"kubernetes.io/projected/1831d8da-ec75-458b-b289-5119052f2216-kube-api-access-9wrz6\") pod \"dnsmasq-dns-7474d577dc-bfmkd\" (UID: \"1831d8da-ec75-458b-b289-5119052f2216\") " pod="openstack/dnsmasq-dns-7474d577dc-bfmkd" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.746196 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1831d8da-ec75-458b-b289-5119052f2216-ovsdbserver-sb\") pod \"dnsmasq-dns-7474d577dc-bfmkd\" (UID: \"1831d8da-ec75-458b-b289-5119052f2216\") " pod="openstack/dnsmasq-dns-7474d577dc-bfmkd" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.746257 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46ls5\" (UniqueName: \"kubernetes.io/projected/9d0235ec-7dc3-44f6-ba77-171db91d88b3-kube-api-access-46ls5\") pod \"cinder-api-0\" (UID: \"9d0235ec-7dc3-44f6-ba77-171db91d88b3\") " pod="openstack/cinder-api-0" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.746279 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d0235ec-7dc3-44f6-ba77-171db91d88b3-scripts\") pod \"cinder-api-0\" (UID: \"9d0235ec-7dc3-44f6-ba77-171db91d88b3\") " pod="openstack/cinder-api-0" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.746301 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d0235ec-7dc3-44f6-ba77-171db91d88b3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9d0235ec-7dc3-44f6-ba77-171db91d88b3\") " pod="openstack/cinder-api-0" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.746335 4811 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1831d8da-ec75-458b-b289-5119052f2216-dns-svc\") pod \"dnsmasq-dns-7474d577dc-bfmkd\" (UID: \"1831d8da-ec75-458b-b289-5119052f2216\") " pod="openstack/dnsmasq-dns-7474d577dc-bfmkd" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.746352 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1831d8da-ec75-458b-b289-5119052f2216-config\") pod \"dnsmasq-dns-7474d577dc-bfmkd\" (UID: \"1831d8da-ec75-458b-b289-5119052f2216\") " pod="openstack/dnsmasq-dns-7474d577dc-bfmkd" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.746369 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1831d8da-ec75-458b-b289-5119052f2216-ovsdbserver-nb\") pod \"dnsmasq-dns-7474d577dc-bfmkd\" (UID: \"1831d8da-ec75-458b-b289-5119052f2216\") " pod="openstack/dnsmasq-dns-7474d577dc-bfmkd" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.747204 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1831d8da-ec75-458b-b289-5119052f2216-ovsdbserver-nb\") pod \"dnsmasq-dns-7474d577dc-bfmkd\" (UID: \"1831d8da-ec75-458b-b289-5119052f2216\") " pod="openstack/dnsmasq-dns-7474d577dc-bfmkd" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.748001 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1831d8da-ec75-458b-b289-5119052f2216-ovsdbserver-sb\") pod \"dnsmasq-dns-7474d577dc-bfmkd\" (UID: \"1831d8da-ec75-458b-b289-5119052f2216\") " pod="openstack/dnsmasq-dns-7474d577dc-bfmkd" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.748523 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1831d8da-ec75-458b-b289-5119052f2216-dns-svc\") pod \"dnsmasq-dns-7474d577dc-bfmkd\" (UID: \"1831d8da-ec75-458b-b289-5119052f2216\") " pod="openstack/dnsmasq-dns-7474d577dc-bfmkd" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.748732 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1831d8da-ec75-458b-b289-5119052f2216-config\") pod \"dnsmasq-dns-7474d577dc-bfmkd\" (UID: \"1831d8da-ec75-458b-b289-5119052f2216\") " pod="openstack/dnsmasq-dns-7474d577dc-bfmkd" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.768661 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wrz6\" (UniqueName: \"kubernetes.io/projected/1831d8da-ec75-458b-b289-5119052f2216-kube-api-access-9wrz6\") pod \"dnsmasq-dns-7474d577dc-bfmkd\" (UID: \"1831d8da-ec75-458b-b289-5119052f2216\") " pod="openstack/dnsmasq-dns-7474d577dc-bfmkd" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.785430 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7474d577dc-bfmkd" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.847817 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46ls5\" (UniqueName: \"kubernetes.io/projected/9d0235ec-7dc3-44f6-ba77-171db91d88b3-kube-api-access-46ls5\") pod \"cinder-api-0\" (UID: \"9d0235ec-7dc3-44f6-ba77-171db91d88b3\") " pod="openstack/cinder-api-0" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.847864 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d0235ec-7dc3-44f6-ba77-171db91d88b3-scripts\") pod \"cinder-api-0\" (UID: \"9d0235ec-7dc3-44f6-ba77-171db91d88b3\") " pod="openstack/cinder-api-0" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.847898 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d0235ec-7dc3-44f6-ba77-171db91d88b3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9d0235ec-7dc3-44f6-ba77-171db91d88b3\") " pod="openstack/cinder-api-0" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.847966 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9d0235ec-7dc3-44f6-ba77-171db91d88b3-config-data-custom\") pod \"cinder-api-0\" (UID: \"9d0235ec-7dc3-44f6-ba77-171db91d88b3\") " pod="openstack/cinder-api-0" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.847999 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d0235ec-7dc3-44f6-ba77-171db91d88b3-logs\") pod \"cinder-api-0\" (UID: \"9d0235ec-7dc3-44f6-ba77-171db91d88b3\") " pod="openstack/cinder-api-0" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.848023 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d0235ec-7dc3-44f6-ba77-171db91d88b3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9d0235ec-7dc3-44f6-ba77-171db91d88b3\") " pod="openstack/cinder-api-0" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.848052 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d0235ec-7dc3-44f6-ba77-171db91d88b3-config-data\") pod \"cinder-api-0\" (UID: \"9d0235ec-7dc3-44f6-ba77-171db91d88b3\") " pod="openstack/cinder-api-0" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.848586 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d0235ec-7dc3-44f6-ba77-171db91d88b3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9d0235ec-7dc3-44f6-ba77-171db91d88b3\") " pod="openstack/cinder-api-0" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.848932 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d0235ec-7dc3-44f6-ba77-171db91d88b3-logs\") pod \"cinder-api-0\" (UID: \"9d0235ec-7dc3-44f6-ba77-171db91d88b3\") " pod="openstack/cinder-api-0" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.852162 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d0235ec-7dc3-44f6-ba77-171db91d88b3-config-data\") pod \"cinder-api-0\" (UID: \"9d0235ec-7dc3-44f6-ba77-171db91d88b3\") " 
pod="openstack/cinder-api-0" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.855868 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d0235ec-7dc3-44f6-ba77-171db91d88b3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9d0235ec-7dc3-44f6-ba77-171db91d88b3\") " pod="openstack/cinder-api-0" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.878104 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d0235ec-7dc3-44f6-ba77-171db91d88b3-scripts\") pod \"cinder-api-0\" (UID: \"9d0235ec-7dc3-44f6-ba77-171db91d88b3\") " pod="openstack/cinder-api-0" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.879117 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9d0235ec-7dc3-44f6-ba77-171db91d88b3-config-data-custom\") pod \"cinder-api-0\" (UID: \"9d0235ec-7dc3-44f6-ba77-171db91d88b3\") " pod="openstack/cinder-api-0" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.884083 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46ls5\" (UniqueName: \"kubernetes.io/projected/9d0235ec-7dc3-44f6-ba77-171db91d88b3-kube-api-access-46ls5\") pod \"cinder-api-0\" (UID: \"9d0235ec-7dc3-44f6-ba77-171db91d88b3\") " pod="openstack/cinder-api-0" Jan 22 09:21:30 crc kubenswrapper[4811]: I0122 09:21:30.977123 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 22 09:21:31 crc kubenswrapper[4811]: I0122 09:21:31.027285 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5bf9c84c75-rbgml"] Jan 22 09:21:31 crc kubenswrapper[4811]: I0122 09:21:31.028260 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5bf9c84c75-rbgml" Jan 22 09:21:31 crc kubenswrapper[4811]: I0122 09:21:31.031525 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 22 09:21:31 crc kubenswrapper[4811]: I0122 09:21:31.031649 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 22 09:21:31 crc kubenswrapper[4811]: I0122 09:21:31.031761 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 22 09:21:31 crc kubenswrapper[4811]: I0122 09:21:31.031794 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 22 09:21:31 crc kubenswrapper[4811]: I0122 09:21:31.031703 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 22 09:21:31 crc kubenswrapper[4811]: I0122 09:21:31.031711 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9nlpl" Jan 22 09:21:31 crc kubenswrapper[4811]: I0122 09:21:31.047654 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5bf9c84c75-rbgml"] Jan 22 09:21:31 crc kubenswrapper[4811]: I0122 09:21:31.155519 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/42068723-76f8-4a1a-8210-f0a70f10897a-credential-keys\") pod \"keystone-5bf9c84c75-rbgml\" (UID: \"42068723-76f8-4a1a-8210-f0a70f10897a\") " pod="openstack/keystone-5bf9c84c75-rbgml" Jan 22 09:21:31 crc kubenswrapper[4811]: I0122 09:21:31.155588 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/42068723-76f8-4a1a-8210-f0a70f10897a-public-tls-certs\") pod \"keystone-5bf9c84c75-rbgml\" (UID: \"42068723-76f8-4a1a-8210-f0a70f10897a\") " pod="openstack/keystone-5bf9c84c75-rbgml" Jan 22 09:21:31 crc kubenswrapper[4811]: I0122 09:21:31.155790 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jt79\" (UniqueName: \"kubernetes.io/projected/42068723-76f8-4a1a-8210-f0a70f10897a-kube-api-access-8jt79\") pod \"keystone-5bf9c84c75-rbgml\" (UID: \"42068723-76f8-4a1a-8210-f0a70f10897a\") " pod="openstack/keystone-5bf9c84c75-rbgml" Jan 22 09:21:31 crc kubenswrapper[4811]: I0122 09:21:31.156002 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42068723-76f8-4a1a-8210-f0a70f10897a-config-data\") pod \"keystone-5bf9c84c75-rbgml\" (UID: \"42068723-76f8-4a1a-8210-f0a70f10897a\") " pod="openstack/keystone-5bf9c84c75-rbgml" Jan 22 09:21:31 crc kubenswrapper[4811]: I0122 09:21:31.156169 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42068723-76f8-4a1a-8210-f0a70f10897a-combined-ca-bundle\") pod \"keystone-5bf9c84c75-rbgml\" (UID: \"42068723-76f8-4a1a-8210-f0a70f10897a\") " pod="openstack/keystone-5bf9c84c75-rbgml" Jan 22 09:21:31 crc kubenswrapper[4811]: I0122 09:21:31.156209 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42068723-76f8-4a1a-8210-f0a70f10897a-scripts\") pod \"keystone-5bf9c84c75-rbgml\" (UID: 
\"42068723-76f8-4a1a-8210-f0a70f10897a\") " pod="openstack/keystone-5bf9c84c75-rbgml" Jan 22 09:21:31 crc kubenswrapper[4811]: I0122 09:21:31.156269 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/42068723-76f8-4a1a-8210-f0a70f10897a-fernet-keys\") pod \"keystone-5bf9c84c75-rbgml\" (UID: \"42068723-76f8-4a1a-8210-f0a70f10897a\") " pod="openstack/keystone-5bf9c84c75-rbgml" Jan 22 09:21:31 crc kubenswrapper[4811]: I0122 09:21:31.156310 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/42068723-76f8-4a1a-8210-f0a70f10897a-internal-tls-certs\") pod \"keystone-5bf9c84c75-rbgml\" (UID: \"42068723-76f8-4a1a-8210-f0a70f10897a\") " pod="openstack/keystone-5bf9c84c75-rbgml" Jan 22 09:21:31 crc kubenswrapper[4811]: I0122 09:21:31.259145 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42068723-76f8-4a1a-8210-f0a70f10897a-combined-ca-bundle\") pod \"keystone-5bf9c84c75-rbgml\" (UID: \"42068723-76f8-4a1a-8210-f0a70f10897a\") " pod="openstack/keystone-5bf9c84c75-rbgml" Jan 22 09:21:31 crc kubenswrapper[4811]: I0122 09:21:31.259542 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42068723-76f8-4a1a-8210-f0a70f10897a-scripts\") pod \"keystone-5bf9c84c75-rbgml\" (UID: \"42068723-76f8-4a1a-8210-f0a70f10897a\") " pod="openstack/keystone-5bf9c84c75-rbgml" Jan 22 09:21:31 crc kubenswrapper[4811]: I0122 09:21:31.259594 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/42068723-76f8-4a1a-8210-f0a70f10897a-fernet-keys\") pod \"keystone-5bf9c84c75-rbgml\" (UID: \"42068723-76f8-4a1a-8210-f0a70f10897a\") " pod="openstack/keystone-5bf9c84c75-rbgml" Jan 22 09:21:31 crc kubenswrapper[4811]: I0122 09:21:31.259638 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/42068723-76f8-4a1a-8210-f0a70f10897a-internal-tls-certs\") pod \"keystone-5bf9c84c75-rbgml\" (UID: \"42068723-76f8-4a1a-8210-f0a70f10897a\") " pod="openstack/keystone-5bf9c84c75-rbgml" Jan 22 09:21:31 crc kubenswrapper[4811]: I0122 09:21:31.259728 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/42068723-76f8-4a1a-8210-f0a70f10897a-credential-keys\") pod \"keystone-5bf9c84c75-rbgml\" (UID: \"42068723-76f8-4a1a-8210-f0a70f10897a\") " pod="openstack/keystone-5bf9c84c75-rbgml" Jan 22 09:21:31 crc kubenswrapper[4811]: I0122 09:21:31.259790 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/42068723-76f8-4a1a-8210-f0a70f10897a-public-tls-certs\") pod \"keystone-5bf9c84c75-rbgml\" (UID: \"42068723-76f8-4a1a-8210-f0a70f10897a\") " pod="openstack/keystone-5bf9c84c75-rbgml" Jan 22 09:21:31 crc kubenswrapper[4811]: I0122 09:21:31.259854 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jt79\" (UniqueName: \"kubernetes.io/projected/42068723-76f8-4a1a-8210-f0a70f10897a-kube-api-access-8jt79\") pod \"keystone-5bf9c84c75-rbgml\" (UID: \"42068723-76f8-4a1a-8210-f0a70f10897a\") " 
pod="openstack/keystone-5bf9c84c75-rbgml" Jan 22 09:21:31 crc kubenswrapper[4811]: I0122 09:21:31.259941 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42068723-76f8-4a1a-8210-f0a70f10897a-config-data\") pod \"keystone-5bf9c84c75-rbgml\" (UID: \"42068723-76f8-4a1a-8210-f0a70f10897a\") " pod="openstack/keystone-5bf9c84c75-rbgml" Jan 22 09:21:31 crc kubenswrapper[4811]: I0122 09:21:31.264840 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/42068723-76f8-4a1a-8210-f0a70f10897a-credential-keys\") pod \"keystone-5bf9c84c75-rbgml\" (UID: \"42068723-76f8-4a1a-8210-f0a70f10897a\") " pod="openstack/keystone-5bf9c84c75-rbgml" Jan 22 09:21:31 crc kubenswrapper[4811]: I0122 09:21:31.265127 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/42068723-76f8-4a1a-8210-f0a70f10897a-fernet-keys\") pod \"keystone-5bf9c84c75-rbgml\" (UID: \"42068723-76f8-4a1a-8210-f0a70f10897a\") " pod="openstack/keystone-5bf9c84c75-rbgml" Jan 22 09:21:31 crc kubenswrapper[4811]: I0122 09:21:31.267118 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/42068723-76f8-4a1a-8210-f0a70f10897a-internal-tls-certs\") pod \"keystone-5bf9c84c75-rbgml\" (UID: \"42068723-76f8-4a1a-8210-f0a70f10897a\") " pod="openstack/keystone-5bf9c84c75-rbgml" Jan 22 09:21:31 crc kubenswrapper[4811]: I0122 09:21:31.269263 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42068723-76f8-4a1a-8210-f0a70f10897a-combined-ca-bundle\") pod \"keystone-5bf9c84c75-rbgml\" (UID: \"42068723-76f8-4a1a-8210-f0a70f10897a\") " pod="openstack/keystone-5bf9c84c75-rbgml" Jan 22 09:21:31 crc kubenswrapper[4811]: I0122 09:21:31.271076 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42068723-76f8-4a1a-8210-f0a70f10897a-config-data\") pod \"keystone-5bf9c84c75-rbgml\" (UID: \"42068723-76f8-4a1a-8210-f0a70f10897a\") " pod="openstack/keystone-5bf9c84c75-rbgml" Jan 22 09:21:31 crc kubenswrapper[4811]: I0122 09:21:31.278085 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42068723-76f8-4a1a-8210-f0a70f10897a-scripts\") pod \"keystone-5bf9c84c75-rbgml\" (UID: \"42068723-76f8-4a1a-8210-f0a70f10897a\") " pod="openstack/keystone-5bf9c84c75-rbgml" Jan 22 09:21:31 crc kubenswrapper[4811]: I0122 09:21:31.279034 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/42068723-76f8-4a1a-8210-f0a70f10897a-public-tls-certs\") pod \"keystone-5bf9c84c75-rbgml\" (UID: \"42068723-76f8-4a1a-8210-f0a70f10897a\") " pod="openstack/keystone-5bf9c84c75-rbgml" Jan 22 09:21:31 crc kubenswrapper[4811]: I0122 09:21:31.281267 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jt79\" (UniqueName: \"kubernetes.io/projected/42068723-76f8-4a1a-8210-f0a70f10897a-kube-api-access-8jt79\") pod \"keystone-5bf9c84c75-rbgml\" (UID: \"42068723-76f8-4a1a-8210-f0a70f10897a\") " pod="openstack/keystone-5bf9c84c75-rbgml" Jan 22 09:21:31 crc kubenswrapper[4811]: I0122 09:21:31.352385 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5bf9c84c75-rbgml" Jan 22 09:21:33 crc kubenswrapper[4811]: I0122 09:21:33.383001 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 22 09:21:34 crc kubenswrapper[4811]: I0122 09:21:34.949727 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-rnmr4" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.037331 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-548894858c-nn2fm" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.147198 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7z4fm\" (UniqueName: \"kubernetes.io/projected/91f1af8b-22e5-45b9-ac16-6696840485e4-kube-api-access-7z4fm\") pod \"91f1af8b-22e5-45b9-ac16-6696840485e4\" (UID: \"91f1af8b-22e5-45b9-ac16-6696840485e4\") " Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.147255 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b642751-b1e4-4488-b305-fed7f4fcd9fa-logs\") pod \"7b642751-b1e4-4488-b305-fed7f4fcd9fa\" (UID: \"7b642751-b1e4-4488-b305-fed7f4fcd9fa\") " Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.147329 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91f1af8b-22e5-45b9-ac16-6696840485e4-dns-svc\") pod \"91f1af8b-22e5-45b9-ac16-6696840485e4\" (UID: \"91f1af8b-22e5-45b9-ac16-6696840485e4\") " Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.147369 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b642751-b1e4-4488-b305-fed7f4fcd9fa-config-data\") pod \"7b642751-b1e4-4488-b305-fed7f4fcd9fa\" (UID: \"7b642751-b1e4-4488-b305-fed7f4fcd9fa\") " Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.147405 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91f1af8b-22e5-45b9-ac16-6696840485e4-config\") pod \"91f1af8b-22e5-45b9-ac16-6696840485e4\" (UID: \"91f1af8b-22e5-45b9-ac16-6696840485e4\") " Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.147422 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91f1af8b-22e5-45b9-ac16-6696840485e4-ovsdbserver-sb\") pod \"91f1af8b-22e5-45b9-ac16-6696840485e4\" (UID: \"91f1af8b-22e5-45b9-ac16-6696840485e4\") " Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.147458 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dp2c\" (UniqueName: \"kubernetes.io/projected/7b642751-b1e4-4488-b305-fed7f4fcd9fa-kube-api-access-9dp2c\") pod \"7b642751-b1e4-4488-b305-fed7f4fcd9fa\" (UID: \"7b642751-b1e4-4488-b305-fed7f4fcd9fa\") " Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.147594 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b642751-b1e4-4488-b305-fed7f4fcd9fa-combined-ca-bundle\") pod \"7b642751-b1e4-4488-b305-fed7f4fcd9fa\" (UID: \"7b642751-b1e4-4488-b305-fed7f4fcd9fa\") " Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.147660 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/7b642751-b1e4-4488-b305-fed7f4fcd9fa-scripts\") pod \"7b642751-b1e4-4488-b305-fed7f4fcd9fa\" (UID: \"7b642751-b1e4-4488-b305-fed7f4fcd9fa\") " Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.147675 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91f1af8b-22e5-45b9-ac16-6696840485e4-ovsdbserver-nb\") pod \"91f1af8b-22e5-45b9-ac16-6696840485e4\" (UID: \"91f1af8b-22e5-45b9-ac16-6696840485e4\") " Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.148881 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b642751-b1e4-4488-b305-fed7f4fcd9fa-logs" (OuterVolumeSpecName: "logs") pod "7b642751-b1e4-4488-b305-fed7f4fcd9fa" (UID: "7b642751-b1e4-4488-b305-fed7f4fcd9fa"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.153803 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91f1af8b-22e5-45b9-ac16-6696840485e4-kube-api-access-7z4fm" (OuterVolumeSpecName: "kube-api-access-7z4fm") pod "91f1af8b-22e5-45b9-ac16-6696840485e4" (UID: "91f1af8b-22e5-45b9-ac16-6696840485e4"). InnerVolumeSpecName "kube-api-access-7z4fm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.163066 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b642751-b1e4-4488-b305-fed7f4fcd9fa-kube-api-access-9dp2c" (OuterVolumeSpecName: "kube-api-access-9dp2c") pod "7b642751-b1e4-4488-b305-fed7f4fcd9fa" (UID: "7b642751-b1e4-4488-b305-fed7f4fcd9fa"). InnerVolumeSpecName "kube-api-access-9dp2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.163335 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7z4fm\" (UniqueName: \"kubernetes.io/projected/91f1af8b-22e5-45b9-ac16-6696840485e4-kube-api-access-7z4fm\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.164240 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b642751-b1e4-4488-b305-fed7f4fcd9fa-logs\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.211833 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b642751-b1e4-4488-b305-fed7f4fcd9fa-scripts" (OuterVolumeSpecName: "scripts") pod "7b642751-b1e4-4488-b305-fed7f4fcd9fa" (UID: "7b642751-b1e4-4488-b305-fed7f4fcd9fa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.228576 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rnmr4" event={"ID":"7b642751-b1e4-4488-b305-fed7f4fcd9fa","Type":"ContainerDied","Data":"c8e2ffb6e740654e0fafdd010d459fc712427c7a78b0a1d873c07ceb7cebd5dc"} Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.228639 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8e2ffb6e740654e0fafdd010d459fc712427c7a78b0a1d873c07ceb7cebd5dc" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.228702 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-rnmr4" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.235117 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-548894858c-nn2fm" event={"ID":"91f1af8b-22e5-45b9-ac16-6696840485e4","Type":"ContainerDied","Data":"e7c2939caf45e2d57c2f42a8edb1656eba34a5c5e8c51aa5067a157b50954f2f"} Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.235168 4811 scope.go:117] "RemoveContainer" containerID="acb1305ea593365b2079387184ef6e638eb0b7028c698ac2fc7d8e1e5e90ee44" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.235299 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-548894858c-nn2fm" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.246580 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b642751-b1e4-4488-b305-fed7f4fcd9fa-config-data" (OuterVolumeSpecName: "config-data") pod "7b642751-b1e4-4488-b305-fed7f4fcd9fa" (UID: "7b642751-b1e4-4488-b305-fed7f4fcd9fa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.265832 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dp2c\" (UniqueName: \"kubernetes.io/projected/7b642751-b1e4-4488-b305-fed7f4fcd9fa-kube-api-access-9dp2c\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.265858 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b642751-b1e4-4488-b305-fed7f4fcd9fa-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.265867 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b642751-b1e4-4488-b305-fed7f4fcd9fa-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.269858 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91f1af8b-22e5-45b9-ac16-6696840485e4-config" (OuterVolumeSpecName: "config") pod "91f1af8b-22e5-45b9-ac16-6696840485e4" (UID: "91f1af8b-22e5-45b9-ac16-6696840485e4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.363182 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7c8cfbdccc-5kmj8"] Jan 22 09:21:35 crc kubenswrapper[4811]: E0122 09:21:35.363537 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b642751-b1e4-4488-b305-fed7f4fcd9fa" containerName="placement-db-sync" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.363550 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b642751-b1e4-4488-b305-fed7f4fcd9fa" containerName="placement-db-sync" Jan 22 09:21:35 crc kubenswrapper[4811]: E0122 09:21:35.363565 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91f1af8b-22e5-45b9-ac16-6696840485e4" containerName="dnsmasq-dns" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.363571 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="91f1af8b-22e5-45b9-ac16-6696840485e4" containerName="dnsmasq-dns" Jan 22 09:21:35 crc kubenswrapper[4811]: E0122 09:21:35.363584 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91f1af8b-22e5-45b9-ac16-6696840485e4" containerName="init" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.363590 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="91f1af8b-22e5-45b9-ac16-6696840485e4" containerName="init" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.363734 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="91f1af8b-22e5-45b9-ac16-6696840485e4" containerName="dnsmasq-dns" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.363763 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b642751-b1e4-4488-b305-fed7f4fcd9fa" containerName="placement-db-sync" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.364529 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7c8cfbdccc-5kmj8" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.370771 4811 scope.go:117] "RemoveContainer" containerID="d2cddbbc31452a4aa28c7399cc73e16401277ab8a9e9fe9ba00888f4798a59d9" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.373132 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/220f6baa-23c9-4cf8-b91f-5245734fc341-logs\") pod \"barbican-worker-7c8cfbdccc-5kmj8\" (UID: \"220f6baa-23c9-4cf8-b91f-5245734fc341\") " pod="openstack/barbican-worker-7c8cfbdccc-5kmj8" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.373181 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/220f6baa-23c9-4cf8-b91f-5245734fc341-combined-ca-bundle\") pod \"barbican-worker-7c8cfbdccc-5kmj8\" (UID: \"220f6baa-23c9-4cf8-b91f-5245734fc341\") " pod="openstack/barbican-worker-7c8cfbdccc-5kmj8" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.373293 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/220f6baa-23c9-4cf8-b91f-5245734fc341-config-data\") pod \"barbican-worker-7c8cfbdccc-5kmj8\" (UID: \"220f6baa-23c9-4cf8-b91f-5245734fc341\") " pod="openstack/barbican-worker-7c8cfbdccc-5kmj8" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.373338 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4sdx\" (UniqueName: \"kubernetes.io/projected/220f6baa-23c9-4cf8-b91f-5245734fc341-kube-api-access-d4sdx\") pod \"barbican-worker-7c8cfbdccc-5kmj8\" (UID: \"220f6baa-23c9-4cf8-b91f-5245734fc341\") " pod="openstack/barbican-worker-7c8cfbdccc-5kmj8" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.373359 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/220f6baa-23c9-4cf8-b91f-5245734fc341-config-data-custom\") pod \"barbican-worker-7c8cfbdccc-5kmj8\" (UID: \"220f6baa-23c9-4cf8-b91f-5245734fc341\") " pod="openstack/barbican-worker-7c8cfbdccc-5kmj8" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.373417 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91f1af8b-22e5-45b9-ac16-6696840485e4-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.382731 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b642751-b1e4-4488-b305-fed7f4fcd9fa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b642751-b1e4-4488-b305-fed7f4fcd9fa" (UID: "7b642751-b1e4-4488-b305-fed7f4fcd9fa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.392118 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5f966bcc4-4n44q"] Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.393455 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5f966bcc4-4n44q" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.409901 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91f1af8b-22e5-45b9-ac16-6696840485e4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "91f1af8b-22e5-45b9-ac16-6696840485e4" (UID: "91f1af8b-22e5-45b9-ac16-6696840485e4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.417673 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7c8cfbdccc-5kmj8"] Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.425069 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91f1af8b-22e5-45b9-ac16-6696840485e4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "91f1af8b-22e5-45b9-ac16-6696840485e4" (UID: "91f1af8b-22e5-45b9-ac16-6696840485e4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.440672 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5f966bcc4-4n44q"] Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.455090 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91f1af8b-22e5-45b9-ac16-6696840485e4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "91f1af8b-22e5-45b9-ac16-6696840485e4" (UID: "91f1af8b-22e5-45b9-ac16-6696840485e4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.476553 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4sdx\" (UniqueName: \"kubernetes.io/projected/220f6baa-23c9-4cf8-b91f-5245734fc341-kube-api-access-d4sdx\") pod \"barbican-worker-7c8cfbdccc-5kmj8\" (UID: \"220f6baa-23c9-4cf8-b91f-5245734fc341\") " pod="openstack/barbican-worker-7c8cfbdccc-5kmj8" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.476652 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/220f6baa-23c9-4cf8-b91f-5245734fc341-config-data-custom\") pod \"barbican-worker-7c8cfbdccc-5kmj8\" (UID: \"220f6baa-23c9-4cf8-b91f-5245734fc341\") " pod="openstack/barbican-worker-7c8cfbdccc-5kmj8" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.476780 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/220f6baa-23c9-4cf8-b91f-5245734fc341-logs\") pod \"barbican-worker-7c8cfbdccc-5kmj8\" (UID: \"220f6baa-23c9-4cf8-b91f-5245734fc341\") " pod="openstack/barbican-worker-7c8cfbdccc-5kmj8" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.476835 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/220f6baa-23c9-4cf8-b91f-5245734fc341-combined-ca-bundle\") pod \"barbican-worker-7c8cfbdccc-5kmj8\" (UID: \"220f6baa-23c9-4cf8-b91f-5245734fc341\") " pod="openstack/barbican-worker-7c8cfbdccc-5kmj8" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.476959 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/220f6baa-23c9-4cf8-b91f-5245734fc341-config-data\") pod \"barbican-worker-7c8cfbdccc-5kmj8\" (UID: \"220f6baa-23c9-4cf8-b91f-5245734fc341\") " pod="openstack/barbican-worker-7c8cfbdccc-5kmj8" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.478075 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b642751-b1e4-4488-b305-fed7f4fcd9fa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.480114 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/220f6baa-23c9-4cf8-b91f-5245734fc341-logs\") pod \"barbican-worker-7c8cfbdccc-5kmj8\" (UID: \"220f6baa-23c9-4cf8-b91f-5245734fc341\") " pod="openstack/barbican-worker-7c8cfbdccc-5kmj8" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.480174 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91f1af8b-22e5-45b9-ac16-6696840485e4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.480198 4811 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91f1af8b-22e5-45b9-ac16-6696840485e4-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.480220 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91f1af8b-22e5-45b9-ac16-6696840485e4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.503876 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.504231 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.504271 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.505160 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"db7394b1a0d63dd1b71a9c2dafe49c27f24a0ba7e76972ad14dd0e5bca8208b9"} pod="openshift-machine-config-operator/machine-config-daemon-txvcq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.505208 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" containerID="cri-o://db7394b1a0d63dd1b71a9c2dafe49c27f24a0ba7e76972ad14dd0e5bca8208b9" gracePeriod=600 Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.509279 4811 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-d4sdx\" (UniqueName: \"kubernetes.io/projected/220f6baa-23c9-4cf8-b91f-5245734fc341-kube-api-access-d4sdx\") pod \"barbican-worker-7c8cfbdccc-5kmj8\" (UID: \"220f6baa-23c9-4cf8-b91f-5245734fc341\") " pod="openstack/barbican-worker-7c8cfbdccc-5kmj8" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.509581 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/220f6baa-23c9-4cf8-b91f-5245734fc341-config-data-custom\") pod \"barbican-worker-7c8cfbdccc-5kmj8\" (UID: \"220f6baa-23c9-4cf8-b91f-5245734fc341\") " pod="openstack/barbican-worker-7c8cfbdccc-5kmj8" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.511054 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/220f6baa-23c9-4cf8-b91f-5245734fc341-config-data\") pod \"barbican-worker-7c8cfbdccc-5kmj8\" (UID: \"220f6baa-23c9-4cf8-b91f-5245734fc341\") " pod="openstack/barbican-worker-7c8cfbdccc-5kmj8" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.521893 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/220f6baa-23c9-4cf8-b91f-5245734fc341-combined-ca-bundle\") pod \"barbican-worker-7c8cfbdccc-5kmj8\" (UID: \"220f6baa-23c9-4cf8-b91f-5245734fc341\") " pod="openstack/barbican-worker-7c8cfbdccc-5kmj8" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.579319 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-76dcc8f4f4-dv85h"] Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.582696 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-76dcc8f4f4-dv85h" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.587831 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.588137 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.600914 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/312cf490-6d44-416e-8238-06667bf8efee-config-data-custom\") pod \"barbican-keystone-listener-5f966bcc4-4n44q\" (UID: \"312cf490-6d44-416e-8238-06667bf8efee\") " pod="openstack/barbican-keystone-listener-5f966bcc4-4n44q" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.601179 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/312cf490-6d44-416e-8238-06667bf8efee-logs\") pod \"barbican-keystone-listener-5f966bcc4-4n44q\" (UID: \"312cf490-6d44-416e-8238-06667bf8efee\") " pod="openstack/barbican-keystone-listener-5f966bcc4-4n44q" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.601248 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cg84\" (UniqueName: \"kubernetes.io/projected/312cf490-6d44-416e-8238-06667bf8efee-kube-api-access-5cg84\") pod \"barbican-keystone-listener-5f966bcc4-4n44q\" (UID: \"312cf490-6d44-416e-8238-06667bf8efee\") " pod="openstack/barbican-keystone-listener-5f966bcc4-4n44q" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.601426 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/312cf490-6d44-416e-8238-06667bf8efee-combined-ca-bundle\") pod \"barbican-keystone-listener-5f966bcc4-4n44q\" (UID: \"312cf490-6d44-416e-8238-06667bf8efee\") " pod="openstack/barbican-keystone-listener-5f966bcc4-4n44q" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.601514 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/312cf490-6d44-416e-8238-06667bf8efee-config-data\") pod \"barbican-keystone-listener-5f966bcc4-4n44q\" (UID: \"312cf490-6d44-416e-8238-06667bf8efee\") " pod="openstack/barbican-keystone-listener-5f966bcc4-4n44q" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.607286 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-76dcc8f4f4-dv85h"] Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.646618 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xs5vv"] Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.710302 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63cdd7a1-0295-4009-8c18-b3b3e24770b3-logs\") pod \"barbican-api-76dcc8f4f4-dv85h\" (UID: \"63cdd7a1-0295-4009-8c18-b3b3e24770b3\") " pod="openstack/barbican-api-76dcc8f4f4-dv85h" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.710343 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/312cf490-6d44-416e-8238-06667bf8efee-config-data\") pod \"barbican-keystone-listener-5f966bcc4-4n44q\" (UID: \"312cf490-6d44-416e-8238-06667bf8efee\") " pod="openstack/barbican-keystone-listener-5f966bcc4-4n44q" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.710374 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63cdd7a1-0295-4009-8c18-b3b3e24770b3-combined-ca-bundle\") pod \"barbican-api-76dcc8f4f4-dv85h\" (UID: \"63cdd7a1-0295-4009-8c18-b3b3e24770b3\") " pod="openstack/barbican-api-76dcc8f4f4-dv85h" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.710444 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/312cf490-6d44-416e-8238-06667bf8efee-config-data-custom\") pod \"barbican-keystone-listener-5f966bcc4-4n44q\" (UID: \"312cf490-6d44-416e-8238-06667bf8efee\") " pod="openstack/barbican-keystone-listener-5f966bcc4-4n44q" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.710472 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/63cdd7a1-0295-4009-8c18-b3b3e24770b3-config-data-custom\") pod \"barbican-api-76dcc8f4f4-dv85h\" (UID: \"63cdd7a1-0295-4009-8c18-b3b3e24770b3\") " pod="openstack/barbican-api-76dcc8f4f4-dv85h" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.710512 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gghk\" (UniqueName: \"kubernetes.io/projected/63cdd7a1-0295-4009-8c18-b3b3e24770b3-kube-api-access-8gghk\") pod \"barbican-api-76dcc8f4f4-dv85h\" (UID: \"63cdd7a1-0295-4009-8c18-b3b3e24770b3\") " 
pod="openstack/barbican-api-76dcc8f4f4-dv85h" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.710531 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/63cdd7a1-0295-4009-8c18-b3b3e24770b3-internal-tls-certs\") pod \"barbican-api-76dcc8f4f4-dv85h\" (UID: \"63cdd7a1-0295-4009-8c18-b3b3e24770b3\") " pod="openstack/barbican-api-76dcc8f4f4-dv85h" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.710551 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/312cf490-6d44-416e-8238-06667bf8efee-logs\") pod \"barbican-keystone-listener-5f966bcc4-4n44q\" (UID: \"312cf490-6d44-416e-8238-06667bf8efee\") " pod="openstack/barbican-keystone-listener-5f966bcc4-4n44q" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.710578 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cg84\" (UniqueName: \"kubernetes.io/projected/312cf490-6d44-416e-8238-06667bf8efee-kube-api-access-5cg84\") pod \"barbican-keystone-listener-5f966bcc4-4n44q\" (UID: \"312cf490-6d44-416e-8238-06667bf8efee\") " pod="openstack/barbican-keystone-listener-5f966bcc4-4n44q" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.710604 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63cdd7a1-0295-4009-8c18-b3b3e24770b3-config-data\") pod \"barbican-api-76dcc8f4f4-dv85h\" (UID: \"63cdd7a1-0295-4009-8c18-b3b3e24770b3\") " pod="openstack/barbican-api-76dcc8f4f4-dv85h" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.710683 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/312cf490-6d44-416e-8238-06667bf8efee-combined-ca-bundle\") pod \"barbican-keystone-listener-5f966bcc4-4n44q\" (UID: \"312cf490-6d44-416e-8238-06667bf8efee\") " pod="openstack/barbican-keystone-listener-5f966bcc4-4n44q" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.710709 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/63cdd7a1-0295-4009-8c18-b3b3e24770b3-public-tls-certs\") pod \"barbican-api-76dcc8f4f4-dv85h\" (UID: \"63cdd7a1-0295-4009-8c18-b3b3e24770b3\") " pod="openstack/barbican-api-76dcc8f4f4-dv85h" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.714216 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/312cf490-6d44-416e-8238-06667bf8efee-logs\") pod \"barbican-keystone-listener-5f966bcc4-4n44q\" (UID: \"312cf490-6d44-416e-8238-06667bf8efee\") " pod="openstack/barbican-keystone-listener-5f966bcc4-4n44q" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.719270 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-548894858c-nn2fm"] Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.720436 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/312cf490-6d44-416e-8238-06667bf8efee-combined-ca-bundle\") pod \"barbican-keystone-listener-5f966bcc4-4n44q\" (UID: \"312cf490-6d44-416e-8238-06667bf8efee\") " pod="openstack/barbican-keystone-listener-5f966bcc4-4n44q" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.720758 
4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7c8cfbdccc-5kmj8" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.721817 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/312cf490-6d44-416e-8238-06667bf8efee-config-data\") pod \"barbican-keystone-listener-5f966bcc4-4n44q\" (UID: \"312cf490-6d44-416e-8238-06667bf8efee\") " pod="openstack/barbican-keystone-listener-5f966bcc4-4n44q" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.729899 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/312cf490-6d44-416e-8238-06667bf8efee-config-data-custom\") pod \"barbican-keystone-listener-5f966bcc4-4n44q\" (UID: \"312cf490-6d44-416e-8238-06667bf8efee\") " pod="openstack/barbican-keystone-listener-5f966bcc4-4n44q" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.734736 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-548894858c-nn2fm"] Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.738688 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5c6c94454b-v7bc2"] Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.745765 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cg84\" (UniqueName: \"kubernetes.io/projected/312cf490-6d44-416e-8238-06667bf8efee-kube-api-access-5cg84\") pod \"barbican-keystone-listener-5f966bcc4-4n44q\" (UID: \"312cf490-6d44-416e-8238-06667bf8efee\") " pod="openstack/barbican-keystone-listener-5f966bcc4-4n44q" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.748266 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5f966bcc4-4n44q" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.815834 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gghk\" (UniqueName: \"kubernetes.io/projected/63cdd7a1-0295-4009-8c18-b3b3e24770b3-kube-api-access-8gghk\") pod \"barbican-api-76dcc8f4f4-dv85h\" (UID: \"63cdd7a1-0295-4009-8c18-b3b3e24770b3\") " pod="openstack/barbican-api-76dcc8f4f4-dv85h" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.815887 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/63cdd7a1-0295-4009-8c18-b3b3e24770b3-internal-tls-certs\") pod \"barbican-api-76dcc8f4f4-dv85h\" (UID: \"63cdd7a1-0295-4009-8c18-b3b3e24770b3\") " pod="openstack/barbican-api-76dcc8f4f4-dv85h" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.815982 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63cdd7a1-0295-4009-8c18-b3b3e24770b3-config-data\") pod \"barbican-api-76dcc8f4f4-dv85h\" (UID: \"63cdd7a1-0295-4009-8c18-b3b3e24770b3\") " pod="openstack/barbican-api-76dcc8f4f4-dv85h" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.816085 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/63cdd7a1-0295-4009-8c18-b3b3e24770b3-public-tls-certs\") pod \"barbican-api-76dcc8f4f4-dv85h\" (UID: \"63cdd7a1-0295-4009-8c18-b3b3e24770b3\") " pod="openstack/barbican-api-76dcc8f4f4-dv85h" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.816152 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63cdd7a1-0295-4009-8c18-b3b3e24770b3-logs\") pod \"barbican-api-76dcc8f4f4-dv85h\" (UID: \"63cdd7a1-0295-4009-8c18-b3b3e24770b3\") " pod="openstack/barbican-api-76dcc8f4f4-dv85h" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.816208 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63cdd7a1-0295-4009-8c18-b3b3e24770b3-combined-ca-bundle\") pod \"barbican-api-76dcc8f4f4-dv85h\" (UID: \"63cdd7a1-0295-4009-8c18-b3b3e24770b3\") " pod="openstack/barbican-api-76dcc8f4f4-dv85h" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.816319 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/63cdd7a1-0295-4009-8c18-b3b3e24770b3-config-data-custom\") pod \"barbican-api-76dcc8f4f4-dv85h\" (UID: \"63cdd7a1-0295-4009-8c18-b3b3e24770b3\") " pod="openstack/barbican-api-76dcc8f4f4-dv85h" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.817729 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63cdd7a1-0295-4009-8c18-b3b3e24770b3-logs\") pod \"barbican-api-76dcc8f4f4-dv85h\" (UID: \"63cdd7a1-0295-4009-8c18-b3b3e24770b3\") " pod="openstack/barbican-api-76dcc8f4f4-dv85h" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.821379 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/63cdd7a1-0295-4009-8c18-b3b3e24770b3-public-tls-certs\") pod \"barbican-api-76dcc8f4f4-dv85h\" (UID: \"63cdd7a1-0295-4009-8c18-b3b3e24770b3\") " 
pod="openstack/barbican-api-76dcc8f4f4-dv85h" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.822732 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/63cdd7a1-0295-4009-8c18-b3b3e24770b3-internal-tls-certs\") pod \"barbican-api-76dcc8f4f4-dv85h\" (UID: \"63cdd7a1-0295-4009-8c18-b3b3e24770b3\") " pod="openstack/barbican-api-76dcc8f4f4-dv85h" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.825454 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/63cdd7a1-0295-4009-8c18-b3b3e24770b3-config-data-custom\") pod \"barbican-api-76dcc8f4f4-dv85h\" (UID: \"63cdd7a1-0295-4009-8c18-b3b3e24770b3\") " pod="openstack/barbican-api-76dcc8f4f4-dv85h" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.831167 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63cdd7a1-0295-4009-8c18-b3b3e24770b3-config-data\") pod \"barbican-api-76dcc8f4f4-dv85h\" (UID: \"63cdd7a1-0295-4009-8c18-b3b3e24770b3\") " pod="openstack/barbican-api-76dcc8f4f4-dv85h" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.833296 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63cdd7a1-0295-4009-8c18-b3b3e24770b3-combined-ca-bundle\") pod \"barbican-api-76dcc8f4f4-dv85h\" (UID: \"63cdd7a1-0295-4009-8c18-b3b3e24770b3\") " pod="openstack/barbican-api-76dcc8f4f4-dv85h" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.859676 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gghk\" (UniqueName: \"kubernetes.io/projected/63cdd7a1-0295-4009-8c18-b3b3e24770b3-kube-api-access-8gghk\") pod \"barbican-api-76dcc8f4f4-dv85h\" (UID: \"63cdd7a1-0295-4009-8c18-b3b3e24770b3\") " pod="openstack/barbican-api-76dcc8f4f4-dv85h" Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.965011 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 22 09:21:35 crc kubenswrapper[4811]: I0122 09:21:35.982430 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7474d577dc-bfmkd"] Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.009289 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-76dcc8f4f4-dv85h" Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.055099 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91f1af8b-22e5-45b9-ac16-6696840485e4" path="/var/lib/kubelet/pods/91f1af8b-22e5-45b9-ac16-6696840485e4/volumes" Jan 22 09:21:36 crc kubenswrapper[4811]: W0122 09:21:36.055511 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d0235ec_7dc3_44f6_ba77_171db91d88b3.slice/crio-de11e84e025c481b6407a59c8fc6a262c3bc5d01e269314eb5798dd443b041e6 WatchSource:0}: Error finding container de11e84e025c481b6407a59c8fc6a262c3bc5d01e269314eb5798dd443b041e6: Status 404 returned error can't find the container with id de11e84e025c481b6407a59c8fc6a262c3bc5d01e269314eb5798dd443b041e6 Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.055878 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5bf9c84c75-rbgml"] Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.190312 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6976f89774-xh5fd"] Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.199879 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6976f89774-xh5fd" Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.203984 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-hbjd7" Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.204190 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.204402 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.205138 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.208851 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.234128 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/32532baf-c6cc-4f91-91f5-7f81462d369a-public-tls-certs\") pod \"placement-6976f89774-xh5fd\" (UID: \"32532baf-c6cc-4f91-91f5-7f81462d369a\") " pod="openstack/placement-6976f89774-xh5fd" Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.234181 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn4hn\" (UniqueName: \"kubernetes.io/projected/32532baf-c6cc-4f91-91f5-7f81462d369a-kube-api-access-sn4hn\") pod \"placement-6976f89774-xh5fd\" (UID: \"32532baf-c6cc-4f91-91f5-7f81462d369a\") " pod="openstack/placement-6976f89774-xh5fd" Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.234231 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32532baf-c6cc-4f91-91f5-7f81462d369a-scripts\") pod \"placement-6976f89774-xh5fd\" (UID: \"32532baf-c6cc-4f91-91f5-7f81462d369a\") " pod="openstack/placement-6976f89774-xh5fd" Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.234272 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32532baf-c6cc-4f91-91f5-7f81462d369a-logs\") pod \"placement-6976f89774-xh5fd\" (UID: \"32532baf-c6cc-4f91-91f5-7f81462d369a\") " pod="openstack/placement-6976f89774-xh5fd" Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.234293 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32532baf-c6cc-4f91-91f5-7f81462d369a-config-data\") pod \"placement-6976f89774-xh5fd\" (UID: \"32532baf-c6cc-4f91-91f5-7f81462d369a\") " pod="openstack/placement-6976f89774-xh5fd" Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.234320 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32532baf-c6cc-4f91-91f5-7f81462d369a-combined-ca-bundle\") pod \"placement-6976f89774-xh5fd\" (UID: \"32532baf-c6cc-4f91-91f5-7f81462d369a\") " pod="openstack/placement-6976f89774-xh5fd" Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.234355 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/32532baf-c6cc-4f91-91f5-7f81462d369a-internal-tls-certs\") pod \"placement-6976f89774-xh5fd\" (UID: \"32532baf-c6cc-4f91-91f5-7f81462d369a\") " pod="openstack/placement-6976f89774-xh5fd" Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.238124 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6976f89774-xh5fd"] Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.248794 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7757dc6854-tnspq"] Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.274901 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-866648ff8f-r22cc"] Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.282226 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"173400f9-c99e-4737-b27c-cff0bdb5ee94","Type":"ContainerStarted","Data":"d5579584315ce5bf7b0ba9cd5484b6e73082240e3a580577b72df6d7a77f1636"} Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.315185 4811 generic.go:334] "Generic (PLEG): container finished" podID="8e75e74d-a6bf-48c5-bedb-8fb30806d29e" containerID="f5bf066bfef2789231511e3b08f5222491585ceed2463b00c6f36d138f5fccc1" exitCode=0 Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.316231 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xs5vv" event={"ID":"8e75e74d-a6bf-48c5-bedb-8fb30806d29e","Type":"ContainerDied","Data":"f5bf066bfef2789231511e3b08f5222491585ceed2463b00c6f36d138f5fccc1"} Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.316275 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xs5vv" event={"ID":"8e75e74d-a6bf-48c5-bedb-8fb30806d29e","Type":"ContainerStarted","Data":"1e96e6bcf82ef503c5fd18323d48f4e04c27f96e302498619b40c9a80ba2d38d"} Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.341212 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32532baf-c6cc-4f91-91f5-7f81462d369a-logs\") pod \"placement-6976f89774-xh5fd\" (UID: \"32532baf-c6cc-4f91-91f5-7f81462d369a\") " 
pod="openstack/placement-6976f89774-xh5fd" Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.341274 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32532baf-c6cc-4f91-91f5-7f81462d369a-config-data\") pod \"placement-6976f89774-xh5fd\" (UID: \"32532baf-c6cc-4f91-91f5-7f81462d369a\") " pod="openstack/placement-6976f89774-xh5fd" Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.341338 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32532baf-c6cc-4f91-91f5-7f81462d369a-combined-ca-bundle\") pod \"placement-6976f89774-xh5fd\" (UID: \"32532baf-c6cc-4f91-91f5-7f81462d369a\") " pod="openstack/placement-6976f89774-xh5fd" Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.341394 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/32532baf-c6cc-4f91-91f5-7f81462d369a-internal-tls-certs\") pod \"placement-6976f89774-xh5fd\" (UID: \"32532baf-c6cc-4f91-91f5-7f81462d369a\") " pod="openstack/placement-6976f89774-xh5fd" Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.341597 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32532baf-c6cc-4f91-91f5-7f81462d369a-logs\") pod \"placement-6976f89774-xh5fd\" (UID: \"32532baf-c6cc-4f91-91f5-7f81462d369a\") " pod="openstack/placement-6976f89774-xh5fd" Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.342142 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5bf9c84c75-rbgml" event={"ID":"42068723-76f8-4a1a-8210-f0a70f10897a","Type":"ContainerStarted","Data":"6aa670ae4db2b9c9ef5c8c35090362ca8132b610e47fe99b2c4cccb78c99e924"} Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.343406 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/32532baf-c6cc-4f91-91f5-7f81462d369a-public-tls-certs\") pod \"placement-6976f89774-xh5fd\" (UID: \"32532baf-c6cc-4f91-91f5-7f81462d369a\") " pod="openstack/placement-6976f89774-xh5fd" Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.343473 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn4hn\" (UniqueName: \"kubernetes.io/projected/32532baf-c6cc-4f91-91f5-7f81462d369a-kube-api-access-sn4hn\") pod \"placement-6976f89774-xh5fd\" (UID: \"32532baf-c6cc-4f91-91f5-7f81462d369a\") " pod="openstack/placement-6976f89774-xh5fd" Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.346350 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32532baf-c6cc-4f91-91f5-7f81462d369a-combined-ca-bundle\") pod \"placement-6976f89774-xh5fd\" (UID: \"32532baf-c6cc-4f91-91f5-7f81462d369a\") " pod="openstack/placement-6976f89774-xh5fd" Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.354245 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32532baf-c6cc-4f91-91f5-7f81462d369a-scripts\") pod \"placement-6976f89774-xh5fd\" (UID: \"32532baf-c6cc-4f91-91f5-7f81462d369a\") " pod="openstack/placement-6976f89774-xh5fd" Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.357386 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-7474d577dc-bfmkd" event={"ID":"1831d8da-ec75-458b-b289-5119052f2216","Type":"ContainerStarted","Data":"076d5c248a8628587cbe1a9dff8a8e8d8262a394ba5d30aa689dc714619c46e4"} Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.363617 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn4hn\" (UniqueName: \"kubernetes.io/projected/32532baf-c6cc-4f91-91f5-7f81462d369a-kube-api-access-sn4hn\") pod \"placement-6976f89774-xh5fd\" (UID: \"32532baf-c6cc-4f91-91f5-7f81462d369a\") " pod="openstack/placement-6976f89774-xh5fd" Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.366098 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/32532baf-c6cc-4f91-91f5-7f81462d369a-public-tls-certs\") pod \"placement-6976f89774-xh5fd\" (UID: \"32532baf-c6cc-4f91-91f5-7f81462d369a\") " pod="openstack/placement-6976f89774-xh5fd" Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.366207 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/32532baf-c6cc-4f91-91f5-7f81462d369a-internal-tls-certs\") pod \"placement-6976f89774-xh5fd\" (UID: \"32532baf-c6cc-4f91-91f5-7f81462d369a\") " pod="openstack/placement-6976f89774-xh5fd" Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.378036 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32532baf-c6cc-4f91-91f5-7f81462d369a-config-data\") pod \"placement-6976f89774-xh5fd\" (UID: \"32532baf-c6cc-4f91-91f5-7f81462d369a\") " pod="openstack/placement-6976f89774-xh5fd" Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.378405 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32532baf-c6cc-4f91-91f5-7f81462d369a-scripts\") pod \"placement-6976f89774-xh5fd\" (UID: \"32532baf-c6cc-4f91-91f5-7f81462d369a\") " pod="openstack/placement-6976f89774-xh5fd" Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.402004 4811 generic.go:334] "Generic (PLEG): container finished" podID="84068a6b-e189-419b-87f5-f31428f6eafe" containerID="db7394b1a0d63dd1b71a9c2dafe49c27f24a0ba7e76972ad14dd0e5bca8208b9" exitCode=0 Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.402060 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" event={"ID":"84068a6b-e189-419b-87f5-f31428f6eafe","Type":"ContainerDied","Data":"db7394b1a0d63dd1b71a9c2dafe49c27f24a0ba7e76972ad14dd0e5bca8208b9"} Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.402086 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" event={"ID":"84068a6b-e189-419b-87f5-f31428f6eafe","Type":"ContainerStarted","Data":"48a246c7a0e2d8e856bc2e774a41e4c4a571a73e6dcfe43b9eda16ad78191748"} Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.402103 4811 scope.go:117] "RemoveContainer" containerID="73255475db73f5da91cb9bd8424c8edb822b1f0ea5ba4103c87f2ef8a2771756" Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.452815 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-744f57c8cc-f74wj"] Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.494877 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 
09:21:36.496917 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5c6c94454b-v7bc2" event={"ID":"1bf59625-a642-4155-9e83-46cd7d874f50","Type":"ContainerStarted","Data":"3e0d41bb2c4f4956b8e2a25ffb6d7de5a7cd8eba4e9c0d2515459afec0616707"} Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.517108 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9d0235ec-7dc3-44f6-ba77-171db91d88b3","Type":"ContainerStarted","Data":"de11e84e025c481b6407a59c8fc6a262c3bc5d01e269314eb5798dd443b041e6"} Jan 22 09:21:36 crc kubenswrapper[4811]: W0122 09:21:36.540012 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80f3e64e_0e9f_49b8_9c7e_78e08634ba92.slice/crio-23dedf52d26f84cc7cb8cd631e680d17293d66219c79afbc932b424388e23914 WatchSource:0}: Error finding container 23dedf52d26f84cc7cb8cd631e680d17293d66219c79afbc932b424388e23914: Status 404 returned error can't find the container with id 23dedf52d26f84cc7cb8cd631e680d17293d66219c79afbc932b424388e23914 Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.570979 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6976f89774-xh5fd" Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.645933 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5f966bcc4-4n44q"] Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.683537 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7c8cfbdccc-5kmj8"] Jan 22 09:21:36 crc kubenswrapper[4811]: I0122 09:21:36.908619 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-76dcc8f4f4-dv85h"] Jan 22 09:21:37 crc kubenswrapper[4811]: I0122 09:21:37.213351 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6976f89774-xh5fd"] Jan 22 09:21:37 crc kubenswrapper[4811]: W0122 09:21:37.243795 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32532baf_c6cc_4f91_91f5_7f81462d369a.slice/crio-54065c5e1afc3ca0e6ff8feab34ac40084f48a70fee2d0cdf6e9e2ec65a679f2 WatchSource:0}: Error finding container 54065c5e1afc3ca0e6ff8feab34ac40084f48a70fee2d0cdf6e9e2ec65a679f2: Status 404 returned error can't find the container with id 54065c5e1afc3ca0e6ff8feab34ac40084f48a70fee2d0cdf6e9e2ec65a679f2 Jan 22 09:21:37 crc kubenswrapper[4811]: I0122 09:21:37.534605 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9d0235ec-7dc3-44f6-ba77-171db91d88b3","Type":"ContainerStarted","Data":"9dce536a3e8b57c5a1cad8c8feb7c3df19e73b7dd34f32d4775b93afeafa6fb0"} Jan 22 09:21:37 crc kubenswrapper[4811]: I0122 09:21:37.540868 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7757dc6854-tnspq" event={"ID":"49ce92f5-a20a-4429-89a1-764b3db3e28a","Type":"ContainerStarted","Data":"db6ab904511cf6e9244e2d0d01cf6c92b3afe6a4ed5ca4f1d4b6334f256a171b"} Jan 22 09:21:37 crc kubenswrapper[4811]: I0122 09:21:37.540939 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7757dc6854-tnspq" event={"ID":"49ce92f5-a20a-4429-89a1-764b3db3e28a","Type":"ContainerStarted","Data":"7b5b159f764e828b353e7ad7a1a67bcbaaebf098aaa6bc41d5033425dd503506"} Jan 22 09:21:37 crc kubenswrapper[4811]: I0122 09:21:37.540969 4811 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/barbican-api-7757dc6854-tnspq" Jan 22 09:21:37 crc kubenswrapper[4811]: I0122 09:21:37.541005 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7757dc6854-tnspq" Jan 22 09:21:37 crc kubenswrapper[4811]: I0122 09:21:37.541014 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7757dc6854-tnspq" event={"ID":"49ce92f5-a20a-4429-89a1-764b3db3e28a","Type":"ContainerStarted","Data":"0a4c47cc71b948ac84a7c48976de701c8c045320f78dd59b5f086e262efe154d"} Jan 22 09:21:37 crc kubenswrapper[4811]: I0122 09:21:37.541861 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"80f3e64e-0e9f-49b8-9c7e-78e08634ba92","Type":"ContainerStarted","Data":"23dedf52d26f84cc7cb8cd631e680d17293d66219c79afbc932b424388e23914"} Jan 22 09:21:37 crc kubenswrapper[4811]: I0122 09:21:37.547100 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-866648ff8f-r22cc" event={"ID":"01a5613f-ca39-400f-83e5-8c2e04474ce3","Type":"ContainerStarted","Data":"30a41faaeb381e1073d802aabc45d1059f9a87d0e53e785dec1433ed9b7527a0"} Jan 22 09:21:37 crc kubenswrapper[4811]: I0122 09:21:37.550662 4811 generic.go:334] "Generic (PLEG): container finished" podID="e3f473da-b7d9-45c0-9380-199a07972c9c" containerID="40ae714240c01443068b2b2ad3bc90ee782351243b020456daedd9d5f3ca1ec4" exitCode=0 Jan 22 09:21:37 crc kubenswrapper[4811]: I0122 09:21:37.550695 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744f57c8cc-f74wj" event={"ID":"e3f473da-b7d9-45c0-9380-199a07972c9c","Type":"ContainerDied","Data":"40ae714240c01443068b2b2ad3bc90ee782351243b020456daedd9d5f3ca1ec4"} Jan 22 09:21:37 crc kubenswrapper[4811]: I0122 09:21:37.550734 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744f57c8cc-f74wj" event={"ID":"e3f473da-b7d9-45c0-9380-199a07972c9c","Type":"ContainerStarted","Data":"22454250d6b69dde6a020a4e5ec4210e2ea3db82c3bccbc07b67369b782dbce8"} Jan 22 09:21:37 crc kubenswrapper[4811]: I0122 09:21:37.552795 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7c8cfbdccc-5kmj8" event={"ID":"220f6baa-23c9-4cf8-b91f-5245734fc341","Type":"ContainerStarted","Data":"99509cfcbb25394f2440a4f0032bf3c47beab8f4a5165a15a6b02a1836003a0c"} Jan 22 09:21:37 crc kubenswrapper[4811]: I0122 09:21:37.568334 4811 generic.go:334] "Generic (PLEG): container finished" podID="1831d8da-ec75-458b-b289-5119052f2216" containerID="b3ac66eef8f8810a19ec0a0c5e9a3e7d603561b5b0feefdbe0d94e51bc8ae371" exitCode=0 Jan 22 09:21:37 crc kubenswrapper[4811]: I0122 09:21:37.568479 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7474d577dc-bfmkd" event={"ID":"1831d8da-ec75-458b-b289-5119052f2216","Type":"ContainerDied","Data":"b3ac66eef8f8810a19ec0a0c5e9a3e7d603561b5b0feefdbe0d94e51bc8ae371"} Jan 22 09:21:37 crc kubenswrapper[4811]: I0122 09:21:37.574142 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7757dc6854-tnspq" podStartSLOduration=9.5741192 podStartE2EDuration="9.5741192s" podCreationTimestamp="2026-01-22 09:21:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:21:37.566219435 +0000 UTC m=+941.888406559" watchObservedRunningTime="2026-01-22 09:21:37.5741192 +0000 UTC m=+941.896306324" Jan 22 09:21:37 crc 
kubenswrapper[4811]: I0122 09:21:37.576529 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76dcc8f4f4-dv85h" event={"ID":"63cdd7a1-0295-4009-8c18-b3b3e24770b3","Type":"ContainerStarted","Data":"ecfd7e26a6ca600f4d263b52ac44c9460e1a15bc43ac66a281fe3363b3ee957e"} Jan 22 09:21:37 crc kubenswrapper[4811]: I0122 09:21:37.576567 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76dcc8f4f4-dv85h" event={"ID":"63cdd7a1-0295-4009-8c18-b3b3e24770b3","Type":"ContainerStarted","Data":"f89a5c38b8db069a49b2bf2a0a98fa8d5e2ecef639eefbd9d0c892dbf728c2e9"} Jan 22 09:21:37 crc kubenswrapper[4811]: I0122 09:21:37.588691 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6976f89774-xh5fd" event={"ID":"32532baf-c6cc-4f91-91f5-7f81462d369a","Type":"ContainerStarted","Data":"54065c5e1afc3ca0e6ff8feab34ac40084f48a70fee2d0cdf6e9e2ec65a679f2"} Jan 22 09:21:37 crc kubenswrapper[4811]: I0122 09:21:37.598562 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xs5vv" event={"ID":"8e75e74d-a6bf-48c5-bedb-8fb30806d29e","Type":"ContainerStarted","Data":"2943a71f7a5f384c12e44fcd7922d176ba525ee5172ce3baf9607b87215d480a"} Jan 22 09:21:37 crc kubenswrapper[4811]: I0122 09:21:37.602570 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5bf9c84c75-rbgml" event={"ID":"42068723-76f8-4a1a-8210-f0a70f10897a","Type":"ContainerStarted","Data":"ceb6e1893a90377829feb964413454d3794ae3e96cae8e466b8fb7c850d21200"} Jan 22 09:21:37 crc kubenswrapper[4811]: I0122 09:21:37.603218 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5bf9c84c75-rbgml" Jan 22 09:21:37 crc kubenswrapper[4811]: I0122 09:21:37.615432 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f966bcc4-4n44q" event={"ID":"312cf490-6d44-416e-8238-06667bf8efee","Type":"ContainerStarted","Data":"cad90d3868edf072e01196306222c605777df7aa894f0c0d55f3812df081afad"} Jan 22 09:21:37 crc kubenswrapper[4811]: I0122 09:21:37.742140 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5bf9c84c75-rbgml" podStartSLOduration=6.742119755 podStartE2EDuration="6.742119755s" podCreationTimestamp="2026-01-22 09:21:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:21:37.661194811 +0000 UTC m=+941.983381934" watchObservedRunningTime="2026-01-22 09:21:37.742119755 +0000 UTC m=+942.064306878" Jan 22 09:21:37 crc kubenswrapper[4811]: I0122 09:21:37.970727 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-548894858c-nn2fm" podUID="91f1af8b-22e5-45b9-ac16-6696840485e4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.136:5353: i/o timeout" Jan 22 09:21:38 crc kubenswrapper[4811]: I0122 09:21:38.129730 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-744f57c8cc-f74wj" Jan 22 09:21:38 crc kubenswrapper[4811]: I0122 09:21:38.226196 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3f473da-b7d9-45c0-9380-199a07972c9c-config\") pod \"e3f473da-b7d9-45c0-9380-199a07972c9c\" (UID: \"e3f473da-b7d9-45c0-9380-199a07972c9c\") " Jan 22 09:21:38 crc kubenswrapper[4811]: I0122 09:21:38.226255 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e3f473da-b7d9-45c0-9380-199a07972c9c-ovsdbserver-nb\") pod \"e3f473da-b7d9-45c0-9380-199a07972c9c\" (UID: \"e3f473da-b7d9-45c0-9380-199a07972c9c\") " Jan 22 09:21:38 crc kubenswrapper[4811]: I0122 09:21:38.226277 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e3f473da-b7d9-45c0-9380-199a07972c9c-dns-svc\") pod \"e3f473da-b7d9-45c0-9380-199a07972c9c\" (UID: \"e3f473da-b7d9-45c0-9380-199a07972c9c\") " Jan 22 09:21:38 crc kubenswrapper[4811]: I0122 09:21:38.226525 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxgpr\" (UniqueName: \"kubernetes.io/projected/e3f473da-b7d9-45c0-9380-199a07972c9c-kube-api-access-cxgpr\") pod \"e3f473da-b7d9-45c0-9380-199a07972c9c\" (UID: \"e3f473da-b7d9-45c0-9380-199a07972c9c\") " Jan 22 09:21:38 crc kubenswrapper[4811]: I0122 09:21:38.226552 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e3f473da-b7d9-45c0-9380-199a07972c9c-ovsdbserver-sb\") pod \"e3f473da-b7d9-45c0-9380-199a07972c9c\" (UID: \"e3f473da-b7d9-45c0-9380-199a07972c9c\") " Jan 22 09:21:38 crc kubenswrapper[4811]: I0122 09:21:38.236745 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3f473da-b7d9-45c0-9380-199a07972c9c-kube-api-access-cxgpr" (OuterVolumeSpecName: "kube-api-access-cxgpr") pod "e3f473da-b7d9-45c0-9380-199a07972c9c" (UID: "e3f473da-b7d9-45c0-9380-199a07972c9c"). InnerVolumeSpecName "kube-api-access-cxgpr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:21:38 crc kubenswrapper[4811]: I0122 09:21:38.246857 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3f473da-b7d9-45c0-9380-199a07972c9c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e3f473da-b7d9-45c0-9380-199a07972c9c" (UID: "e3f473da-b7d9-45c0-9380-199a07972c9c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:21:38 crc kubenswrapper[4811]: I0122 09:21:38.255051 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3f473da-b7d9-45c0-9380-199a07972c9c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e3f473da-b7d9-45c0-9380-199a07972c9c" (UID: "e3f473da-b7d9-45c0-9380-199a07972c9c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:21:38 crc kubenswrapper[4811]: I0122 09:21:38.257936 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3f473da-b7d9-45c0-9380-199a07972c9c-config" (OuterVolumeSpecName: "config") pod "e3f473da-b7d9-45c0-9380-199a07972c9c" (UID: "e3f473da-b7d9-45c0-9380-199a07972c9c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:21:38 crc kubenswrapper[4811]: I0122 09:21:38.274424 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3f473da-b7d9-45c0-9380-199a07972c9c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e3f473da-b7d9-45c0-9380-199a07972c9c" (UID: "e3f473da-b7d9-45c0-9380-199a07972c9c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:21:38 crc kubenswrapper[4811]: I0122 09:21:38.331543 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxgpr\" (UniqueName: \"kubernetes.io/projected/e3f473da-b7d9-45c0-9380-199a07972c9c-kube-api-access-cxgpr\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:38 crc kubenswrapper[4811]: I0122 09:21:38.331574 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e3f473da-b7d9-45c0-9380-199a07972c9c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:38 crc kubenswrapper[4811]: I0122 09:21:38.331584 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3f473da-b7d9-45c0-9380-199a07972c9c-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:38 crc kubenswrapper[4811]: I0122 09:21:38.331593 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e3f473da-b7d9-45c0-9380-199a07972c9c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:38 crc kubenswrapper[4811]: I0122 09:21:38.331603 4811 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e3f473da-b7d9-45c0-9380-199a07972c9c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:38 crc kubenswrapper[4811]: I0122 09:21:38.734327 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7474d577dc-bfmkd" event={"ID":"1831d8da-ec75-458b-b289-5119052f2216","Type":"ContainerStarted","Data":"f0ee131d5ea82416c65eff726e2aa48d9cf15c6bf1e82292111b464aedb62aa2"} Jan 22 09:21:38 crc kubenswrapper[4811]: I0122 09:21:38.734814 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7474d577dc-bfmkd" Jan 22 09:21:38 crc kubenswrapper[4811]: I0122 09:21:38.744434 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76dcc8f4f4-dv85h" event={"ID":"63cdd7a1-0295-4009-8c18-b3b3e24770b3","Type":"ContainerStarted","Data":"b2c491b76a3033ddf83db2ea48f78113e3e9a50572f7a9128f20f27e7823f773"} Jan 22 09:21:38 crc kubenswrapper[4811]: I0122 09:21:38.744756 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-76dcc8f4f4-dv85h" Jan 22 09:21:38 crc kubenswrapper[4811]: I0122 09:21:38.744833 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-76dcc8f4f4-dv85h" Jan 22 09:21:38 crc kubenswrapper[4811]: I0122 09:21:38.756659 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9d0235ec-7dc3-44f6-ba77-171db91d88b3","Type":"ContainerStarted","Data":"ded6a7b46f0d44e1998bfd29b7b1a1c926316cc3b854471298f7bb91dc5ace85"} Jan 22 09:21:38 crc kubenswrapper[4811]: I0122 09:21:38.756798 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="9d0235ec-7dc3-44f6-ba77-171db91d88b3" containerName="cinder-api-log" 
containerID="cri-o://9dce536a3e8b57c5a1cad8c8feb7c3df19e73b7dd34f32d4775b93afeafa6fb0" gracePeriod=30 Jan 22 09:21:38 crc kubenswrapper[4811]: I0122 09:21:38.756873 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="9d0235ec-7dc3-44f6-ba77-171db91d88b3" containerName="cinder-api" containerID="cri-o://ded6a7b46f0d44e1998bfd29b7b1a1c926316cc3b854471298f7bb91dc5ace85" gracePeriod=30 Jan 22 09:21:38 crc kubenswrapper[4811]: I0122 09:21:38.756977 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 22 09:21:38 crc kubenswrapper[4811]: I0122 09:21:38.763821 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7474d577dc-bfmkd" podStartSLOduration=8.76380522 podStartE2EDuration="8.76380522s" podCreationTimestamp="2026-01-22 09:21:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:21:38.75694822 +0000 UTC m=+943.079135343" watchObservedRunningTime="2026-01-22 09:21:38.76380522 +0000 UTC m=+943.085992343" Jan 22 09:21:38 crc kubenswrapper[4811]: I0122 09:21:38.778992 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6976f89774-xh5fd" event={"ID":"32532baf-c6cc-4f91-91f5-7f81462d369a","Type":"ContainerStarted","Data":"969ce4ddc87b7a8821d04f9aa11f9064914797c61fbfb3a8c32f678552e0a6ef"} Jan 22 09:21:38 crc kubenswrapper[4811]: I0122 09:21:38.779056 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6976f89774-xh5fd" Jan 22 09:21:38 crc kubenswrapper[4811]: I0122 09:21:38.779067 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6976f89774-xh5fd" event={"ID":"32532baf-c6cc-4f91-91f5-7f81462d369a","Type":"ContainerStarted","Data":"3c3bf369b771baa3a7108b908418629733ebdc4993f24e983ceaa3a0471aa9e0"} Jan 22 09:21:38 crc kubenswrapper[4811]: I0122 09:21:38.779088 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6976f89774-xh5fd" Jan 22 09:21:38 crc kubenswrapper[4811]: I0122 09:21:38.781192 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=8.781171894 podStartE2EDuration="8.781171894s" podCreationTimestamp="2026-01-22 09:21:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:21:38.77389285 +0000 UTC m=+943.096079973" watchObservedRunningTime="2026-01-22 09:21:38.781171894 +0000 UTC m=+943.103359018" Jan 22 09:21:38 crc kubenswrapper[4811]: I0122 09:21:38.784594 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744f57c8cc-f74wj" event={"ID":"e3f473da-b7d9-45c0-9380-199a07972c9c","Type":"ContainerDied","Data":"22454250d6b69dde6a020a4e5ec4210e2ea3db82c3bccbc07b67369b782dbce8"} Jan 22 09:21:38 crc kubenswrapper[4811]: I0122 09:21:38.784659 4811 scope.go:117] "RemoveContainer" containerID="40ae714240c01443068b2b2ad3bc90ee782351243b020456daedd9d5f3ca1ec4" Jan 22 09:21:38 crc kubenswrapper[4811]: I0122 09:21:38.784853 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-744f57c8cc-f74wj" Jan 22 09:21:38 crc kubenswrapper[4811]: I0122 09:21:38.793749 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-76dcc8f4f4-dv85h" podStartSLOduration=3.793730805 podStartE2EDuration="3.793730805s" podCreationTimestamp="2026-01-22 09:21:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:21:38.790878466 +0000 UTC m=+943.113065589" watchObservedRunningTime="2026-01-22 09:21:38.793730805 +0000 UTC m=+943.115917928" Jan 22 09:21:38 crc kubenswrapper[4811]: I0122 09:21:38.825142 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6976f89774-xh5fd" podStartSLOduration=2.8251122239999997 podStartE2EDuration="2.825112224s" podCreationTimestamp="2026-01-22 09:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:21:38.821879839 +0000 UTC m=+943.144066962" watchObservedRunningTime="2026-01-22 09:21:38.825112224 +0000 UTC m=+943.147299347" Jan 22 09:21:38 crc kubenswrapper[4811]: I0122 09:21:38.909308 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-744f57c8cc-f74wj"] Jan 22 09:21:38 crc kubenswrapper[4811]: I0122 09:21:38.938214 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-744f57c8cc-f74wj"] Jan 22 09:21:39 crc kubenswrapper[4811]: I0122 09:21:39.798788 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"80f3e64e-0e9f-49b8-9c7e-78e08634ba92","Type":"ContainerStarted","Data":"074abccf97b616cd189f81d3e407c934a9e30ad015215dd231c297a4cd13f288"} Jan 22 09:21:39 crc kubenswrapper[4811]: I0122 09:21:39.808067 4811 generic.go:334] "Generic (PLEG): container finished" podID="8e75e74d-a6bf-48c5-bedb-8fb30806d29e" containerID="2943a71f7a5f384c12e44fcd7922d176ba525ee5172ce3baf9607b87215d480a" exitCode=0 Jan 22 09:21:39 crc kubenswrapper[4811]: I0122 09:21:39.808135 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xs5vv" event={"ID":"8e75e74d-a6bf-48c5-bedb-8fb30806d29e","Type":"ContainerDied","Data":"2943a71f7a5f384c12e44fcd7922d176ba525ee5172ce3baf9607b87215d480a"} Jan 22 09:21:39 crc kubenswrapper[4811]: I0122 09:21:39.842325 4811 generic.go:334] "Generic (PLEG): container finished" podID="9d0235ec-7dc3-44f6-ba77-171db91d88b3" containerID="ded6a7b46f0d44e1998bfd29b7b1a1c926316cc3b854471298f7bb91dc5ace85" exitCode=0 Jan 22 09:21:39 crc kubenswrapper[4811]: I0122 09:21:39.842359 4811 generic.go:334] "Generic (PLEG): container finished" podID="9d0235ec-7dc3-44f6-ba77-171db91d88b3" containerID="9dce536a3e8b57c5a1cad8c8feb7c3df19e73b7dd34f32d4775b93afeafa6fb0" exitCode=143 Jan 22 09:21:39 crc kubenswrapper[4811]: I0122 09:21:39.843295 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9d0235ec-7dc3-44f6-ba77-171db91d88b3","Type":"ContainerDied","Data":"ded6a7b46f0d44e1998bfd29b7b1a1c926316cc3b854471298f7bb91dc5ace85"} Jan 22 09:21:39 crc kubenswrapper[4811]: I0122 09:21:39.843328 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9d0235ec-7dc3-44f6-ba77-171db91d88b3","Type":"ContainerDied","Data":"9dce536a3e8b57c5a1cad8c8feb7c3df19e73b7dd34f32d4775b93afeafa6fb0"} Jan 22 09:21:40 crc 
kubenswrapper[4811]: I0122 09:21:40.003083 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3f473da-b7d9-45c0-9380-199a07972c9c" path="/var/lib/kubelet/pods/e3f473da-b7d9-45c0-9380-199a07972c9c/volumes" Jan 22 09:21:40 crc kubenswrapper[4811]: I0122 09:21:40.112488 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 22 09:21:40 crc kubenswrapper[4811]: I0122 09:21:40.181603 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d0235ec-7dc3-44f6-ba77-171db91d88b3-logs\") pod \"9d0235ec-7dc3-44f6-ba77-171db91d88b3\" (UID: \"9d0235ec-7dc3-44f6-ba77-171db91d88b3\") " Jan 22 09:21:40 crc kubenswrapper[4811]: I0122 09:21:40.181695 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d0235ec-7dc3-44f6-ba77-171db91d88b3-scripts\") pod \"9d0235ec-7dc3-44f6-ba77-171db91d88b3\" (UID: \"9d0235ec-7dc3-44f6-ba77-171db91d88b3\") " Jan 22 09:21:40 crc kubenswrapper[4811]: I0122 09:21:40.181717 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9d0235ec-7dc3-44f6-ba77-171db91d88b3-config-data-custom\") pod \"9d0235ec-7dc3-44f6-ba77-171db91d88b3\" (UID: \"9d0235ec-7dc3-44f6-ba77-171db91d88b3\") " Jan 22 09:21:40 crc kubenswrapper[4811]: I0122 09:21:40.181785 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d0235ec-7dc3-44f6-ba77-171db91d88b3-etc-machine-id\") pod \"9d0235ec-7dc3-44f6-ba77-171db91d88b3\" (UID: \"9d0235ec-7dc3-44f6-ba77-171db91d88b3\") " Jan 22 09:21:40 crc kubenswrapper[4811]: I0122 09:21:40.181985 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46ls5\" (UniqueName: \"kubernetes.io/projected/9d0235ec-7dc3-44f6-ba77-171db91d88b3-kube-api-access-46ls5\") pod \"9d0235ec-7dc3-44f6-ba77-171db91d88b3\" (UID: \"9d0235ec-7dc3-44f6-ba77-171db91d88b3\") " Jan 22 09:21:40 crc kubenswrapper[4811]: I0122 09:21:40.182051 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d0235ec-7dc3-44f6-ba77-171db91d88b3-combined-ca-bundle\") pod \"9d0235ec-7dc3-44f6-ba77-171db91d88b3\" (UID: \"9d0235ec-7dc3-44f6-ba77-171db91d88b3\") " Jan 22 09:21:40 crc kubenswrapper[4811]: I0122 09:21:40.182080 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d0235ec-7dc3-44f6-ba77-171db91d88b3-config-data\") pod \"9d0235ec-7dc3-44f6-ba77-171db91d88b3\" (UID: \"9d0235ec-7dc3-44f6-ba77-171db91d88b3\") " Jan 22 09:21:40 crc kubenswrapper[4811]: I0122 09:21:40.182087 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d0235ec-7dc3-44f6-ba77-171db91d88b3-logs" (OuterVolumeSpecName: "logs") pod "9d0235ec-7dc3-44f6-ba77-171db91d88b3" (UID: "9d0235ec-7dc3-44f6-ba77-171db91d88b3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:21:40 crc kubenswrapper[4811]: I0122 09:21:40.182135 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d0235ec-7dc3-44f6-ba77-171db91d88b3-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9d0235ec-7dc3-44f6-ba77-171db91d88b3" (UID: "9d0235ec-7dc3-44f6-ba77-171db91d88b3"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:21:40 crc kubenswrapper[4811]: I0122 09:21:40.182848 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d0235ec-7dc3-44f6-ba77-171db91d88b3-logs\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:40 crc kubenswrapper[4811]: I0122 09:21:40.182865 4811 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d0235ec-7dc3-44f6-ba77-171db91d88b3-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:40 crc kubenswrapper[4811]: I0122 09:21:40.188691 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d0235ec-7dc3-44f6-ba77-171db91d88b3-scripts" (OuterVolumeSpecName: "scripts") pod "9d0235ec-7dc3-44f6-ba77-171db91d88b3" (UID: "9d0235ec-7dc3-44f6-ba77-171db91d88b3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:40 crc kubenswrapper[4811]: I0122 09:21:40.189995 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d0235ec-7dc3-44f6-ba77-171db91d88b3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9d0235ec-7dc3-44f6-ba77-171db91d88b3" (UID: "9d0235ec-7dc3-44f6-ba77-171db91d88b3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:40 crc kubenswrapper[4811]: I0122 09:21:40.191261 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d0235ec-7dc3-44f6-ba77-171db91d88b3-kube-api-access-46ls5" (OuterVolumeSpecName: "kube-api-access-46ls5") pod "9d0235ec-7dc3-44f6-ba77-171db91d88b3" (UID: "9d0235ec-7dc3-44f6-ba77-171db91d88b3"). InnerVolumeSpecName "kube-api-access-46ls5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:21:40 crc kubenswrapper[4811]: I0122 09:21:40.249708 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d0235ec-7dc3-44f6-ba77-171db91d88b3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9d0235ec-7dc3-44f6-ba77-171db91d88b3" (UID: "9d0235ec-7dc3-44f6-ba77-171db91d88b3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:40 crc kubenswrapper[4811]: I0122 09:21:40.273550 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d0235ec-7dc3-44f6-ba77-171db91d88b3-config-data" (OuterVolumeSpecName: "config-data") pod "9d0235ec-7dc3-44f6-ba77-171db91d88b3" (UID: "9d0235ec-7dc3-44f6-ba77-171db91d88b3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:40 crc kubenswrapper[4811]: I0122 09:21:40.284366 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d0235ec-7dc3-44f6-ba77-171db91d88b3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:40 crc kubenswrapper[4811]: I0122 09:21:40.284394 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d0235ec-7dc3-44f6-ba77-171db91d88b3-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:40 crc kubenswrapper[4811]: I0122 09:21:40.284406 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d0235ec-7dc3-44f6-ba77-171db91d88b3-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:40 crc kubenswrapper[4811]: I0122 09:21:40.284415 4811 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9d0235ec-7dc3-44f6-ba77-171db91d88b3-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:40 crc kubenswrapper[4811]: I0122 09:21:40.284424 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46ls5\" (UniqueName: \"kubernetes.io/projected/9d0235ec-7dc3-44f6-ba77-171db91d88b3-kube-api-access-46ls5\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:40 crc kubenswrapper[4811]: I0122 09:21:40.852754 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5c6c94454b-v7bc2" event={"ID":"1bf59625-a642-4155-9e83-46cd7d874f50","Type":"ContainerStarted","Data":"8ae67b7358dac12b27ea1a3a2b1e227264356434cd73a449f3c836fe68c276b0"} Jan 22 09:21:40 crc kubenswrapper[4811]: I0122 09:21:40.853338 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5c6c94454b-v7bc2" event={"ID":"1bf59625-a642-4155-9e83-46cd7d874f50","Type":"ContainerStarted","Data":"70687370876fb9c06d492e64f3cbfe43cd971421d7c0d968ae8ae83ad637ec23"} Jan 22 09:21:40 crc kubenswrapper[4811]: I0122 09:21:40.879377 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9d0235ec-7dc3-44f6-ba77-171db91d88b3","Type":"ContainerDied","Data":"de11e84e025c481b6407a59c8fc6a262c3bc5d01e269314eb5798dd443b041e6"} Jan 22 09:21:40 crc kubenswrapper[4811]: I0122 09:21:40.879432 4811 scope.go:117] "RemoveContainer" containerID="ded6a7b46f0d44e1998bfd29b7b1a1c926316cc3b854471298f7bb91dc5ace85" Jan 22 09:21:40 crc kubenswrapper[4811]: I0122 09:21:40.879541 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 22 09:21:40 crc kubenswrapper[4811]: I0122 09:21:40.917827 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"80f3e64e-0e9f-49b8-9c7e-78e08634ba92","Type":"ContainerStarted","Data":"33c2413d57b56626dabcb11ae8b5afefb9da4a8234a3b363b7b9fd22d0c9714d"} Jan 22 09:21:40 crc kubenswrapper[4811]: I0122 09:21:40.940859 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5c6c94454b-v7bc2" podStartSLOduration=9.047315845 podStartE2EDuration="12.94084464s" podCreationTimestamp="2026-01-22 09:21:28 +0000 UTC" firstStartedPulling="2026-01-22 09:21:35.81360111 +0000 UTC m=+940.135788233" lastFinishedPulling="2026-01-22 09:21:39.707129905 +0000 UTC m=+944.029317028" observedRunningTime="2026-01-22 09:21:40.878676943 +0000 UTC m=+945.200864066" watchObservedRunningTime="2026-01-22 09:21:40.94084464 +0000 UTC m=+945.263031763" Jan 22 09:21:40 crc kubenswrapper[4811]: I0122 09:21:40.942295 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-866648ff8f-r22cc" event={"ID":"01a5613f-ca39-400f-83e5-8c2e04474ce3","Type":"ContainerStarted","Data":"9d723fb40e3dabcf16caf21044d982948bb4251c0307bc7d8d78b6d2b7c68a78"} Jan 22 09:21:40 crc kubenswrapper[4811]: I0122 09:21:40.942332 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-866648ff8f-r22cc" event={"ID":"01a5613f-ca39-400f-83e5-8c2e04474ce3","Type":"ContainerStarted","Data":"174c9ca94d9320b70b1f3c249d7f442607a11ee44400d4b212a9e12cfd61be75"} Jan 22 09:21:40 crc kubenswrapper[4811]: I0122 09:21:40.944044 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 22 09:21:40 crc kubenswrapper[4811]: I0122 09:21:40.970181 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 22 09:21:40 crc kubenswrapper[4811]: I0122 09:21:40.970883 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xs5vv" event={"ID":"8e75e74d-a6bf-48c5-bedb-8fb30806d29e","Type":"ContainerStarted","Data":"aee4f296e96eff80133eac3ffcfc74b6ced9111cd32179b5e35f0ccbbba56382"} Jan 22 09:21:40 crc kubenswrapper[4811]: I0122 09:21:40.986333 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 22 09:21:40 crc kubenswrapper[4811]: E0122 09:21:40.986704 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d0235ec-7dc3-44f6-ba77-171db91d88b3" containerName="cinder-api" Jan 22 09:21:40 crc kubenswrapper[4811]: I0122 09:21:40.986733 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d0235ec-7dc3-44f6-ba77-171db91d88b3" containerName="cinder-api" Jan 22 09:21:40 crc kubenswrapper[4811]: E0122 09:21:40.986766 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3f473da-b7d9-45c0-9380-199a07972c9c" containerName="init" Jan 22 09:21:40 crc kubenswrapper[4811]: I0122 09:21:40.986776 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3f473da-b7d9-45c0-9380-199a07972c9c" containerName="init" Jan 22 09:21:40 crc kubenswrapper[4811]: E0122 09:21:40.986788 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d0235ec-7dc3-44f6-ba77-171db91d88b3" containerName="cinder-api-log" Jan 22 09:21:40 crc kubenswrapper[4811]: I0122 09:21:40.986794 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d0235ec-7dc3-44f6-ba77-171db91d88b3" containerName="cinder-api-log" Jan 
22 09:21:40 crc kubenswrapper[4811]: I0122 09:21:40.986955 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d0235ec-7dc3-44f6-ba77-171db91d88b3" containerName="cinder-api-log" Jan 22 09:21:40 crc kubenswrapper[4811]: I0122 09:21:40.986984 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d0235ec-7dc3-44f6-ba77-171db91d88b3" containerName="cinder-api" Jan 22 09:21:40 crc kubenswrapper[4811]: I0122 09:21:40.986996 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3f473da-b7d9-45c0-9380-199a07972c9c" containerName="init" Jan 22 09:21:40 crc kubenswrapper[4811]: I0122 09:21:40.987745 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7c8cfbdccc-5kmj8" event={"ID":"220f6baa-23c9-4cf8-b91f-5245734fc341","Type":"ContainerStarted","Data":"72146178f7c62cd91739e6fc5da9b401a893855eedbae80e1150dcc642218a6e"} Jan 22 09:21:40 crc kubenswrapper[4811]: I0122 09:21:40.987773 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7c8cfbdccc-5kmj8" event={"ID":"220f6baa-23c9-4cf8-b91f-5245734fc341","Type":"ContainerStarted","Data":"bd27abbbfb399a42b7dccae84924c2956eeefafef0d8c479f4dffb0ff5fd3401"} Jan 22 09:21:40 crc kubenswrapper[4811]: I0122 09:21:40.987852 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 22 09:21:40 crc kubenswrapper[4811]: I0122 09:21:40.993217 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 22 09:21:40 crc kubenswrapper[4811]: I0122 09:21:40.995341 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 22 09:21:40 crc kubenswrapper[4811]: I0122 09:21:40.995526 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 22 09:21:41 crc kubenswrapper[4811]: I0122 09:21:40.997481 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=9.491664386 podStartE2EDuration="10.997466029s" podCreationTimestamp="2026-01-22 09:21:30 +0000 UTC" firstStartedPulling="2026-01-22 09:21:36.545330012 +0000 UTC m=+940.867517136" lastFinishedPulling="2026-01-22 09:21:38.051131656 +0000 UTC m=+942.373318779" observedRunningTime="2026-01-22 09:21:40.960845702 +0000 UTC m=+945.283032825" watchObservedRunningTime="2026-01-22 09:21:40.997466029 +0000 UTC m=+945.319653152" Jan 22 09:21:41 crc kubenswrapper[4811]: I0122 09:21:41.023445 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f966bcc4-4n44q" event={"ID":"312cf490-6d44-416e-8238-06667bf8efee","Type":"ContainerStarted","Data":"d33a81d6feab9fcdad6c6bfbc920217ed0b31696f81f543c691230663a909534"} Jan 22 09:21:41 crc kubenswrapper[4811]: I0122 09:21:41.023476 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f966bcc4-4n44q" event={"ID":"312cf490-6d44-416e-8238-06667bf8efee","Type":"ContainerStarted","Data":"698172da2182050c40cf3f03b6b25f28cbebe1eba90c924047fed1d899812c9b"} Jan 22 09:21:41 crc kubenswrapper[4811]: I0122 09:21:41.034804 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 22 09:21:41 crc kubenswrapper[4811]: I0122 09:21:41.037086 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-866648ff8f-r22cc" podStartSLOduration=9.626794048 
podStartE2EDuration="13.037069491s" podCreationTimestamp="2026-01-22 09:21:28 +0000 UTC" firstStartedPulling="2026-01-22 09:21:36.313900043 +0000 UTC m=+940.636087165" lastFinishedPulling="2026-01-22 09:21:39.724175484 +0000 UTC m=+944.046362608" observedRunningTime="2026-01-22 09:21:40.976058776 +0000 UTC m=+945.298245899" watchObservedRunningTime="2026-01-22 09:21:41.037069491 +0000 UTC m=+945.359256614" Jan 22 09:21:41 crc kubenswrapper[4811]: I0122 09:21:41.071946 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xs5vv" podStartSLOduration=9.031023976 podStartE2EDuration="13.07192446s" podCreationTimestamp="2026-01-22 09:21:28 +0000 UTC" firstStartedPulling="2026-01-22 09:21:36.329846579 +0000 UTC m=+940.652033702" lastFinishedPulling="2026-01-22 09:21:40.370747063 +0000 UTC m=+944.692934186" observedRunningTime="2026-01-22 09:21:41.016190474 +0000 UTC m=+945.338377596" watchObservedRunningTime="2026-01-22 09:21:41.07192446 +0000 UTC m=+945.394111583" Jan 22 09:21:41 crc kubenswrapper[4811]: I0122 09:21:41.110466 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7c8cfbdccc-5kmj8" podStartSLOduration=3.13505262 podStartE2EDuration="6.110448799s" podCreationTimestamp="2026-01-22 09:21:35 +0000 UTC" firstStartedPulling="2026-01-22 09:21:36.776513678 +0000 UTC m=+941.098700801" lastFinishedPulling="2026-01-22 09:21:39.751909857 +0000 UTC m=+944.074096980" observedRunningTime="2026-01-22 09:21:41.07377007 +0000 UTC m=+945.395957194" watchObservedRunningTime="2026-01-22 09:21:41.110448799 +0000 UTC m=+945.432635921" Jan 22 09:21:41 crc kubenswrapper[4811]: I0122 09:21:41.119857 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zznf\" (UniqueName: \"kubernetes.io/projected/8191719a-7bd8-44c9-9a24-65074b9bfa10-kube-api-access-2zznf\") pod \"cinder-api-0\" (UID: \"8191719a-7bd8-44c9-9a24-65074b9bfa10\") " pod="openstack/cinder-api-0" Jan 22 09:21:41 crc kubenswrapper[4811]: I0122 09:21:41.120070 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8191719a-7bd8-44c9-9a24-65074b9bfa10-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8191719a-7bd8-44c9-9a24-65074b9bfa10\") " pod="openstack/cinder-api-0" Jan 22 09:21:41 crc kubenswrapper[4811]: I0122 09:21:41.120211 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8191719a-7bd8-44c9-9a24-65074b9bfa10-logs\") pod \"cinder-api-0\" (UID: \"8191719a-7bd8-44c9-9a24-65074b9bfa10\") " pod="openstack/cinder-api-0" Jan 22 09:21:41 crc kubenswrapper[4811]: I0122 09:21:41.120296 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8191719a-7bd8-44c9-9a24-65074b9bfa10-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8191719a-7bd8-44c9-9a24-65074b9bfa10\") " pod="openstack/cinder-api-0" Jan 22 09:21:41 crc kubenswrapper[4811]: I0122 09:21:41.120352 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8191719a-7bd8-44c9-9a24-65074b9bfa10-scripts\") pod \"cinder-api-0\" (UID: \"8191719a-7bd8-44c9-9a24-65074b9bfa10\") " pod="openstack/cinder-api-0" Jan 22 09:21:41 crc 
kubenswrapper[4811]: I0122 09:21:41.120375 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8191719a-7bd8-44c9-9a24-65074b9bfa10-public-tls-certs\") pod \"cinder-api-0\" (UID: \"8191719a-7bd8-44c9-9a24-65074b9bfa10\") " pod="openstack/cinder-api-0" Jan 22 09:21:41 crc kubenswrapper[4811]: I0122 09:21:41.120449 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8191719a-7bd8-44c9-9a24-65074b9bfa10-config-data\") pod \"cinder-api-0\" (UID: \"8191719a-7bd8-44c9-9a24-65074b9bfa10\") " pod="openstack/cinder-api-0" Jan 22 09:21:41 crc kubenswrapper[4811]: I0122 09:21:41.120492 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8191719a-7bd8-44c9-9a24-65074b9bfa10-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"8191719a-7bd8-44c9-9a24-65074b9bfa10\") " pod="openstack/cinder-api-0" Jan 22 09:21:41 crc kubenswrapper[4811]: I0122 09:21:41.120582 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8191719a-7bd8-44c9-9a24-65074b9bfa10-config-data-custom\") pod \"cinder-api-0\" (UID: \"8191719a-7bd8-44c9-9a24-65074b9bfa10\") " pod="openstack/cinder-api-0" Jan 22 09:21:41 crc kubenswrapper[4811]: I0122 09:21:41.139730 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-866648ff8f-r22cc"] Jan 22 09:21:41 crc kubenswrapper[4811]: I0122 09:21:41.155132 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5f966bcc4-4n44q" podStartSLOduration=3.230712996 podStartE2EDuration="6.155113813s" podCreationTimestamp="2026-01-22 09:21:35 +0000 UTC" firstStartedPulling="2026-01-22 09:21:36.776595573 +0000 UTC m=+941.098782696" lastFinishedPulling="2026-01-22 09:21:39.70099639 +0000 UTC m=+944.023183513" observedRunningTime="2026-01-22 09:21:41.087516769 +0000 UTC m=+945.409703893" watchObservedRunningTime="2026-01-22 09:21:41.155113813 +0000 UTC m=+945.477300936" Jan 22 09:21:41 crc kubenswrapper[4811]: I0122 09:21:41.180795 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-5c6c94454b-v7bc2"] Jan 22 09:21:41 crc kubenswrapper[4811]: I0122 09:21:41.223595 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8191719a-7bd8-44c9-9a24-65074b9bfa10-config-data-custom\") pod \"cinder-api-0\" (UID: \"8191719a-7bd8-44c9-9a24-65074b9bfa10\") " pod="openstack/cinder-api-0" Jan 22 09:21:41 crc kubenswrapper[4811]: I0122 09:21:41.223724 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zznf\" (UniqueName: \"kubernetes.io/projected/8191719a-7bd8-44c9-9a24-65074b9bfa10-kube-api-access-2zznf\") pod \"cinder-api-0\" (UID: \"8191719a-7bd8-44c9-9a24-65074b9bfa10\") " pod="openstack/cinder-api-0" Jan 22 09:21:41 crc kubenswrapper[4811]: I0122 09:21:41.223797 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8191719a-7bd8-44c9-9a24-65074b9bfa10-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8191719a-7bd8-44c9-9a24-65074b9bfa10\") " 
pod="openstack/cinder-api-0" Jan 22 09:21:41 crc kubenswrapper[4811]: I0122 09:21:41.223898 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8191719a-7bd8-44c9-9a24-65074b9bfa10-logs\") pod \"cinder-api-0\" (UID: \"8191719a-7bd8-44c9-9a24-65074b9bfa10\") " pod="openstack/cinder-api-0" Jan 22 09:21:41 crc kubenswrapper[4811]: I0122 09:21:41.223946 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8191719a-7bd8-44c9-9a24-65074b9bfa10-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8191719a-7bd8-44c9-9a24-65074b9bfa10\") " pod="openstack/cinder-api-0" Jan 22 09:21:41 crc kubenswrapper[4811]: I0122 09:21:41.223978 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8191719a-7bd8-44c9-9a24-65074b9bfa10-scripts\") pod \"cinder-api-0\" (UID: \"8191719a-7bd8-44c9-9a24-65074b9bfa10\") " pod="openstack/cinder-api-0" Jan 22 09:21:41 crc kubenswrapper[4811]: I0122 09:21:41.223997 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8191719a-7bd8-44c9-9a24-65074b9bfa10-public-tls-certs\") pod \"cinder-api-0\" (UID: \"8191719a-7bd8-44c9-9a24-65074b9bfa10\") " pod="openstack/cinder-api-0" Jan 22 09:21:41 crc kubenswrapper[4811]: I0122 09:21:41.224040 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8191719a-7bd8-44c9-9a24-65074b9bfa10-config-data\") pod \"cinder-api-0\" (UID: \"8191719a-7bd8-44c9-9a24-65074b9bfa10\") " pod="openstack/cinder-api-0" Jan 22 09:21:41 crc kubenswrapper[4811]: I0122 09:21:41.224067 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8191719a-7bd8-44c9-9a24-65074b9bfa10-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"8191719a-7bd8-44c9-9a24-65074b9bfa10\") " pod="openstack/cinder-api-0" Jan 22 09:21:41 crc kubenswrapper[4811]: I0122 09:21:41.226938 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8191719a-7bd8-44c9-9a24-65074b9bfa10-logs\") pod \"cinder-api-0\" (UID: \"8191719a-7bd8-44c9-9a24-65074b9bfa10\") " pod="openstack/cinder-api-0" Jan 22 09:21:41 crc kubenswrapper[4811]: I0122 09:21:41.229926 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8191719a-7bd8-44c9-9a24-65074b9bfa10-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8191719a-7bd8-44c9-9a24-65074b9bfa10\") " pod="openstack/cinder-api-0" Jan 22 09:21:41 crc kubenswrapper[4811]: I0122 09:21:41.241319 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8191719a-7bd8-44c9-9a24-65074b9bfa10-public-tls-certs\") pod \"cinder-api-0\" (UID: \"8191719a-7bd8-44c9-9a24-65074b9bfa10\") " pod="openstack/cinder-api-0" Jan 22 09:21:41 crc kubenswrapper[4811]: I0122 09:21:41.242243 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8191719a-7bd8-44c9-9a24-65074b9bfa10-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8191719a-7bd8-44c9-9a24-65074b9bfa10\") " pod="openstack/cinder-api-0" Jan 22 09:21:41 crc kubenswrapper[4811]: 
I0122 09:21:41.242392 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8191719a-7bd8-44c9-9a24-65074b9bfa10-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"8191719a-7bd8-44c9-9a24-65074b9bfa10\") " pod="openstack/cinder-api-0" Jan 22 09:21:41 crc kubenswrapper[4811]: I0122 09:21:41.243862 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8191719a-7bd8-44c9-9a24-65074b9bfa10-config-data-custom\") pod \"cinder-api-0\" (UID: \"8191719a-7bd8-44c9-9a24-65074b9bfa10\") " pod="openstack/cinder-api-0" Jan 22 09:21:41 crc kubenswrapper[4811]: I0122 09:21:41.250280 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8191719a-7bd8-44c9-9a24-65074b9bfa10-scripts\") pod \"cinder-api-0\" (UID: \"8191719a-7bd8-44c9-9a24-65074b9bfa10\") " pod="openstack/cinder-api-0" Jan 22 09:21:41 crc kubenswrapper[4811]: I0122 09:21:41.250533 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8191719a-7bd8-44c9-9a24-65074b9bfa10-config-data\") pod \"cinder-api-0\" (UID: \"8191719a-7bd8-44c9-9a24-65074b9bfa10\") " pod="openstack/cinder-api-0" Jan 22 09:21:41 crc kubenswrapper[4811]: I0122 09:21:41.257169 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zznf\" (UniqueName: \"kubernetes.io/projected/8191719a-7bd8-44c9-9a24-65074b9bfa10-kube-api-access-2zznf\") pod \"cinder-api-0\" (UID: \"8191719a-7bd8-44c9-9a24-65074b9bfa10\") " pod="openstack/cinder-api-0" Jan 22 09:21:41 crc kubenswrapper[4811]: I0122 09:21:41.317747 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 22 09:21:41 crc kubenswrapper[4811]: I0122 09:21:41.788590 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hz9h2"] Jan 22 09:21:41 crc kubenswrapper[4811]: I0122 09:21:41.791935 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hz9h2" Jan 22 09:21:41 crc kubenswrapper[4811]: I0122 09:21:41.808314 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hz9h2"] Jan 22 09:21:41 crc kubenswrapper[4811]: I0122 09:21:41.933903 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/889f85a4-9070-4d9c-82b9-4171d53b035c-utilities\") pod \"redhat-marketplace-hz9h2\" (UID: \"889f85a4-9070-4d9c-82b9-4171d53b035c\") " pod="openshift-marketplace/redhat-marketplace-hz9h2" Jan 22 09:21:41 crc kubenswrapper[4811]: I0122 09:21:41.933980 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2xg5\" (UniqueName: \"kubernetes.io/projected/889f85a4-9070-4d9c-82b9-4171d53b035c-kube-api-access-g2xg5\") pod \"redhat-marketplace-hz9h2\" (UID: \"889f85a4-9070-4d9c-82b9-4171d53b035c\") " pod="openshift-marketplace/redhat-marketplace-hz9h2" Jan 22 09:21:41 crc kubenswrapper[4811]: I0122 09:21:41.934095 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/889f85a4-9070-4d9c-82b9-4171d53b035c-catalog-content\") pod \"redhat-marketplace-hz9h2\" (UID: \"889f85a4-9070-4d9c-82b9-4171d53b035c\") " pod="openshift-marketplace/redhat-marketplace-hz9h2" Jan 22 09:21:41 crc kubenswrapper[4811]: I0122 09:21:41.998970 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d0235ec-7dc3-44f6-ba77-171db91d88b3" path="/var/lib/kubelet/pods/9d0235ec-7dc3-44f6-ba77-171db91d88b3/volumes" Jan 22 09:21:42 crc kubenswrapper[4811]: I0122 09:21:42.035887 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/889f85a4-9070-4d9c-82b9-4171d53b035c-utilities\") pod \"redhat-marketplace-hz9h2\" (UID: \"889f85a4-9070-4d9c-82b9-4171d53b035c\") " pod="openshift-marketplace/redhat-marketplace-hz9h2" Jan 22 09:21:42 crc kubenswrapper[4811]: I0122 09:21:42.035957 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2xg5\" (UniqueName: \"kubernetes.io/projected/889f85a4-9070-4d9c-82b9-4171d53b035c-kube-api-access-g2xg5\") pod \"redhat-marketplace-hz9h2\" (UID: \"889f85a4-9070-4d9c-82b9-4171d53b035c\") " pod="openshift-marketplace/redhat-marketplace-hz9h2" Jan 22 09:21:42 crc kubenswrapper[4811]: I0122 09:21:42.036042 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/889f85a4-9070-4d9c-82b9-4171d53b035c-catalog-content\") pod \"redhat-marketplace-hz9h2\" (UID: \"889f85a4-9070-4d9c-82b9-4171d53b035c\") " pod="openshift-marketplace/redhat-marketplace-hz9h2" Jan 22 09:21:42 crc kubenswrapper[4811]: I0122 09:21:42.036312 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/889f85a4-9070-4d9c-82b9-4171d53b035c-utilities\") pod \"redhat-marketplace-hz9h2\" (UID: \"889f85a4-9070-4d9c-82b9-4171d53b035c\") " pod="openshift-marketplace/redhat-marketplace-hz9h2" Jan 22 09:21:42 crc kubenswrapper[4811]: I0122 09:21:42.036669 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/889f85a4-9070-4d9c-82b9-4171d53b035c-catalog-content\") 
pod \"redhat-marketplace-hz9h2\" (UID: \"889f85a4-9070-4d9c-82b9-4171d53b035c\") " pod="openshift-marketplace/redhat-marketplace-hz9h2" Jan 22 09:21:42 crc kubenswrapper[4811]: I0122 09:21:42.057258 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2xg5\" (UniqueName: \"kubernetes.io/projected/889f85a4-9070-4d9c-82b9-4171d53b035c-kube-api-access-g2xg5\") pod \"redhat-marketplace-hz9h2\" (UID: \"889f85a4-9070-4d9c-82b9-4171d53b035c\") " pod="openshift-marketplace/redhat-marketplace-hz9h2" Jan 22 09:21:42 crc kubenswrapper[4811]: I0122 09:21:42.110907 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hz9h2" Jan 22 09:21:43 crc kubenswrapper[4811]: I0122 09:21:43.041019 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-5c6c94454b-v7bc2" podUID="1bf59625-a642-4155-9e83-46cd7d874f50" containerName="barbican-keystone-listener-log" containerID="cri-o://70687370876fb9c06d492e64f3cbfe43cd971421d7c0d968ae8ae83ad637ec23" gracePeriod=30 Jan 22 09:21:43 crc kubenswrapper[4811]: I0122 09:21:43.041452 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-866648ff8f-r22cc" podUID="01a5613f-ca39-400f-83e5-8c2e04474ce3" containerName="barbican-worker-log" containerID="cri-o://174c9ca94d9320b70b1f3c249d7f442607a11ee44400d4b212a9e12cfd61be75" gracePeriod=30 Jan 22 09:21:43 crc kubenswrapper[4811]: I0122 09:21:43.041723 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-5c6c94454b-v7bc2" podUID="1bf59625-a642-4155-9e83-46cd7d874f50" containerName="barbican-keystone-listener" containerID="cri-o://8ae67b7358dac12b27ea1a3a2b1e227264356434cd73a449f3c836fe68c276b0" gracePeriod=30 Jan 22 09:21:43 crc kubenswrapper[4811]: I0122 09:21:43.041853 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-866648ff8f-r22cc" podUID="01a5613f-ca39-400f-83e5-8c2e04474ce3" containerName="barbican-worker" containerID="cri-o://9d723fb40e3dabcf16caf21044d982948bb4251c0307bc7d8d78b6d2b7c68a78" gracePeriod=30 Jan 22 09:21:44 crc kubenswrapper[4811]: I0122 09:21:44.056750 4811 generic.go:334] "Generic (PLEG): container finished" podID="1bf59625-a642-4155-9e83-46cd7d874f50" containerID="8ae67b7358dac12b27ea1a3a2b1e227264356434cd73a449f3c836fe68c276b0" exitCode=0 Jan 22 09:21:44 crc kubenswrapper[4811]: I0122 09:21:44.057115 4811 generic.go:334] "Generic (PLEG): container finished" podID="1bf59625-a642-4155-9e83-46cd7d874f50" containerID="70687370876fb9c06d492e64f3cbfe43cd971421d7c0d968ae8ae83ad637ec23" exitCode=143 Jan 22 09:21:44 crc kubenswrapper[4811]: I0122 09:21:44.056824 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5c6c94454b-v7bc2" event={"ID":"1bf59625-a642-4155-9e83-46cd7d874f50","Type":"ContainerDied","Data":"8ae67b7358dac12b27ea1a3a2b1e227264356434cd73a449f3c836fe68c276b0"} Jan 22 09:21:44 crc kubenswrapper[4811]: I0122 09:21:44.057186 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5c6c94454b-v7bc2" event={"ID":"1bf59625-a642-4155-9e83-46cd7d874f50","Type":"ContainerDied","Data":"70687370876fb9c06d492e64f3cbfe43cd971421d7c0d968ae8ae83ad637ec23"} Jan 22 09:21:44 crc kubenswrapper[4811]: I0122 09:21:44.059376 4811 generic.go:334] "Generic (PLEG): container finished" 
podID="01a5613f-ca39-400f-83e5-8c2e04474ce3" containerID="9d723fb40e3dabcf16caf21044d982948bb4251c0307bc7d8d78b6d2b7c68a78" exitCode=0 Jan 22 09:21:44 crc kubenswrapper[4811]: I0122 09:21:44.059395 4811 generic.go:334] "Generic (PLEG): container finished" podID="01a5613f-ca39-400f-83e5-8c2e04474ce3" containerID="174c9ca94d9320b70b1f3c249d7f442607a11ee44400d4b212a9e12cfd61be75" exitCode=143 Jan 22 09:21:44 crc kubenswrapper[4811]: I0122 09:21:44.059411 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-866648ff8f-r22cc" event={"ID":"01a5613f-ca39-400f-83e5-8c2e04474ce3","Type":"ContainerDied","Data":"9d723fb40e3dabcf16caf21044d982948bb4251c0307bc7d8d78b6d2b7c68a78"} Jan 22 09:21:44 crc kubenswrapper[4811]: I0122 09:21:44.059431 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-866648ff8f-r22cc" event={"ID":"01a5613f-ca39-400f-83e5-8c2e04474ce3","Type":"ContainerDied","Data":"174c9ca94d9320b70b1f3c249d7f442607a11ee44400d4b212a9e12cfd61be75"} Jan 22 09:21:45 crc kubenswrapper[4811]: I0122 09:21:45.439353 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7757dc6854-tnspq" Jan 22 09:21:45 crc kubenswrapper[4811]: I0122 09:21:45.689883 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 22 09:21:45 crc kubenswrapper[4811]: I0122 09:21:45.787899 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7474d577dc-bfmkd" Jan 22 09:21:45 crc kubenswrapper[4811]: I0122 09:21:45.861032 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d87b7c6dc-f7kdt"] Jan 22 09:21:45 crc kubenswrapper[4811]: I0122 09:21:45.861311 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d87b7c6dc-f7kdt" podUID="36b87b12-1080-46b8-a342-b9e743377f23" containerName="dnsmasq-dns" containerID="cri-o://70a86b019d8d490a8376bf72bdea67e48c1425e64b7851debe6637a69c5e0ecc" gracePeriod=10 Jan 22 09:21:45 crc kubenswrapper[4811]: I0122 09:21:45.973911 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 22 09:21:46 crc kubenswrapper[4811]: I0122 09:21:46.093553 4811 generic.go:334] "Generic (PLEG): container finished" podID="36b87b12-1080-46b8-a342-b9e743377f23" containerID="70a86b019d8d490a8376bf72bdea67e48c1425e64b7851debe6637a69c5e0ecc" exitCode=0 Jan 22 09:21:46 crc kubenswrapper[4811]: I0122 09:21:46.093604 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d87b7c6dc-f7kdt" event={"ID":"36b87b12-1080-46b8-a342-b9e743377f23","Type":"ContainerDied","Data":"70a86b019d8d490a8376bf72bdea67e48c1425e64b7851debe6637a69c5e0ecc"} Jan 22 09:21:46 crc kubenswrapper[4811]: I0122 09:21:46.130913 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 09:21:46 crc kubenswrapper[4811]: I0122 09:21:46.166986 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7757dc6854-tnspq" Jan 22 09:21:47 crc kubenswrapper[4811]: I0122 09:21:47.101761 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="80f3e64e-0e9f-49b8-9c7e-78e08634ba92" containerName="cinder-scheduler" containerID="cri-o://074abccf97b616cd189f81d3e407c934a9e30ad015215dd231c297a4cd13f288" gracePeriod=30 Jan 22 09:21:47 crc 
kubenswrapper[4811]: I0122 09:21:47.102309 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="80f3e64e-0e9f-49b8-9c7e-78e08634ba92" containerName="probe" containerID="cri-o://33c2413d57b56626dabcb11ae8b5afefb9da4a8234a3b363b7b9fd22d0c9714d" gracePeriod=30 Jan 22 09:21:47 crc kubenswrapper[4811]: I0122 09:21:47.622796 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5d87b7c6dc-f7kdt" podUID="36b87b12-1080-46b8-a342-b9e743377f23" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.132:5353: connect: connection refused" Jan 22 09:21:47 crc kubenswrapper[4811]: I0122 09:21:47.735277 4811 scope.go:117] "RemoveContainer" containerID="9dce536a3e8b57c5a1cad8c8feb7c3df19e73b7dd34f32d4775b93afeafa6fb0" Jan 22 09:21:48 crc kubenswrapper[4811]: E0122 09:21:48.236277 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="173400f9-c99e-4737-b27c-cff0bdb5ee94" Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.365831 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-866648ff8f-r22cc" Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.379760 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5c6c94454b-v7bc2" Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.458281 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01a5613f-ca39-400f-83e5-8c2e04474ce3-combined-ca-bundle\") pod \"01a5613f-ca39-400f-83e5-8c2e04474ce3\" (UID: \"01a5613f-ca39-400f-83e5-8c2e04474ce3\") " Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.458400 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01a5613f-ca39-400f-83e5-8c2e04474ce3-logs\") pod \"01a5613f-ca39-400f-83e5-8c2e04474ce3\" (UID: \"01a5613f-ca39-400f-83e5-8c2e04474ce3\") " Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.458474 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01a5613f-ca39-400f-83e5-8c2e04474ce3-config-data\") pod \"01a5613f-ca39-400f-83e5-8c2e04474ce3\" (UID: \"01a5613f-ca39-400f-83e5-8c2e04474ce3\") " Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.458583 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlf7k\" (UniqueName: \"kubernetes.io/projected/01a5613f-ca39-400f-83e5-8c2e04474ce3-kube-api-access-hlf7k\") pod \"01a5613f-ca39-400f-83e5-8c2e04474ce3\" (UID: \"01a5613f-ca39-400f-83e5-8c2e04474ce3\") " Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.458674 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/01a5613f-ca39-400f-83e5-8c2e04474ce3-config-data-custom\") pod \"01a5613f-ca39-400f-83e5-8c2e04474ce3\" (UID: \"01a5613f-ca39-400f-83e5-8c2e04474ce3\") " Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.461430 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01a5613f-ca39-400f-83e5-8c2e04474ce3-logs" 
(OuterVolumeSpecName: "logs") pod "01a5613f-ca39-400f-83e5-8c2e04474ce3" (UID: "01a5613f-ca39-400f-83e5-8c2e04474ce3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.469184 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01a5613f-ca39-400f-83e5-8c2e04474ce3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "01a5613f-ca39-400f-83e5-8c2e04474ce3" (UID: "01a5613f-ca39-400f-83e5-8c2e04474ce3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.469326 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01a5613f-ca39-400f-83e5-8c2e04474ce3-kube-api-access-hlf7k" (OuterVolumeSpecName: "kube-api-access-hlf7k") pod "01a5613f-ca39-400f-83e5-8c2e04474ce3" (UID: "01a5613f-ca39-400f-83e5-8c2e04474ce3"). InnerVolumeSpecName "kube-api-access-hlf7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.503402 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01a5613f-ca39-400f-83e5-8c2e04474ce3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "01a5613f-ca39-400f-83e5-8c2e04474ce3" (UID: "01a5613f-ca39-400f-83e5-8c2e04474ce3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.505062 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d87b7c6dc-f7kdt" Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.546642 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01a5613f-ca39-400f-83e5-8c2e04474ce3-config-data" (OuterVolumeSpecName: "config-data") pod "01a5613f-ca39-400f-83e5-8c2e04474ce3" (UID: "01a5613f-ca39-400f-83e5-8c2e04474ce3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.563177 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bf59625-a642-4155-9e83-46cd7d874f50-combined-ca-bundle\") pod \"1bf59625-a642-4155-9e83-46cd7d874f50\" (UID: \"1bf59625-a642-4155-9e83-46cd7d874f50\") " Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.564203 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bf59625-a642-4155-9e83-46cd7d874f50-config-data\") pod \"1bf59625-a642-4155-9e83-46cd7d874f50\" (UID: \"1bf59625-a642-4155-9e83-46cd7d874f50\") " Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.564584 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bf59625-a642-4155-9e83-46cd7d874f50-logs\") pod \"1bf59625-a642-4155-9e83-46cd7d874f50\" (UID: \"1bf59625-a642-4155-9e83-46cd7d874f50\") " Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.564908 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmvl4\" (UniqueName: \"kubernetes.io/projected/1bf59625-a642-4155-9e83-46cd7d874f50-kube-api-access-kmvl4\") pod \"1bf59625-a642-4155-9e83-46cd7d874f50\" (UID: \"1bf59625-a642-4155-9e83-46cd7d874f50\") " Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.565212 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1bf59625-a642-4155-9e83-46cd7d874f50-config-data-custom\") pod \"1bf59625-a642-4155-9e83-46cd7d874f50\" (UID: \"1bf59625-a642-4155-9e83-46cd7d874f50\") " Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.569357 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01a5613f-ca39-400f-83e5-8c2e04474ce3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.569410 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01a5613f-ca39-400f-83e5-8c2e04474ce3-logs\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.569459 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01a5613f-ca39-400f-83e5-8c2e04474ce3-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.569497 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlf7k\" (UniqueName: \"kubernetes.io/projected/01a5613f-ca39-400f-83e5-8c2e04474ce3-kube-api-access-hlf7k\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.569509 4811 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/01a5613f-ca39-400f-83e5-8c2e04474ce3-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.571414 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bf59625-a642-4155-9e83-46cd7d874f50-logs" (OuterVolumeSpecName: "logs") pod "1bf59625-a642-4155-9e83-46cd7d874f50" (UID: "1bf59625-a642-4155-9e83-46cd7d874f50"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.581045 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf59625-a642-4155-9e83-46cd7d874f50-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1bf59625-a642-4155-9e83-46cd7d874f50" (UID: "1bf59625-a642-4155-9e83-46cd7d874f50"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.592428 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf59625-a642-4155-9e83-46cd7d874f50-kube-api-access-kmvl4" (OuterVolumeSpecName: "kube-api-access-kmvl4") pod "1bf59625-a642-4155-9e83-46cd7d874f50" (UID: "1bf59625-a642-4155-9e83-46cd7d874f50"). InnerVolumeSpecName "kube-api-access-kmvl4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.642667 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf59625-a642-4155-9e83-46cd7d874f50-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1bf59625-a642-4155-9e83-46cd7d874f50" (UID: "1bf59625-a642-4155-9e83-46cd7d874f50"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.655773 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-76dcc8f4f4-dv85h" Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.670350 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b87b12-1080-46b8-a342-b9e743377f23-config\") pod \"36b87b12-1080-46b8-a342-b9e743377f23\" (UID: \"36b87b12-1080-46b8-a342-b9e743377f23\") " Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.671590 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36b87b12-1080-46b8-a342-b9e743377f23-ovsdbserver-sb\") pod \"36b87b12-1080-46b8-a342-b9e743377f23\" (UID: \"36b87b12-1080-46b8-a342-b9e743377f23\") " Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.671874 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36b87b12-1080-46b8-a342-b9e743377f23-ovsdbserver-nb\") pod \"36b87b12-1080-46b8-a342-b9e743377f23\" (UID: \"36b87b12-1080-46b8-a342-b9e743377f23\") " Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.671988 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8d7l\" (UniqueName: \"kubernetes.io/projected/36b87b12-1080-46b8-a342-b9e743377f23-kube-api-access-l8d7l\") pod \"36b87b12-1080-46b8-a342-b9e743377f23\" (UID: \"36b87b12-1080-46b8-a342-b9e743377f23\") " Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.672005 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36b87b12-1080-46b8-a342-b9e743377f23-dns-svc\") pod \"36b87b12-1080-46b8-a342-b9e743377f23\" (UID: \"36b87b12-1080-46b8-a342-b9e743377f23\") " Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.672834 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/1bf59625-a642-4155-9e83-46cd7d874f50-logs\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.672847 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmvl4\" (UniqueName: \"kubernetes.io/projected/1bf59625-a642-4155-9e83-46cd7d874f50-kube-api-access-kmvl4\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.672857 4811 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1bf59625-a642-4155-9e83-46cd7d874f50-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.672867 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bf59625-a642-4155-9e83-46cd7d874f50-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.686916 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf59625-a642-4155-9e83-46cd7d874f50-config-data" (OuterVolumeSpecName: "config-data") pod "1bf59625-a642-4155-9e83-46cd7d874f50" (UID: "1bf59625-a642-4155-9e83-46cd7d874f50"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.701991 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36b87b12-1080-46b8-a342-b9e743377f23-kube-api-access-l8d7l" (OuterVolumeSpecName: "kube-api-access-l8d7l") pod "36b87b12-1080-46b8-a342-b9e743377f23" (UID: "36b87b12-1080-46b8-a342-b9e743377f23"). InnerVolumeSpecName "kube-api-access-l8d7l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.728879 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-76dcc8f4f4-dv85h" Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.729670 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36b87b12-1080-46b8-a342-b9e743377f23-config" (OuterVolumeSpecName: "config") pod "36b87b12-1080-46b8-a342-b9e743377f23" (UID: "36b87b12-1080-46b8-a342-b9e743377f23"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.734835 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36b87b12-1080-46b8-a342-b9e743377f23-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "36b87b12-1080-46b8-a342-b9e743377f23" (UID: "36b87b12-1080-46b8-a342-b9e743377f23"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.750941 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36b87b12-1080-46b8-a342-b9e743377f23-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "36b87b12-1080-46b8-a342-b9e743377f23" (UID: "36b87b12-1080-46b8-a342-b9e743377f23"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.769133 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36b87b12-1080-46b8-a342-b9e743377f23-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "36b87b12-1080-46b8-a342-b9e743377f23" (UID: "36b87b12-1080-46b8-a342-b9e743377f23"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.775820 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8d7l\" (UniqueName: \"kubernetes.io/projected/36b87b12-1080-46b8-a342-b9e743377f23-kube-api-access-l8d7l\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.775848 4811 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36b87b12-1080-46b8-a342-b9e743377f23-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.775860 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b87b12-1080-46b8-a342-b9e743377f23-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.775870 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36b87b12-1080-46b8-a342-b9e743377f23-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.775879 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bf59625-a642-4155-9e83-46cd7d874f50-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.775888 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36b87b12-1080-46b8-a342-b9e743377f23-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.807339 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7757dc6854-tnspq"] Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.807573 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7757dc6854-tnspq" podUID="49ce92f5-a20a-4429-89a1-764b3db3e28a" containerName="barbican-api-log" containerID="cri-o://7b5b159f764e828b353e7ad7a1a67bcbaaebf098aaa6bc41d5033425dd503506" gracePeriod=30 Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.807825 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7757dc6854-tnspq" podUID="49ce92f5-a20a-4429-89a1-764b3db3e28a" containerName="barbican-api" containerID="cri-o://db6ab904511cf6e9244e2d0d01cf6c92b3afe6a4ed5ca4f1d4b6334f256a171b" gracePeriod=30 Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.827406 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7757dc6854-tnspq" podUID="49ce92f5-a20a-4429-89a1-764b3db3e28a" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.142:9311/healthcheck\": EOF" Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.827780 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7757dc6854-tnspq" podUID="49ce92f5-a20a-4429-89a1-764b3db3e28a" containerName="barbican-api" probeResult="failure" output="Get 
\"http://10.217.0.142:9311/healthcheck\": EOF" Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.878857 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 22 09:21:48 crc kubenswrapper[4811]: I0122 09:21:48.894392 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hz9h2"] Jan 22 09:21:49 crc kubenswrapper[4811]: I0122 09:21:49.179497 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5c6c94454b-v7bc2" event={"ID":"1bf59625-a642-4155-9e83-46cd7d874f50","Type":"ContainerDied","Data":"3e0d41bb2c4f4956b8e2a25ffb6d7de5a7cd8eba4e9c0d2515459afec0616707"} Jan 22 09:21:49 crc kubenswrapper[4811]: I0122 09:21:49.179543 4811 scope.go:117] "RemoveContainer" containerID="8ae67b7358dac12b27ea1a3a2b1e227264356434cd73a449f3c836fe68c276b0" Jan 22 09:21:49 crc kubenswrapper[4811]: I0122 09:21:49.179797 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5c6c94454b-v7bc2" Jan 22 09:21:49 crc kubenswrapper[4811]: I0122 09:21:49.197804 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d87b7c6dc-f7kdt" event={"ID":"36b87b12-1080-46b8-a342-b9e743377f23","Type":"ContainerDied","Data":"88b2d28ad766be5a779a026c7dade30d407225debb9822dd56bc8ee4167560e5"} Jan 22 09:21:49 crc kubenswrapper[4811]: I0122 09:21:49.197837 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d87b7c6dc-f7kdt" Jan 22 09:21:49 crc kubenswrapper[4811]: I0122 09:21:49.206435 4811 generic.go:334] "Generic (PLEG): container finished" podID="49ce92f5-a20a-4429-89a1-764b3db3e28a" containerID="7b5b159f764e828b353e7ad7a1a67bcbaaebf098aaa6bc41d5033425dd503506" exitCode=143 Jan 22 09:21:49 crc kubenswrapper[4811]: I0122 09:21:49.207243 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7757dc6854-tnspq" event={"ID":"49ce92f5-a20a-4429-89a1-764b3db3e28a","Type":"ContainerDied","Data":"7b5b159f764e828b353e7ad7a1a67bcbaaebf098aaa6bc41d5033425dd503506"} Jan 22 09:21:49 crc kubenswrapper[4811]: I0122 09:21:49.223275 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="173400f9-c99e-4737-b27c-cff0bdb5ee94" containerName="ceilometer-notification-agent" containerID="cri-o://1ddbd4da0737b0e060323ce04e1e1091447ceeddb176ab83d32f3f37cfb10b20" gracePeriod=30 Jan 22 09:21:49 crc kubenswrapper[4811]: I0122 09:21:49.223919 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="173400f9-c99e-4737-b27c-cff0bdb5ee94" containerName="proxy-httpd" containerID="cri-o://685e13d936d2f908fb5ba170a6ac272da77484dbb28475b2621c845b2f036ed2" gracePeriod=30 Jan 22 09:21:49 crc kubenswrapper[4811]: I0122 09:21:49.224020 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="173400f9-c99e-4737-b27c-cff0bdb5ee94" containerName="sg-core" containerID="cri-o://d5579584315ce5bf7b0ba9cd5484b6e73082240e3a580577b72df6d7a77f1636" gracePeriod=30 Jan 22 09:21:49 crc kubenswrapper[4811]: I0122 09:21:49.224798 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"173400f9-c99e-4737-b27c-cff0bdb5ee94","Type":"ContainerStarted","Data":"685e13d936d2f908fb5ba170a6ac272da77484dbb28475b2621c845b2f036ed2"} Jan 22 09:21:49 crc kubenswrapper[4811]: I0122 
09:21:49.224847 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 22 09:21:49 crc kubenswrapper[4811]: I0122 09:21:49.238815 4811 generic.go:334] "Generic (PLEG): container finished" podID="80f3e64e-0e9f-49b8-9c7e-78e08634ba92" containerID="33c2413d57b56626dabcb11ae8b5afefb9da4a8234a3b363b7b9fd22d0c9714d" exitCode=0 Jan 22 09:21:49 crc kubenswrapper[4811]: I0122 09:21:49.238887 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"80f3e64e-0e9f-49b8-9c7e-78e08634ba92","Type":"ContainerDied","Data":"33c2413d57b56626dabcb11ae8b5afefb9da4a8234a3b363b7b9fd22d0c9714d"} Jan 22 09:21:49 crc kubenswrapper[4811]: I0122 09:21:49.250322 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-5c6c94454b-v7bc2"] Jan 22 09:21:49 crc kubenswrapper[4811]: I0122 09:21:49.256124 4811 scope.go:117] "RemoveContainer" containerID="70687370876fb9c06d492e64f3cbfe43cd971421d7c0d968ae8ae83ad637ec23" Jan 22 09:21:49 crc kubenswrapper[4811]: I0122 09:21:49.266199 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-866648ff8f-r22cc" event={"ID":"01a5613f-ca39-400f-83e5-8c2e04474ce3","Type":"ContainerDied","Data":"30a41faaeb381e1073d802aabc45d1059f9a87d0e53e785dec1433ed9b7527a0"} Jan 22 09:21:49 crc kubenswrapper[4811]: I0122 09:21:49.266272 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-866648ff8f-r22cc" Jan 22 09:21:49 crc kubenswrapper[4811]: I0122 09:21:49.274126 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8191719a-7bd8-44c9-9a24-65074b9bfa10","Type":"ContainerStarted","Data":"ffa1a61c4650ef40546d7cd84d7a55436abdf77f948efea74461666824ebe8b7"} Jan 22 09:21:49 crc kubenswrapper[4811]: I0122 09:21:49.279662 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-5c6c94454b-v7bc2"] Jan 22 09:21:49 crc kubenswrapper[4811]: I0122 09:21:49.297445 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xs5vv" Jan 22 09:21:49 crc kubenswrapper[4811]: I0122 09:21:49.297489 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xs5vv" Jan 22 09:21:49 crc kubenswrapper[4811]: I0122 09:21:49.302605 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hz9h2" event={"ID":"889f85a4-9070-4d9c-82b9-4171d53b035c","Type":"ContainerStarted","Data":"807f35e367407114fa99cc408c2a9cccbdb7e45a6dd121415c2cbed3051b45b9"} Jan 22 09:21:49 crc kubenswrapper[4811]: I0122 09:21:49.323713 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d87b7c6dc-f7kdt"] Jan 22 09:21:49 crc kubenswrapper[4811]: I0122 09:21:49.336886 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d87b7c6dc-f7kdt"] Jan 22 09:21:49 crc kubenswrapper[4811]: I0122 09:21:49.352527 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-866648ff8f-r22cc"] Jan 22 09:21:49 crc kubenswrapper[4811]: I0122 09:21:49.361590 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-866648ff8f-r22cc"] Jan 22 09:21:49 crc kubenswrapper[4811]: I0122 09:21:49.389288 4811 scope.go:117] "RemoveContainer" 
containerID="70a86b019d8d490a8376bf72bdea67e48c1425e64b7851debe6637a69c5e0ecc" Jan 22 09:21:49 crc kubenswrapper[4811]: I0122 09:21:49.423205 4811 scope.go:117] "RemoveContainer" containerID="2a5d01c985f836de9107beeb271bdbe89b6d281af17d2eeca1f2d783a8f922bf" Jan 22 09:21:49 crc kubenswrapper[4811]: I0122 09:21:49.452766 4811 scope.go:117] "RemoveContainer" containerID="9d723fb40e3dabcf16caf21044d982948bb4251c0307bc7d8d78b6d2b7c68a78" Jan 22 09:21:49 crc kubenswrapper[4811]: I0122 09:21:49.474184 4811 scope.go:117] "RemoveContainer" containerID="174c9ca94d9320b70b1f3c249d7f442607a11ee44400d4b212a9e12cfd61be75" Jan 22 09:21:50 crc kubenswrapper[4811]: I0122 09:21:50.008840 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01a5613f-ca39-400f-83e5-8c2e04474ce3" path="/var/lib/kubelet/pods/01a5613f-ca39-400f-83e5-8c2e04474ce3/volumes" Jan 22 09:21:50 crc kubenswrapper[4811]: I0122 09:21:50.013284 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf59625-a642-4155-9e83-46cd7d874f50" path="/var/lib/kubelet/pods/1bf59625-a642-4155-9e83-46cd7d874f50/volumes" Jan 22 09:21:50 crc kubenswrapper[4811]: I0122 09:21:50.017925 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36b87b12-1080-46b8-a342-b9e743377f23" path="/var/lib/kubelet/pods/36b87b12-1080-46b8-a342-b9e743377f23/volumes" Jan 22 09:21:50 crc kubenswrapper[4811]: I0122 09:21:50.311587 4811 generic.go:334] "Generic (PLEG): container finished" podID="173400f9-c99e-4737-b27c-cff0bdb5ee94" containerID="685e13d936d2f908fb5ba170a6ac272da77484dbb28475b2621c845b2f036ed2" exitCode=0 Jan 22 09:21:50 crc kubenswrapper[4811]: I0122 09:21:50.311615 4811 generic.go:334] "Generic (PLEG): container finished" podID="173400f9-c99e-4737-b27c-cff0bdb5ee94" containerID="d5579584315ce5bf7b0ba9cd5484b6e73082240e3a580577b72df6d7a77f1636" exitCode=2 Jan 22 09:21:50 crc kubenswrapper[4811]: I0122 09:21:50.311652 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"173400f9-c99e-4737-b27c-cff0bdb5ee94","Type":"ContainerDied","Data":"685e13d936d2f908fb5ba170a6ac272da77484dbb28475b2621c845b2f036ed2"} Jan 22 09:21:50 crc kubenswrapper[4811]: I0122 09:21:50.311692 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"173400f9-c99e-4737-b27c-cff0bdb5ee94","Type":"ContainerDied","Data":"d5579584315ce5bf7b0ba9cd5484b6e73082240e3a580577b72df6d7a77f1636"} Jan 22 09:21:50 crc kubenswrapper[4811]: I0122 09:21:50.314665 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8191719a-7bd8-44c9-9a24-65074b9bfa10","Type":"ContainerStarted","Data":"ec9c0c2614f75d3aa21147e251b0fb2a81b1669ac138eefe83475098cf3d3d50"} Jan 22 09:21:50 crc kubenswrapper[4811]: I0122 09:21:50.314702 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8191719a-7bd8-44c9-9a24-65074b9bfa10","Type":"ContainerStarted","Data":"b3ebe764dae2506aad0c0ae08fae57236dbec2747389bfe4c45e16390cb5e90d"} Jan 22 09:21:50 crc kubenswrapper[4811]: I0122 09:21:50.314824 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 22 09:21:50 crc kubenswrapper[4811]: I0122 09:21:50.316119 4811 generic.go:334] "Generic (PLEG): container finished" podID="889f85a4-9070-4d9c-82b9-4171d53b035c" containerID="9c3cd1c69938832def8b5ce1bba5ff1e3d689fa3e524fe8b015a1cd1d93b988f" exitCode=0 Jan 22 09:21:50 crc kubenswrapper[4811]: I0122 09:21:50.316193 4811 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hz9h2" event={"ID":"889f85a4-9070-4d9c-82b9-4171d53b035c","Type":"ContainerDied","Data":"9c3cd1c69938832def8b5ce1bba5ff1e3d689fa3e524fe8b015a1cd1d93b988f"} Jan 22 09:21:50 crc kubenswrapper[4811]: I0122 09:21:50.334175 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=10.334165022 podStartE2EDuration="10.334165022s" podCreationTimestamp="2026-01-22 09:21:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:21:50.330585574 +0000 UTC m=+954.652772696" watchObservedRunningTime="2026-01-22 09:21:50.334165022 +0000 UTC m=+954.656352146" Jan 22 09:21:50 crc kubenswrapper[4811]: I0122 09:21:50.373393 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xs5vv" podUID="8e75e74d-a6bf-48c5-bedb-8fb30806d29e" containerName="registry-server" probeResult="failure" output=< Jan 22 09:21:50 crc kubenswrapper[4811]: timeout: failed to connect service ":50051" within 1s Jan 22 09:21:50 crc kubenswrapper[4811]: > Jan 22 09:21:51 crc kubenswrapper[4811]: I0122 09:21:51.332477 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hz9h2" event={"ID":"889f85a4-9070-4d9c-82b9-4171d53b035c","Type":"ContainerStarted","Data":"8628ce5153686113eb35cc27ba826d5ea5ff318b459e8a25ba07eda13f1ca283"} Jan 22 09:21:51 crc kubenswrapper[4811]: I0122 09:21:51.895085 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.062538 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80f3e64e-0e9f-49b8-9c7e-78e08634ba92-scripts\") pod \"80f3e64e-0e9f-49b8-9c7e-78e08634ba92\" (UID: \"80f3e64e-0e9f-49b8-9c7e-78e08634ba92\") " Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.062707 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvk5z\" (UniqueName: \"kubernetes.io/projected/80f3e64e-0e9f-49b8-9c7e-78e08634ba92-kube-api-access-xvk5z\") pod \"80f3e64e-0e9f-49b8-9c7e-78e08634ba92\" (UID: \"80f3e64e-0e9f-49b8-9c7e-78e08634ba92\") " Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.062726 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80f3e64e-0e9f-49b8-9c7e-78e08634ba92-config-data\") pod \"80f3e64e-0e9f-49b8-9c7e-78e08634ba92\" (UID: \"80f3e64e-0e9f-49b8-9c7e-78e08634ba92\") " Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.062770 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80f3e64e-0e9f-49b8-9c7e-78e08634ba92-combined-ca-bundle\") pod \"80f3e64e-0e9f-49b8-9c7e-78e08634ba92\" (UID: \"80f3e64e-0e9f-49b8-9c7e-78e08634ba92\") " Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.062794 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/80f3e64e-0e9f-49b8-9c7e-78e08634ba92-etc-machine-id\") pod \"80f3e64e-0e9f-49b8-9c7e-78e08634ba92\" (UID: \"80f3e64e-0e9f-49b8-9c7e-78e08634ba92\") " Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.062965 
4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80f3e64e-0e9f-49b8-9c7e-78e08634ba92-config-data-custom\") pod \"80f3e64e-0e9f-49b8-9c7e-78e08634ba92\" (UID: \"80f3e64e-0e9f-49b8-9c7e-78e08634ba92\") " Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.063706 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80f3e64e-0e9f-49b8-9c7e-78e08634ba92-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "80f3e64e-0e9f-49b8-9c7e-78e08634ba92" (UID: "80f3e64e-0e9f-49b8-9c7e-78e08634ba92"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.068749 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80f3e64e-0e9f-49b8-9c7e-78e08634ba92-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "80f3e64e-0e9f-49b8-9c7e-78e08634ba92" (UID: "80f3e64e-0e9f-49b8-9c7e-78e08634ba92"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.073115 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80f3e64e-0e9f-49b8-9c7e-78e08634ba92-scripts" (OuterVolumeSpecName: "scripts") pod "80f3e64e-0e9f-49b8-9c7e-78e08634ba92" (UID: "80f3e64e-0e9f-49b8-9c7e-78e08634ba92"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.074478 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80f3e64e-0e9f-49b8-9c7e-78e08634ba92-kube-api-access-xvk5z" (OuterVolumeSpecName: "kube-api-access-xvk5z") pod "80f3e64e-0e9f-49b8-9c7e-78e08634ba92" (UID: "80f3e64e-0e9f-49b8-9c7e-78e08634ba92"). InnerVolumeSpecName "kube-api-access-xvk5z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.114294 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80f3e64e-0e9f-49b8-9c7e-78e08634ba92-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "80f3e64e-0e9f-49b8-9c7e-78e08634ba92" (UID: "80f3e64e-0e9f-49b8-9c7e-78e08634ba92"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.141336 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80f3e64e-0e9f-49b8-9c7e-78e08634ba92-config-data" (OuterVolumeSpecName: "config-data") pod "80f3e64e-0e9f-49b8-9c7e-78e08634ba92" (UID: "80f3e64e-0e9f-49b8-9c7e-78e08634ba92"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.168132 4811 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80f3e64e-0e9f-49b8-9c7e-78e08634ba92-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.168167 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80f3e64e-0e9f-49b8-9c7e-78e08634ba92-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.168176 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvk5z\" (UniqueName: \"kubernetes.io/projected/80f3e64e-0e9f-49b8-9c7e-78e08634ba92-kube-api-access-xvk5z\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.168186 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80f3e64e-0e9f-49b8-9c7e-78e08634ba92-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.168195 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80f3e64e-0e9f-49b8-9c7e-78e08634ba92-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.168205 4811 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/80f3e64e-0e9f-49b8-9c7e-78e08634ba92-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.342007 4811 generic.go:334] "Generic (PLEG): container finished" podID="80f3e64e-0e9f-49b8-9c7e-78e08634ba92" containerID="074abccf97b616cd189f81d3e407c934a9e30ad015215dd231c297a4cd13f288" exitCode=0 Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.342089 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.342421 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"80f3e64e-0e9f-49b8-9c7e-78e08634ba92","Type":"ContainerDied","Data":"074abccf97b616cd189f81d3e407c934a9e30ad015215dd231c297a4cd13f288"} Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.342506 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"80f3e64e-0e9f-49b8-9c7e-78e08634ba92","Type":"ContainerDied","Data":"23dedf52d26f84cc7cb8cd631e680d17293d66219c79afbc932b424388e23914"} Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.342527 4811 scope.go:117] "RemoveContainer" containerID="33c2413d57b56626dabcb11ae8b5afefb9da4a8234a3b363b7b9fd22d0c9714d" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.344158 4811 generic.go:334] "Generic (PLEG): container finished" podID="889f85a4-9070-4d9c-82b9-4171d53b035c" containerID="8628ce5153686113eb35cc27ba826d5ea5ff318b459e8a25ba07eda13f1ca283" exitCode=0 Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.344204 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hz9h2" event={"ID":"889f85a4-9070-4d9c-82b9-4171d53b035c","Type":"ContainerDied","Data":"8628ce5153686113eb35cc27ba826d5ea5ff318b459e8a25ba07eda13f1ca283"} Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.361681 4811 scope.go:117] "RemoveContainer" containerID="074abccf97b616cd189f81d3e407c934a9e30ad015215dd231c297a4cd13f288" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.388866 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.394703 4811 scope.go:117] "RemoveContainer" containerID="33c2413d57b56626dabcb11ae8b5afefb9da4a8234a3b363b7b9fd22d0c9714d" Jan 22 09:21:52 crc kubenswrapper[4811]: E0122 09:21:52.397352 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33c2413d57b56626dabcb11ae8b5afefb9da4a8234a3b363b7b9fd22d0c9714d\": container with ID starting with 33c2413d57b56626dabcb11ae8b5afefb9da4a8234a3b363b7b9fd22d0c9714d not found: ID does not exist" containerID="33c2413d57b56626dabcb11ae8b5afefb9da4a8234a3b363b7b9fd22d0c9714d" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.397396 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33c2413d57b56626dabcb11ae8b5afefb9da4a8234a3b363b7b9fd22d0c9714d"} err="failed to get container status \"33c2413d57b56626dabcb11ae8b5afefb9da4a8234a3b363b7b9fd22d0c9714d\": rpc error: code = NotFound desc = could not find container \"33c2413d57b56626dabcb11ae8b5afefb9da4a8234a3b363b7b9fd22d0c9714d\": container with ID starting with 33c2413d57b56626dabcb11ae8b5afefb9da4a8234a3b363b7b9fd22d0c9714d not found: ID does not exist" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.397419 4811 scope.go:117] "RemoveContainer" containerID="074abccf97b616cd189f81d3e407c934a9e30ad015215dd231c297a4cd13f288" Jan 22 09:21:52 crc kubenswrapper[4811]: E0122 09:21:52.397679 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"074abccf97b616cd189f81d3e407c934a9e30ad015215dd231c297a4cd13f288\": container with ID starting with 074abccf97b616cd189f81d3e407c934a9e30ad015215dd231c297a4cd13f288 not found: ID does not exist" 
containerID="074abccf97b616cd189f81d3e407c934a9e30ad015215dd231c297a4cd13f288" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.397703 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"074abccf97b616cd189f81d3e407c934a9e30ad015215dd231c297a4cd13f288"} err="failed to get container status \"074abccf97b616cd189f81d3e407c934a9e30ad015215dd231c297a4cd13f288\": rpc error: code = NotFound desc = could not find container \"074abccf97b616cd189f81d3e407c934a9e30ad015215dd231c297a4cd13f288\": container with ID starting with 074abccf97b616cd189f81d3e407c934a9e30ad015215dd231c297a4cd13f288 not found: ID does not exist" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.404922 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.432194 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 09:21:52 crc kubenswrapper[4811]: E0122 09:21:52.432500 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bf59625-a642-4155-9e83-46cd7d874f50" containerName="barbican-keystone-listener" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.432520 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bf59625-a642-4155-9e83-46cd7d874f50" containerName="barbican-keystone-listener" Jan 22 09:21:52 crc kubenswrapper[4811]: E0122 09:21:52.432533 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80f3e64e-0e9f-49b8-9c7e-78e08634ba92" containerName="probe" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.432539 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="80f3e64e-0e9f-49b8-9c7e-78e08634ba92" containerName="probe" Jan 22 09:21:52 crc kubenswrapper[4811]: E0122 09:21:52.432547 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80f3e64e-0e9f-49b8-9c7e-78e08634ba92" containerName="cinder-scheduler" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.432552 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="80f3e64e-0e9f-49b8-9c7e-78e08634ba92" containerName="cinder-scheduler" Jan 22 09:21:52 crc kubenswrapper[4811]: E0122 09:21:52.432567 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36b87b12-1080-46b8-a342-b9e743377f23" containerName="dnsmasq-dns" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.432572 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="36b87b12-1080-46b8-a342-b9e743377f23" containerName="dnsmasq-dns" Jan 22 09:21:52 crc kubenswrapper[4811]: E0122 09:21:52.432586 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36b87b12-1080-46b8-a342-b9e743377f23" containerName="init" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.432591 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="36b87b12-1080-46b8-a342-b9e743377f23" containerName="init" Jan 22 09:21:52 crc kubenswrapper[4811]: E0122 09:21:52.432601 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01a5613f-ca39-400f-83e5-8c2e04474ce3" containerName="barbican-worker-log" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.432606 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="01a5613f-ca39-400f-83e5-8c2e04474ce3" containerName="barbican-worker-log" Jan 22 09:21:52 crc kubenswrapper[4811]: E0122 09:21:52.432612 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bf59625-a642-4155-9e83-46cd7d874f50" 
containerName="barbican-keystone-listener-log" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.432617 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bf59625-a642-4155-9e83-46cd7d874f50" containerName="barbican-keystone-listener-log" Jan 22 09:21:52 crc kubenswrapper[4811]: E0122 09:21:52.432638 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01a5613f-ca39-400f-83e5-8c2e04474ce3" containerName="barbican-worker" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.432643 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="01a5613f-ca39-400f-83e5-8c2e04474ce3" containerName="barbican-worker" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.432795 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="01a5613f-ca39-400f-83e5-8c2e04474ce3" containerName="barbican-worker-log" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.432808 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="01a5613f-ca39-400f-83e5-8c2e04474ce3" containerName="barbican-worker" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.432817 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="80f3e64e-0e9f-49b8-9c7e-78e08634ba92" containerName="cinder-scheduler" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.432828 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="80f3e64e-0e9f-49b8-9c7e-78e08634ba92" containerName="probe" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.432836 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bf59625-a642-4155-9e83-46cd7d874f50" containerName="barbican-keystone-listener-log" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.432847 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="36b87b12-1080-46b8-a342-b9e743377f23" containerName="dnsmasq-dns" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.432856 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bf59625-a642-4155-9e83-46cd7d874f50" containerName="barbican-keystone-listener" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.433601 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.436654 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.443829 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.574129 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82d89bb4-3738-46d0-8268-d14e298c13c8-config-data\") pod \"cinder-scheduler-0\" (UID: \"82d89bb4-3738-46d0-8268-d14e298c13c8\") " pod="openstack/cinder-scheduler-0" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.574575 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/82d89bb4-3738-46d0-8268-d14e298c13c8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"82d89bb4-3738-46d0-8268-d14e298c13c8\") " pod="openstack/cinder-scheduler-0" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.574704 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjnws\" (UniqueName: \"kubernetes.io/projected/82d89bb4-3738-46d0-8268-d14e298c13c8-kube-api-access-fjnws\") pod \"cinder-scheduler-0\" (UID: \"82d89bb4-3738-46d0-8268-d14e298c13c8\") " pod="openstack/cinder-scheduler-0" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.574850 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82d89bb4-3738-46d0-8268-d14e298c13c8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"82d89bb4-3738-46d0-8268-d14e298c13c8\") " pod="openstack/cinder-scheduler-0" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.574899 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82d89bb4-3738-46d0-8268-d14e298c13c8-scripts\") pod \"cinder-scheduler-0\" (UID: \"82d89bb4-3738-46d0-8268-d14e298c13c8\") " pod="openstack/cinder-scheduler-0" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.574964 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82d89bb4-3738-46d0-8268-d14e298c13c8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"82d89bb4-3738-46d0-8268-d14e298c13c8\") " pod="openstack/cinder-scheduler-0" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.677860 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82d89bb4-3738-46d0-8268-d14e298c13c8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"82d89bb4-3738-46d0-8268-d14e298c13c8\") " pod="openstack/cinder-scheduler-0" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.677924 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82d89bb4-3738-46d0-8268-d14e298c13c8-scripts\") pod \"cinder-scheduler-0\" (UID: \"82d89bb4-3738-46d0-8268-d14e298c13c8\") " pod="openstack/cinder-scheduler-0" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.678004 4811 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82d89bb4-3738-46d0-8268-d14e298c13c8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"82d89bb4-3738-46d0-8268-d14e298c13c8\") " pod="openstack/cinder-scheduler-0" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.678155 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82d89bb4-3738-46d0-8268-d14e298c13c8-config-data\") pod \"cinder-scheduler-0\" (UID: \"82d89bb4-3738-46d0-8268-d14e298c13c8\") " pod="openstack/cinder-scheduler-0" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.678267 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/82d89bb4-3738-46d0-8268-d14e298c13c8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"82d89bb4-3738-46d0-8268-d14e298c13c8\") " pod="openstack/cinder-scheduler-0" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.678293 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjnws\" (UniqueName: \"kubernetes.io/projected/82d89bb4-3738-46d0-8268-d14e298c13c8-kube-api-access-fjnws\") pod \"cinder-scheduler-0\" (UID: \"82d89bb4-3738-46d0-8268-d14e298c13c8\") " pod="openstack/cinder-scheduler-0" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.679228 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/82d89bb4-3738-46d0-8268-d14e298c13c8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"82d89bb4-3738-46d0-8268-d14e298c13c8\") " pod="openstack/cinder-scheduler-0" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.686479 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82d89bb4-3738-46d0-8268-d14e298c13c8-scripts\") pod \"cinder-scheduler-0\" (UID: \"82d89bb4-3738-46d0-8268-d14e298c13c8\") " pod="openstack/cinder-scheduler-0" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.686882 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82d89bb4-3738-46d0-8268-d14e298c13c8-config-data\") pod \"cinder-scheduler-0\" (UID: \"82d89bb4-3738-46d0-8268-d14e298c13c8\") " pod="openstack/cinder-scheduler-0" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.687839 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82d89bb4-3738-46d0-8268-d14e298c13c8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"82d89bb4-3738-46d0-8268-d14e298c13c8\") " pod="openstack/cinder-scheduler-0" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.689309 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82d89bb4-3738-46d0-8268-d14e298c13c8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"82d89bb4-3738-46d0-8268-d14e298c13c8\") " pod="openstack/cinder-scheduler-0" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.703927 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjnws\" (UniqueName: \"kubernetes.io/projected/82d89bb4-3738-46d0-8268-d14e298c13c8-kube-api-access-fjnws\") pod \"cinder-scheduler-0\" (UID: \"82d89bb4-3738-46d0-8268-d14e298c13c8\") " 
pod="openstack/cinder-scheduler-0" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.752120 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.784618 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.889196 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/173400f9-c99e-4737-b27c-cff0bdb5ee94-log-httpd\") pod \"173400f9-c99e-4737-b27c-cff0bdb5ee94\" (UID: \"173400f9-c99e-4737-b27c-cff0bdb5ee94\") " Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.889531 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/173400f9-c99e-4737-b27c-cff0bdb5ee94-combined-ca-bundle\") pod \"173400f9-c99e-4737-b27c-cff0bdb5ee94\" (UID: \"173400f9-c99e-4737-b27c-cff0bdb5ee94\") " Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.891066 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/173400f9-c99e-4737-b27c-cff0bdb5ee94-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "173400f9-c99e-4737-b27c-cff0bdb5ee94" (UID: "173400f9-c99e-4737-b27c-cff0bdb5ee94"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.953834 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/173400f9-c99e-4737-b27c-cff0bdb5ee94-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "173400f9-c99e-4737-b27c-cff0bdb5ee94" (UID: "173400f9-c99e-4737-b27c-cff0bdb5ee94"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.991486 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/173400f9-c99e-4737-b27c-cff0bdb5ee94-sg-core-conf-yaml\") pod \"173400f9-c99e-4737-b27c-cff0bdb5ee94\" (UID: \"173400f9-c99e-4737-b27c-cff0bdb5ee94\") " Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.991678 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/173400f9-c99e-4737-b27c-cff0bdb5ee94-run-httpd\") pod \"173400f9-c99e-4737-b27c-cff0bdb5ee94\" (UID: \"173400f9-c99e-4737-b27c-cff0bdb5ee94\") " Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.991929 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8n8s\" (UniqueName: \"kubernetes.io/projected/173400f9-c99e-4737-b27c-cff0bdb5ee94-kube-api-access-s8n8s\") pod \"173400f9-c99e-4737-b27c-cff0bdb5ee94\" (UID: \"173400f9-c99e-4737-b27c-cff0bdb5ee94\") " Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.992008 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/173400f9-c99e-4737-b27c-cff0bdb5ee94-scripts\") pod \"173400f9-c99e-4737-b27c-cff0bdb5ee94\" (UID: \"173400f9-c99e-4737-b27c-cff0bdb5ee94\") " Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.992092 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/173400f9-c99e-4737-b27c-cff0bdb5ee94-config-data\") pod \"173400f9-c99e-4737-b27c-cff0bdb5ee94\" (UID: \"173400f9-c99e-4737-b27c-cff0bdb5ee94\") " Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.992743 4811 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/173400f9-c99e-4737-b27c-cff0bdb5ee94-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.992787 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/173400f9-c99e-4737-b27c-cff0bdb5ee94-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.993056 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/173400f9-c99e-4737-b27c-cff0bdb5ee94-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "173400f9-c99e-4737-b27c-cff0bdb5ee94" (UID: "173400f9-c99e-4737-b27c-cff0bdb5ee94"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.996850 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/173400f9-c99e-4737-b27c-cff0bdb5ee94-scripts" (OuterVolumeSpecName: "scripts") pod "173400f9-c99e-4737-b27c-cff0bdb5ee94" (UID: "173400f9-c99e-4737-b27c-cff0bdb5ee94"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:52 crc kubenswrapper[4811]: I0122 09:21:52.997203 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/173400f9-c99e-4737-b27c-cff0bdb5ee94-kube-api-access-s8n8s" (OuterVolumeSpecName: "kube-api-access-s8n8s") pod "173400f9-c99e-4737-b27c-cff0bdb5ee94" (UID: "173400f9-c99e-4737-b27c-cff0bdb5ee94"). 
InnerVolumeSpecName "kube-api-access-s8n8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.011865 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/173400f9-c99e-4737-b27c-cff0bdb5ee94-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "173400f9-c99e-4737-b27c-cff0bdb5ee94" (UID: "173400f9-c99e-4737-b27c-cff0bdb5ee94"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.053410 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/173400f9-c99e-4737-b27c-cff0bdb5ee94-config-data" (OuterVolumeSpecName: "config-data") pod "173400f9-c99e-4737-b27c-cff0bdb5ee94" (UID: "173400f9-c99e-4737-b27c-cff0bdb5ee94"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.094851 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8n8s\" (UniqueName: \"kubernetes.io/projected/173400f9-c99e-4737-b27c-cff0bdb5ee94-kube-api-access-s8n8s\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.095007 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/173400f9-c99e-4737-b27c-cff0bdb5ee94-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.095080 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/173400f9-c99e-4737-b27c-cff0bdb5ee94-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.095138 4811 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/173400f9-c99e-4737-b27c-cff0bdb5ee94-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.095188 4811 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/173400f9-c99e-4737-b27c-cff0bdb5ee94-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.118606 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-8b9759fbd-gggfs" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.202551 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.362341 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hz9h2" event={"ID":"889f85a4-9070-4d9c-82b9-4171d53b035c","Type":"ContainerStarted","Data":"5bf0d84c1603be5235f1064336c52ce77092a96a9da377df2a24850178937468"} Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.367987 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"82d89bb4-3738-46d0-8268-d14e298c13c8","Type":"ContainerStarted","Data":"43729bc0a046479da09ea9e429ae7aa5bcff67bd3193fdfe90020205ca8e21ab"} Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.369884 4811 generic.go:334] "Generic (PLEG): container finished" podID="173400f9-c99e-4737-b27c-cff0bdb5ee94" containerID="1ddbd4da0737b0e060323ce04e1e1091447ceeddb176ab83d32f3f37cfb10b20" exitCode=0 Jan 22 09:21:53 crc kubenswrapper[4811]: 
I0122 09:21:53.369941 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"173400f9-c99e-4737-b27c-cff0bdb5ee94","Type":"ContainerDied","Data":"1ddbd4da0737b0e060323ce04e1e1091447ceeddb176ab83d32f3f37cfb10b20"} Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.369962 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"173400f9-c99e-4737-b27c-cff0bdb5ee94","Type":"ContainerDied","Data":"098423c2a3605e4c619446d724d48d3aff151ca985eacec29cf4afa50b997982"} Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.370001 4811 scope.go:117] "RemoveContainer" containerID="685e13d936d2f908fb5ba170a6ac272da77484dbb28475b2621c845b2f036ed2" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.370112 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.373739 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7757dc6854-tnspq" podUID="49ce92f5-a20a-4429-89a1-764b3db3e28a" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.142:9311/healthcheck\": read tcp 10.217.0.2:44958->10.217.0.142:9311: read: connection reset by peer" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.373799 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7757dc6854-tnspq" podUID="49ce92f5-a20a-4429-89a1-764b3db3e28a" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.142:9311/healthcheck\": read tcp 10.217.0.2:44942->10.217.0.142:9311: read: connection reset by peer" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.390275 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hz9h2" podStartSLOduration=9.789330704 podStartE2EDuration="12.39026096s" podCreationTimestamp="2026-01-22 09:21:41 +0000 UTC" firstStartedPulling="2026-01-22 09:21:50.31773875 +0000 UTC m=+954.639925873" lastFinishedPulling="2026-01-22 09:21:52.918669006 +0000 UTC m=+957.240856129" observedRunningTime="2026-01-22 09:21:53.383316497 +0000 UTC m=+957.705503620" watchObservedRunningTime="2026-01-22 09:21:53.39026096 +0000 UTC m=+957.712448083" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.445555 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.456757 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.468751 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:21:53 crc kubenswrapper[4811]: E0122 09:21:53.469183 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="173400f9-c99e-4737-b27c-cff0bdb5ee94" containerName="proxy-httpd" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.469279 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="173400f9-c99e-4737-b27c-cff0bdb5ee94" containerName="proxy-httpd" Jan 22 09:21:53 crc kubenswrapper[4811]: E0122 09:21:53.469349 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="173400f9-c99e-4737-b27c-cff0bdb5ee94" containerName="ceilometer-notification-agent" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.473024 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="173400f9-c99e-4737-b27c-cff0bdb5ee94" 
containerName="ceilometer-notification-agent" Jan 22 09:21:53 crc kubenswrapper[4811]: E0122 09:21:53.473159 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="173400f9-c99e-4737-b27c-cff0bdb5ee94" containerName="sg-core" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.473233 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="173400f9-c99e-4737-b27c-cff0bdb5ee94" containerName="sg-core" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.473533 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="173400f9-c99e-4737-b27c-cff0bdb5ee94" containerName="sg-core" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.473609 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="173400f9-c99e-4737-b27c-cff0bdb5ee94" containerName="proxy-httpd" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.473685 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="173400f9-c99e-4737-b27c-cff0bdb5ee94" containerName="ceilometer-notification-agent" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.475006 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.482505 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.487217 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.492131 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.505047 4811 scope.go:117] "RemoveContainer" containerID="d5579584315ce5bf7b0ba9cd5484b6e73082240e3a580577b72df6d7a77f1636" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.507611 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/467a6b01-4fa9-484a-b65e-a91d89e91eb3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"467a6b01-4fa9-484a-b65e-a91d89e91eb3\") " pod="openstack/ceilometer-0" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.507678 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/467a6b01-4fa9-484a-b65e-a91d89e91eb3-log-httpd\") pod \"ceilometer-0\" (UID: \"467a6b01-4fa9-484a-b65e-a91d89e91eb3\") " pod="openstack/ceilometer-0" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.507733 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/467a6b01-4fa9-484a-b65e-a91d89e91eb3-run-httpd\") pod \"ceilometer-0\" (UID: \"467a6b01-4fa9-484a-b65e-a91d89e91eb3\") " pod="openstack/ceilometer-0" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.507749 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qztxp\" (UniqueName: \"kubernetes.io/projected/467a6b01-4fa9-484a-b65e-a91d89e91eb3-kube-api-access-qztxp\") pod \"ceilometer-0\" (UID: \"467a6b01-4fa9-484a-b65e-a91d89e91eb3\") " pod="openstack/ceilometer-0" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.508033 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/467a6b01-4fa9-484a-b65e-a91d89e91eb3-scripts\") pod \"ceilometer-0\" (UID: \"467a6b01-4fa9-484a-b65e-a91d89e91eb3\") " pod="openstack/ceilometer-0" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.508127 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/467a6b01-4fa9-484a-b65e-a91d89e91eb3-config-data\") pod \"ceilometer-0\" (UID: \"467a6b01-4fa9-484a-b65e-a91d89e91eb3\") " pod="openstack/ceilometer-0" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.508151 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/467a6b01-4fa9-484a-b65e-a91d89e91eb3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"467a6b01-4fa9-484a-b65e-a91d89e91eb3\") " pod="openstack/ceilometer-0" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.554783 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-b5fd9ff5-td6xf"] Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.555616 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-b5fd9ff5-td6xf" podUID="d8ace218-8390-49f1-950a-7162f7bce032" containerName="neutron-api" containerID="cri-o://fa6201b52d14e99f6cea3de3c8c89efb500fedf9bd7c3d79ef1de96fd0c4ddcf" gracePeriod=30 Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.558705 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-b5fd9ff5-td6xf" podUID="d8ace218-8390-49f1-950a-7162f7bce032" containerName="neutron-httpd" containerID="cri-o://9391132e8d380c4c3f1fd0d0b7dfdabe021b6149cf92e443862c0b800bb9c893" gracePeriod=30 Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.590141 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-864bc8bfcf-nvbzn"] Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.591469 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-864bc8bfcf-nvbzn" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.608088 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-b5fd9ff5-td6xf" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.615164 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cd66dd0-aadf-46e8-b0d0-48d0563efa06-public-tls-certs\") pod \"neutron-864bc8bfcf-nvbzn\" (UID: \"3cd66dd0-aadf-46e8-b0d0-48d0563efa06\") " pod="openstack/neutron-864bc8bfcf-nvbzn" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.615228 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/467a6b01-4fa9-484a-b65e-a91d89e91eb3-config-data\") pod \"ceilometer-0\" (UID: \"467a6b01-4fa9-484a-b65e-a91d89e91eb3\") " pod="openstack/ceilometer-0" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.615259 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/467a6b01-4fa9-484a-b65e-a91d89e91eb3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"467a6b01-4fa9-484a-b65e-a91d89e91eb3\") " pod="openstack/ceilometer-0" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.615294 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3cd66dd0-aadf-46e8-b0d0-48d0563efa06-config\") pod \"neutron-864bc8bfcf-nvbzn\" (UID: \"3cd66dd0-aadf-46e8-b0d0-48d0563efa06\") " pod="openstack/neutron-864bc8bfcf-nvbzn" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.615341 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cd66dd0-aadf-46e8-b0d0-48d0563efa06-ovndb-tls-certs\") pod \"neutron-864bc8bfcf-nvbzn\" (UID: \"3cd66dd0-aadf-46e8-b0d0-48d0563efa06\") " pod="openstack/neutron-864bc8bfcf-nvbzn" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.615439 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/467a6b01-4fa9-484a-b65e-a91d89e91eb3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"467a6b01-4fa9-484a-b65e-a91d89e91eb3\") " pod="openstack/ceilometer-0" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.615476 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/467a6b01-4fa9-484a-b65e-a91d89e91eb3-log-httpd\") pod \"ceilometer-0\" (UID: \"467a6b01-4fa9-484a-b65e-a91d89e91eb3\") " pod="openstack/ceilometer-0" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.615536 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/467a6b01-4fa9-484a-b65e-a91d89e91eb3-run-httpd\") pod \"ceilometer-0\" (UID: \"467a6b01-4fa9-484a-b65e-a91d89e91eb3\") " pod="openstack/ceilometer-0" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.615557 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qztxp\" (UniqueName: \"kubernetes.io/projected/467a6b01-4fa9-484a-b65e-a91d89e91eb3-kube-api-access-qztxp\") pod \"ceilometer-0\" (UID: \"467a6b01-4fa9-484a-b65e-a91d89e91eb3\") " 
pod="openstack/ceilometer-0" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.615657 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cd66dd0-aadf-46e8-b0d0-48d0563efa06-combined-ca-bundle\") pod \"neutron-864bc8bfcf-nvbzn\" (UID: \"3cd66dd0-aadf-46e8-b0d0-48d0563efa06\") " pod="openstack/neutron-864bc8bfcf-nvbzn" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.615684 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3cd66dd0-aadf-46e8-b0d0-48d0563efa06-httpd-config\") pod \"neutron-864bc8bfcf-nvbzn\" (UID: \"3cd66dd0-aadf-46e8-b0d0-48d0563efa06\") " pod="openstack/neutron-864bc8bfcf-nvbzn" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.615703 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cd66dd0-aadf-46e8-b0d0-48d0563efa06-internal-tls-certs\") pod \"neutron-864bc8bfcf-nvbzn\" (UID: \"3cd66dd0-aadf-46e8-b0d0-48d0563efa06\") " pod="openstack/neutron-864bc8bfcf-nvbzn" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.615799 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkbl6\" (UniqueName: \"kubernetes.io/projected/3cd66dd0-aadf-46e8-b0d0-48d0563efa06-kube-api-access-rkbl6\") pod \"neutron-864bc8bfcf-nvbzn\" (UID: \"3cd66dd0-aadf-46e8-b0d0-48d0563efa06\") " pod="openstack/neutron-864bc8bfcf-nvbzn" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.615922 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/467a6b01-4fa9-484a-b65e-a91d89e91eb3-scripts\") pod \"ceilometer-0\" (UID: \"467a6b01-4fa9-484a-b65e-a91d89e91eb3\") " pod="openstack/ceilometer-0" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.620028 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-864bc8bfcf-nvbzn"] Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.628201 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/467a6b01-4fa9-484a-b65e-a91d89e91eb3-run-httpd\") pod \"ceilometer-0\" (UID: \"467a6b01-4fa9-484a-b65e-a91d89e91eb3\") " pod="openstack/ceilometer-0" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.635894 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/467a6b01-4fa9-484a-b65e-a91d89e91eb3-scripts\") pod \"ceilometer-0\" (UID: \"467a6b01-4fa9-484a-b65e-a91d89e91eb3\") " pod="openstack/ceilometer-0" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.639174 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/467a6b01-4fa9-484a-b65e-a91d89e91eb3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"467a6b01-4fa9-484a-b65e-a91d89e91eb3\") " pod="openstack/ceilometer-0" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.643604 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/467a6b01-4fa9-484a-b65e-a91d89e91eb3-log-httpd\") pod \"ceilometer-0\" (UID: \"467a6b01-4fa9-484a-b65e-a91d89e91eb3\") " pod="openstack/ceilometer-0" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 
09:21:53.655426 4811 scope.go:117] "RemoveContainer" containerID="1ddbd4da0737b0e060323ce04e1e1091447ceeddb176ab83d32f3f37cfb10b20" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.656761 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qztxp\" (UniqueName: \"kubernetes.io/projected/467a6b01-4fa9-484a-b65e-a91d89e91eb3-kube-api-access-qztxp\") pod \"ceilometer-0\" (UID: \"467a6b01-4fa9-484a-b65e-a91d89e91eb3\") " pod="openstack/ceilometer-0" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.665992 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/467a6b01-4fa9-484a-b65e-a91d89e91eb3-config-data\") pod \"ceilometer-0\" (UID: \"467a6b01-4fa9-484a-b65e-a91d89e91eb3\") " pod="openstack/ceilometer-0" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.695496 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/467a6b01-4fa9-484a-b65e-a91d89e91eb3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"467a6b01-4fa9-484a-b65e-a91d89e91eb3\") " pod="openstack/ceilometer-0" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.710564 4811 scope.go:117] "RemoveContainer" containerID="685e13d936d2f908fb5ba170a6ac272da77484dbb28475b2621c845b2f036ed2" Jan 22 09:21:53 crc kubenswrapper[4811]: E0122 09:21:53.712823 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"685e13d936d2f908fb5ba170a6ac272da77484dbb28475b2621c845b2f036ed2\": container with ID starting with 685e13d936d2f908fb5ba170a6ac272da77484dbb28475b2621c845b2f036ed2 not found: ID does not exist" containerID="685e13d936d2f908fb5ba170a6ac272da77484dbb28475b2621c845b2f036ed2" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.712852 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"685e13d936d2f908fb5ba170a6ac272da77484dbb28475b2621c845b2f036ed2"} err="failed to get container status \"685e13d936d2f908fb5ba170a6ac272da77484dbb28475b2621c845b2f036ed2\": rpc error: code = NotFound desc = could not find container \"685e13d936d2f908fb5ba170a6ac272da77484dbb28475b2621c845b2f036ed2\": container with ID starting with 685e13d936d2f908fb5ba170a6ac272da77484dbb28475b2621c845b2f036ed2 not found: ID does not exist" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.712872 4811 scope.go:117] "RemoveContainer" containerID="d5579584315ce5bf7b0ba9cd5484b6e73082240e3a580577b72df6d7a77f1636" Jan 22 09:21:53 crc kubenswrapper[4811]: E0122 09:21:53.716121 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5579584315ce5bf7b0ba9cd5484b6e73082240e3a580577b72df6d7a77f1636\": container with ID starting with d5579584315ce5bf7b0ba9cd5484b6e73082240e3a580577b72df6d7a77f1636 not found: ID does not exist" containerID="d5579584315ce5bf7b0ba9cd5484b6e73082240e3a580577b72df6d7a77f1636" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.716148 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5579584315ce5bf7b0ba9cd5484b6e73082240e3a580577b72df6d7a77f1636"} err="failed to get container status \"d5579584315ce5bf7b0ba9cd5484b6e73082240e3a580577b72df6d7a77f1636\": rpc error: code = NotFound desc = could not find container \"d5579584315ce5bf7b0ba9cd5484b6e73082240e3a580577b72df6d7a77f1636\": container with ID 
starting with d5579584315ce5bf7b0ba9cd5484b6e73082240e3a580577b72df6d7a77f1636 not found: ID does not exist" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.716176 4811 scope.go:117] "RemoveContainer" containerID="1ddbd4da0737b0e060323ce04e1e1091447ceeddb176ab83d32f3f37cfb10b20" Jan 22 09:21:53 crc kubenswrapper[4811]: E0122 09:21:53.723834 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ddbd4da0737b0e060323ce04e1e1091447ceeddb176ab83d32f3f37cfb10b20\": container with ID starting with 1ddbd4da0737b0e060323ce04e1e1091447ceeddb176ab83d32f3f37cfb10b20 not found: ID does not exist" containerID="1ddbd4da0737b0e060323ce04e1e1091447ceeddb176ab83d32f3f37cfb10b20" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.723883 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ddbd4da0737b0e060323ce04e1e1091447ceeddb176ab83d32f3f37cfb10b20"} err="failed to get container status \"1ddbd4da0737b0e060323ce04e1e1091447ceeddb176ab83d32f3f37cfb10b20\": rpc error: code = NotFound desc = could not find container \"1ddbd4da0737b0e060323ce04e1e1091447ceeddb176ab83d32f3f37cfb10b20\": container with ID starting with 1ddbd4da0737b0e060323ce04e1e1091447ceeddb176ab83d32f3f37cfb10b20 not found: ID does not exist" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.723976 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cd66dd0-aadf-46e8-b0d0-48d0563efa06-combined-ca-bundle\") pod \"neutron-864bc8bfcf-nvbzn\" (UID: \"3cd66dd0-aadf-46e8-b0d0-48d0563efa06\") " pod="openstack/neutron-864bc8bfcf-nvbzn" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.724009 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3cd66dd0-aadf-46e8-b0d0-48d0563efa06-httpd-config\") pod \"neutron-864bc8bfcf-nvbzn\" (UID: \"3cd66dd0-aadf-46e8-b0d0-48d0563efa06\") " pod="openstack/neutron-864bc8bfcf-nvbzn" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.724026 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cd66dd0-aadf-46e8-b0d0-48d0563efa06-internal-tls-certs\") pod \"neutron-864bc8bfcf-nvbzn\" (UID: \"3cd66dd0-aadf-46e8-b0d0-48d0563efa06\") " pod="openstack/neutron-864bc8bfcf-nvbzn" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.724086 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkbl6\" (UniqueName: \"kubernetes.io/projected/3cd66dd0-aadf-46e8-b0d0-48d0563efa06-kube-api-access-rkbl6\") pod \"neutron-864bc8bfcf-nvbzn\" (UID: \"3cd66dd0-aadf-46e8-b0d0-48d0563efa06\") " pod="openstack/neutron-864bc8bfcf-nvbzn" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.724212 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cd66dd0-aadf-46e8-b0d0-48d0563efa06-public-tls-certs\") pod \"neutron-864bc8bfcf-nvbzn\" (UID: \"3cd66dd0-aadf-46e8-b0d0-48d0563efa06\") " pod="openstack/neutron-864bc8bfcf-nvbzn" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.724244 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3cd66dd0-aadf-46e8-b0d0-48d0563efa06-config\") pod \"neutron-864bc8bfcf-nvbzn\" (UID: 
\"3cd66dd0-aadf-46e8-b0d0-48d0563efa06\") " pod="openstack/neutron-864bc8bfcf-nvbzn" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.724275 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cd66dd0-aadf-46e8-b0d0-48d0563efa06-ovndb-tls-certs\") pod \"neutron-864bc8bfcf-nvbzn\" (UID: \"3cd66dd0-aadf-46e8-b0d0-48d0563efa06\") " pod="openstack/neutron-864bc8bfcf-nvbzn" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.731058 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cd66dd0-aadf-46e8-b0d0-48d0563efa06-public-tls-certs\") pod \"neutron-864bc8bfcf-nvbzn\" (UID: \"3cd66dd0-aadf-46e8-b0d0-48d0563efa06\") " pod="openstack/neutron-864bc8bfcf-nvbzn" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.733066 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3cd66dd0-aadf-46e8-b0d0-48d0563efa06-httpd-config\") pod \"neutron-864bc8bfcf-nvbzn\" (UID: \"3cd66dd0-aadf-46e8-b0d0-48d0563efa06\") " pod="openstack/neutron-864bc8bfcf-nvbzn" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.733694 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3cd66dd0-aadf-46e8-b0d0-48d0563efa06-config\") pod \"neutron-864bc8bfcf-nvbzn\" (UID: \"3cd66dd0-aadf-46e8-b0d0-48d0563efa06\") " pod="openstack/neutron-864bc8bfcf-nvbzn" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.735780 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cd66dd0-aadf-46e8-b0d0-48d0563efa06-combined-ca-bundle\") pod \"neutron-864bc8bfcf-nvbzn\" (UID: \"3cd66dd0-aadf-46e8-b0d0-48d0563efa06\") " pod="openstack/neutron-864bc8bfcf-nvbzn" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.736845 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cd66dd0-aadf-46e8-b0d0-48d0563efa06-ovndb-tls-certs\") pod \"neutron-864bc8bfcf-nvbzn\" (UID: \"3cd66dd0-aadf-46e8-b0d0-48d0563efa06\") " pod="openstack/neutron-864bc8bfcf-nvbzn" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.737737 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cd66dd0-aadf-46e8-b0d0-48d0563efa06-internal-tls-certs\") pod \"neutron-864bc8bfcf-nvbzn\" (UID: \"3cd66dd0-aadf-46e8-b0d0-48d0563efa06\") " pod="openstack/neutron-864bc8bfcf-nvbzn" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.748946 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkbl6\" (UniqueName: \"kubernetes.io/projected/3cd66dd0-aadf-46e8-b0d0-48d0563efa06-kube-api-access-rkbl6\") pod \"neutron-864bc8bfcf-nvbzn\" (UID: \"3cd66dd0-aadf-46e8-b0d0-48d0563efa06\") " pod="openstack/neutron-864bc8bfcf-nvbzn" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.836953 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.950558 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-864bc8bfcf-nvbzn" Jan 22 09:21:53 crc kubenswrapper[4811]: I0122 09:21:53.988925 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7757dc6854-tnspq" Jan 22 09:21:54 crc kubenswrapper[4811]: I0122 09:21:54.016743 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="173400f9-c99e-4737-b27c-cff0bdb5ee94" path="/var/lib/kubelet/pods/173400f9-c99e-4737-b27c-cff0bdb5ee94/volumes" Jan 22 09:21:54 crc kubenswrapper[4811]: I0122 09:21:54.017435 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80f3e64e-0e9f-49b8-9c7e-78e08634ba92" path="/var/lib/kubelet/pods/80f3e64e-0e9f-49b8-9c7e-78e08634ba92/volumes" Jan 22 09:21:54 crc kubenswrapper[4811]: I0122 09:21:54.030176 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49ce92f5-a20a-4429-89a1-764b3db3e28a-config-data\") pod \"49ce92f5-a20a-4429-89a1-764b3db3e28a\" (UID: \"49ce92f5-a20a-4429-89a1-764b3db3e28a\") " Jan 22 09:21:54 crc kubenswrapper[4811]: I0122 09:21:54.030337 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fj2fl\" (UniqueName: \"kubernetes.io/projected/49ce92f5-a20a-4429-89a1-764b3db3e28a-kube-api-access-fj2fl\") pod \"49ce92f5-a20a-4429-89a1-764b3db3e28a\" (UID: \"49ce92f5-a20a-4429-89a1-764b3db3e28a\") " Jan 22 09:21:54 crc kubenswrapper[4811]: I0122 09:21:54.030434 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49ce92f5-a20a-4429-89a1-764b3db3e28a-config-data-custom\") pod \"49ce92f5-a20a-4429-89a1-764b3db3e28a\" (UID: \"49ce92f5-a20a-4429-89a1-764b3db3e28a\") " Jan 22 09:21:54 crc kubenswrapper[4811]: I0122 09:21:54.031549 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49ce92f5-a20a-4429-89a1-764b3db3e28a-combined-ca-bundle\") pod \"49ce92f5-a20a-4429-89a1-764b3db3e28a\" (UID: \"49ce92f5-a20a-4429-89a1-764b3db3e28a\") " Jan 22 09:21:54 crc kubenswrapper[4811]: I0122 09:21:54.033520 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49ce92f5-a20a-4429-89a1-764b3db3e28a-logs\") pod \"49ce92f5-a20a-4429-89a1-764b3db3e28a\" (UID: \"49ce92f5-a20a-4429-89a1-764b3db3e28a\") " Jan 22 09:21:54 crc kubenswrapper[4811]: I0122 09:21:54.035244 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49ce92f5-a20a-4429-89a1-764b3db3e28a-logs" (OuterVolumeSpecName: "logs") pod "49ce92f5-a20a-4429-89a1-764b3db3e28a" (UID: "49ce92f5-a20a-4429-89a1-764b3db3e28a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:21:54 crc kubenswrapper[4811]: I0122 09:21:54.041026 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49ce92f5-a20a-4429-89a1-764b3db3e28a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "49ce92f5-a20a-4429-89a1-764b3db3e28a" (UID: "49ce92f5-a20a-4429-89a1-764b3db3e28a"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:54 crc kubenswrapper[4811]: I0122 09:21:54.057851 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ce92f5-a20a-4429-89a1-764b3db3e28a-kube-api-access-fj2fl" (OuterVolumeSpecName: "kube-api-access-fj2fl") pod "49ce92f5-a20a-4429-89a1-764b3db3e28a" (UID: "49ce92f5-a20a-4429-89a1-764b3db3e28a"). InnerVolumeSpecName "kube-api-access-fj2fl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:21:54 crc kubenswrapper[4811]: I0122 09:21:54.127152 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49ce92f5-a20a-4429-89a1-764b3db3e28a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "49ce92f5-a20a-4429-89a1-764b3db3e28a" (UID: "49ce92f5-a20a-4429-89a1-764b3db3e28a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:54 crc kubenswrapper[4811]: I0122 09:21:54.148466 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fj2fl\" (UniqueName: \"kubernetes.io/projected/49ce92f5-a20a-4429-89a1-764b3db3e28a-kube-api-access-fj2fl\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:54 crc kubenswrapper[4811]: I0122 09:21:54.148510 4811 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49ce92f5-a20a-4429-89a1-764b3db3e28a-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:54 crc kubenswrapper[4811]: I0122 09:21:54.148526 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49ce92f5-a20a-4429-89a1-764b3db3e28a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:54 crc kubenswrapper[4811]: I0122 09:21:54.148537 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49ce92f5-a20a-4429-89a1-764b3db3e28a-logs\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:54 crc kubenswrapper[4811]: I0122 09:21:54.246442 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49ce92f5-a20a-4429-89a1-764b3db3e28a-config-data" (OuterVolumeSpecName: "config-data") pod "49ce92f5-a20a-4429-89a1-764b3db3e28a" (UID: "49ce92f5-a20a-4429-89a1-764b3db3e28a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:54 crc kubenswrapper[4811]: I0122 09:21:54.252383 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49ce92f5-a20a-4429-89a1-764b3db3e28a-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:54 crc kubenswrapper[4811]: I0122 09:21:54.412800 4811 generic.go:334] "Generic (PLEG): container finished" podID="49ce92f5-a20a-4429-89a1-764b3db3e28a" containerID="db6ab904511cf6e9244e2d0d01cf6c92b3afe6a4ed5ca4f1d4b6334f256a171b" exitCode=0 Jan 22 09:21:54 crc kubenswrapper[4811]: I0122 09:21:54.413263 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7757dc6854-tnspq" event={"ID":"49ce92f5-a20a-4429-89a1-764b3db3e28a","Type":"ContainerDied","Data":"db6ab904511cf6e9244e2d0d01cf6c92b3afe6a4ed5ca4f1d4b6334f256a171b"} Jan 22 09:21:54 crc kubenswrapper[4811]: I0122 09:21:54.413315 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7757dc6854-tnspq" event={"ID":"49ce92f5-a20a-4429-89a1-764b3db3e28a","Type":"ContainerDied","Data":"0a4c47cc71b948ac84a7c48976de701c8c045320f78dd59b5f086e262efe154d"} Jan 22 09:21:54 crc kubenswrapper[4811]: I0122 09:21:54.413343 4811 scope.go:117] "RemoveContainer" containerID="db6ab904511cf6e9244e2d0d01cf6c92b3afe6a4ed5ca4f1d4b6334f256a171b" Jan 22 09:21:54 crc kubenswrapper[4811]: I0122 09:21:54.413499 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7757dc6854-tnspq" Jan 22 09:21:54 crc kubenswrapper[4811]: I0122 09:21:54.433342 4811 generic.go:334] "Generic (PLEG): container finished" podID="d8ace218-8390-49f1-950a-7162f7bce032" containerID="9391132e8d380c4c3f1fd0d0b7dfdabe021b6149cf92e443862c0b800bb9c893" exitCode=0 Jan 22 09:21:54 crc kubenswrapper[4811]: I0122 09:21:54.436720 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b5fd9ff5-td6xf" event={"ID":"d8ace218-8390-49f1-950a-7162f7bce032","Type":"ContainerDied","Data":"9391132e8d380c4c3f1fd0d0b7dfdabe021b6149cf92e443862c0b800bb9c893"} Jan 22 09:21:54 crc kubenswrapper[4811]: I0122 09:21:54.468596 4811 scope.go:117] "RemoveContainer" containerID="7b5b159f764e828b353e7ad7a1a67bcbaaebf098aaa6bc41d5033425dd503506" Jan 22 09:21:54 crc kubenswrapper[4811]: I0122 09:21:54.480540 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7757dc6854-tnspq"] Jan 22 09:21:54 crc kubenswrapper[4811]: I0122 09:21:54.510137 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7757dc6854-tnspq"] Jan 22 09:21:54 crc kubenswrapper[4811]: I0122 09:21:54.525143 4811 scope.go:117] "RemoveContainer" containerID="db6ab904511cf6e9244e2d0d01cf6c92b3afe6a4ed5ca4f1d4b6334f256a171b" Jan 22 09:21:54 crc kubenswrapper[4811]: E0122 09:21:54.525766 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db6ab904511cf6e9244e2d0d01cf6c92b3afe6a4ed5ca4f1d4b6334f256a171b\": container with ID starting with db6ab904511cf6e9244e2d0d01cf6c92b3afe6a4ed5ca4f1d4b6334f256a171b not found: ID does not exist" containerID="db6ab904511cf6e9244e2d0d01cf6c92b3afe6a4ed5ca4f1d4b6334f256a171b" Jan 22 09:21:54 crc kubenswrapper[4811]: I0122 09:21:54.525821 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db6ab904511cf6e9244e2d0d01cf6c92b3afe6a4ed5ca4f1d4b6334f256a171b"} err="failed to get container status 
\"db6ab904511cf6e9244e2d0d01cf6c92b3afe6a4ed5ca4f1d4b6334f256a171b\": rpc error: code = NotFound desc = could not find container \"db6ab904511cf6e9244e2d0d01cf6c92b3afe6a4ed5ca4f1d4b6334f256a171b\": container with ID starting with db6ab904511cf6e9244e2d0d01cf6c92b3afe6a4ed5ca4f1d4b6334f256a171b not found: ID does not exist" Jan 22 09:21:54 crc kubenswrapper[4811]: I0122 09:21:54.525852 4811 scope.go:117] "RemoveContainer" containerID="7b5b159f764e828b353e7ad7a1a67bcbaaebf098aaa6bc41d5033425dd503506" Jan 22 09:21:54 crc kubenswrapper[4811]: E0122 09:21:54.526308 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b5b159f764e828b353e7ad7a1a67bcbaaebf098aaa6bc41d5033425dd503506\": container with ID starting with 7b5b159f764e828b353e7ad7a1a67bcbaaebf098aaa6bc41d5033425dd503506 not found: ID does not exist" containerID="7b5b159f764e828b353e7ad7a1a67bcbaaebf098aaa6bc41d5033425dd503506" Jan 22 09:21:54 crc kubenswrapper[4811]: I0122 09:21:54.526370 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b5b159f764e828b353e7ad7a1a67bcbaaebf098aaa6bc41d5033425dd503506"} err="failed to get container status \"7b5b159f764e828b353e7ad7a1a67bcbaaebf098aaa6bc41d5033425dd503506\": rpc error: code = NotFound desc = could not find container \"7b5b159f764e828b353e7ad7a1a67bcbaaebf098aaa6bc41d5033425dd503506\": container with ID starting with 7b5b159f764e828b353e7ad7a1a67bcbaaebf098aaa6bc41d5033425dd503506 not found: ID does not exist" Jan 22 09:21:54 crc kubenswrapper[4811]: I0122 09:21:54.646237 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:21:54 crc kubenswrapper[4811]: I0122 09:21:54.906114 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-864bc8bfcf-nvbzn"] Jan 22 09:21:54 crc kubenswrapper[4811]: W0122 09:21:54.914045 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3cd66dd0_aadf_46e8_b0d0_48d0563efa06.slice/crio-7846e85ab1dec59680232b3c11ee69ab82669635c72300d7a0d743253490dc63 WatchSource:0}: Error finding container 7846e85ab1dec59680232b3c11ee69ab82669635c72300d7a0d743253490dc63: Status 404 returned error can't find the container with id 7846e85ab1dec59680232b3c11ee69ab82669635c72300d7a0d743253490dc63 Jan 22 09:21:55 crc kubenswrapper[4811]: I0122 09:21:55.031097 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-b5fd9ff5-td6xf" podUID="d8ace218-8390-49f1-950a-7162f7bce032" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.138:9696/\": dial tcp 10.217.0.138:9696: connect: connection refused" Jan 22 09:21:55 crc kubenswrapper[4811]: I0122 09:21:55.464412 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"467a6b01-4fa9-484a-b65e-a91d89e91eb3","Type":"ContainerStarted","Data":"3a24fd76a13bd447bf4170555fd4850c0729f4161be448b52a68a2f87fdfd94d"} Jan 22 09:21:55 crc kubenswrapper[4811]: I0122 09:21:55.465087 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"467a6b01-4fa9-484a-b65e-a91d89e91eb3","Type":"ContainerStarted","Data":"a8ec8050cc73a7db1766e9d9f5ed078eac7283b6709276181350d8465c6224c7"} Jan 22 09:21:55 crc kubenswrapper[4811]: I0122 09:21:55.478787 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-864bc8bfcf-nvbzn" 
event={"ID":"3cd66dd0-aadf-46e8-b0d0-48d0563efa06","Type":"ContainerStarted","Data":"49c68972cec5099afdd5f1c963be809d1ab341f4b76cf338783ebd4235a98256"} Jan 22 09:21:55 crc kubenswrapper[4811]: I0122 09:21:55.478833 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-864bc8bfcf-nvbzn" event={"ID":"3cd66dd0-aadf-46e8-b0d0-48d0563efa06","Type":"ContainerStarted","Data":"7de39f28843ed3cd1b228a4881e613ae809164b8b9779a0cf86cc30d6c79a741"} Jan 22 09:21:55 crc kubenswrapper[4811]: I0122 09:21:55.478844 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-864bc8bfcf-nvbzn" event={"ID":"3cd66dd0-aadf-46e8-b0d0-48d0563efa06","Type":"ContainerStarted","Data":"7846e85ab1dec59680232b3c11ee69ab82669635c72300d7a0d743253490dc63"} Jan 22 09:21:55 crc kubenswrapper[4811]: I0122 09:21:55.479057 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-864bc8bfcf-nvbzn" Jan 22 09:21:55 crc kubenswrapper[4811]: I0122 09:21:55.485024 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"82d89bb4-3738-46d0-8268-d14e298c13c8","Type":"ContainerStarted","Data":"7c9213d5831e96aab84fdca5e77e9746649b167822e3718605048193a63358df"} Jan 22 09:21:55 crc kubenswrapper[4811]: I0122 09:21:55.485082 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"82d89bb4-3738-46d0-8268-d14e298c13c8","Type":"ContainerStarted","Data":"0fcbefa81fe2d994507c02ca35b639b5c7f0c07f85508b67d25f60c5962ae63e"} Jan 22 09:21:55 crc kubenswrapper[4811]: I0122 09:21:55.531283 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-864bc8bfcf-nvbzn" podStartSLOduration=2.531258083 podStartE2EDuration="2.531258083s" podCreationTimestamp="2026-01-22 09:21:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:21:55.49672965 +0000 UTC m=+959.818916773" watchObservedRunningTime="2026-01-22 09:21:55.531258083 +0000 UTC m=+959.853445196" Jan 22 09:21:55 crc kubenswrapper[4811]: I0122 09:21:55.537925 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.537910155 podStartE2EDuration="3.537910155s" podCreationTimestamp="2026-01-22 09:21:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:21:55.527289701 +0000 UTC m=+959.849476824" watchObservedRunningTime="2026-01-22 09:21:55.537910155 +0000 UTC m=+959.860097279" Jan 22 09:21:56 crc kubenswrapper[4811]: I0122 09:21:56.009965 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ce92f5-a20a-4429-89a1-764b3db3e28a" path="/var/lib/kubelet/pods/49ce92f5-a20a-4429-89a1-764b3db3e28a/volumes" Jan 22 09:21:56 crc kubenswrapper[4811]: I0122 09:21:56.521592 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"467a6b01-4fa9-484a-b65e-a91d89e91eb3","Type":"ContainerStarted","Data":"7b25cb1bfa92c85d1064c702133e9b152515bc276c69919013284de19ca9202c"} Jan 22 09:21:57 crc kubenswrapper[4811]: I0122 09:21:57.532782 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"467a6b01-4fa9-484a-b65e-a91d89e91eb3","Type":"ContainerStarted","Data":"18222df6c78a680ea3883cf28fcbab7b02b7dfc128f8b9c6b10b601843afc038"} Jan 22 09:21:57 crc 
kubenswrapper[4811]: I0122 09:21:57.752505 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 22 09:21:58 crc kubenswrapper[4811]: I0122 09:21:58.469925 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 22 09:21:58 crc kubenswrapper[4811]: I0122 09:21:58.551618 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"467a6b01-4fa9-484a-b65e-a91d89e91eb3","Type":"ContainerStarted","Data":"f501cc7a9057dead55e46a3cd02152bd9681e329de95326d5c1dc370925e45cc"} Jan 22 09:21:58 crc kubenswrapper[4811]: I0122 09:21:58.551809 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 22 09:21:58 crc kubenswrapper[4811]: I0122 09:21:58.562328 4811 generic.go:334] "Generic (PLEG): container finished" podID="d8ace218-8390-49f1-950a-7162f7bce032" containerID="fa6201b52d14e99f6cea3de3c8c89efb500fedf9bd7c3d79ef1de96fd0c4ddcf" exitCode=0 Jan 22 09:21:58 crc kubenswrapper[4811]: I0122 09:21:58.562379 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b5fd9ff5-td6xf" event={"ID":"d8ace218-8390-49f1-950a-7162f7bce032","Type":"ContainerDied","Data":"fa6201b52d14e99f6cea3de3c8c89efb500fedf9bd7c3d79ef1de96fd0c4ddcf"} Jan 22 09:21:58 crc kubenswrapper[4811]: I0122 09:21:58.939092 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b5fd9ff5-td6xf" Jan 22 09:21:58 crc kubenswrapper[4811]: I0122 09:21:58.959596 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.48711337 podStartE2EDuration="5.959580764s" podCreationTimestamp="2026-01-22 09:21:53 +0000 UTC" firstStartedPulling="2026-01-22 09:21:54.667842106 +0000 UTC m=+958.990029229" lastFinishedPulling="2026-01-22 09:21:58.140309501 +0000 UTC m=+962.462496623" observedRunningTime="2026-01-22 09:21:58.588157266 +0000 UTC m=+962.910344389" watchObservedRunningTime="2026-01-22 09:21:58.959580764 +0000 UTC m=+963.281767887" Jan 22 09:21:58 crc kubenswrapper[4811]: I0122 09:21:58.980402 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8ace218-8390-49f1-950a-7162f7bce032-internal-tls-certs\") pod \"d8ace218-8390-49f1-950a-7162f7bce032\" (UID: \"d8ace218-8390-49f1-950a-7162f7bce032\") " Jan 22 09:21:58 crc kubenswrapper[4811]: I0122 09:21:58.980442 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d8ace218-8390-49f1-950a-7162f7bce032-httpd-config\") pod \"d8ace218-8390-49f1-950a-7162f7bce032\" (UID: \"d8ace218-8390-49f1-950a-7162f7bce032\") " Jan 22 09:21:58 crc kubenswrapper[4811]: I0122 09:21:58.980469 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d8ace218-8390-49f1-950a-7162f7bce032-config\") pod \"d8ace218-8390-49f1-950a-7162f7bce032\" (UID: \"d8ace218-8390-49f1-950a-7162f7bce032\") " Jan 22 09:21:58 crc kubenswrapper[4811]: I0122 09:21:58.980511 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ace218-8390-49f1-950a-7162f7bce032-combined-ca-bundle\") pod \"d8ace218-8390-49f1-950a-7162f7bce032\" (UID: \"d8ace218-8390-49f1-950a-7162f7bce032\") " Jan 22 09:21:58 crc 
kubenswrapper[4811]: I0122 09:21:58.980554 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsjgv\" (UniqueName: \"kubernetes.io/projected/d8ace218-8390-49f1-950a-7162f7bce032-kube-api-access-qsjgv\") pod \"d8ace218-8390-49f1-950a-7162f7bce032\" (UID: \"d8ace218-8390-49f1-950a-7162f7bce032\") " Jan 22 09:21:58 crc kubenswrapper[4811]: I0122 09:21:58.980573 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8ace218-8390-49f1-950a-7162f7bce032-ovndb-tls-certs\") pod \"d8ace218-8390-49f1-950a-7162f7bce032\" (UID: \"d8ace218-8390-49f1-950a-7162f7bce032\") " Jan 22 09:21:58 crc kubenswrapper[4811]: I0122 09:21:58.987896 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8ace218-8390-49f1-950a-7162f7bce032-kube-api-access-qsjgv" (OuterVolumeSpecName: "kube-api-access-qsjgv") pod "d8ace218-8390-49f1-950a-7162f7bce032" (UID: "d8ace218-8390-49f1-950a-7162f7bce032"). InnerVolumeSpecName "kube-api-access-qsjgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:21:58 crc kubenswrapper[4811]: I0122 09:21:58.996748 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8ace218-8390-49f1-950a-7162f7bce032-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "d8ace218-8390-49f1-950a-7162f7bce032" (UID: "d8ace218-8390-49f1-950a-7162f7bce032"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:59 crc kubenswrapper[4811]: I0122 09:21:59.048336 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8ace218-8390-49f1-950a-7162f7bce032-config" (OuterVolumeSpecName: "config") pod "d8ace218-8390-49f1-950a-7162f7bce032" (UID: "d8ace218-8390-49f1-950a-7162f7bce032"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:59 crc kubenswrapper[4811]: I0122 09:21:59.059478 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8ace218-8390-49f1-950a-7162f7bce032-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d8ace218-8390-49f1-950a-7162f7bce032" (UID: "d8ace218-8390-49f1-950a-7162f7bce032"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:59 crc kubenswrapper[4811]: I0122 09:21:59.079811 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8ace218-8390-49f1-950a-7162f7bce032-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8ace218-8390-49f1-950a-7162f7bce032" (UID: "d8ace218-8390-49f1-950a-7162f7bce032"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:59 crc kubenswrapper[4811]: I0122 09:21:59.083351 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8ace218-8390-49f1-950a-7162f7bce032-public-tls-certs\") pod \"d8ace218-8390-49f1-950a-7162f7bce032\" (UID: \"d8ace218-8390-49f1-950a-7162f7bce032\") " Jan 22 09:21:59 crc kubenswrapper[4811]: I0122 09:21:59.084611 4811 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8ace218-8390-49f1-950a-7162f7bce032-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:59 crc kubenswrapper[4811]: I0122 09:21:59.084725 4811 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d8ace218-8390-49f1-950a-7162f7bce032-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:59 crc kubenswrapper[4811]: I0122 09:21:59.084738 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d8ace218-8390-49f1-950a-7162f7bce032-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:59 crc kubenswrapper[4811]: I0122 09:21:59.085068 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ace218-8390-49f1-950a-7162f7bce032-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:59 crc kubenswrapper[4811]: I0122 09:21:59.085108 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsjgv\" (UniqueName: \"kubernetes.io/projected/d8ace218-8390-49f1-950a-7162f7bce032-kube-api-access-qsjgv\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:59 crc kubenswrapper[4811]: I0122 09:21:59.093747 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8ace218-8390-49f1-950a-7162f7bce032-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "d8ace218-8390-49f1-950a-7162f7bce032" (UID: "d8ace218-8390-49f1-950a-7162f7bce032"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:59 crc kubenswrapper[4811]: I0122 09:21:59.123336 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8ace218-8390-49f1-950a-7162f7bce032-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d8ace218-8390-49f1-950a-7162f7bce032" (UID: "d8ace218-8390-49f1-950a-7162f7bce032"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:21:59 crc kubenswrapper[4811]: I0122 09:21:59.187361 4811 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8ace218-8390-49f1-950a-7162f7bce032-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:59 crc kubenswrapper[4811]: I0122 09:21:59.187387 4811 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8ace218-8390-49f1-950a-7162f7bce032-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 09:21:59 crc kubenswrapper[4811]: I0122 09:21:59.343327 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xs5vv" Jan 22 09:21:59 crc kubenswrapper[4811]: I0122 09:21:59.384773 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xs5vv" Jan 22 09:21:59 crc kubenswrapper[4811]: I0122 09:21:59.576204 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b5fd9ff5-td6xf" event={"ID":"d8ace218-8390-49f1-950a-7162f7bce032","Type":"ContainerDied","Data":"cf52d87bfede37c38a852f5c0f9615fc219d79e983f911a01d8548db7cd3108c"} Jan 22 09:21:59 crc kubenswrapper[4811]: I0122 09:21:59.576270 4811 scope.go:117] "RemoveContainer" containerID="9391132e8d380c4c3f1fd0d0b7dfdabe021b6149cf92e443862c0b800bb9c893" Jan 22 09:21:59 crc kubenswrapper[4811]: I0122 09:21:59.576271 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b5fd9ff5-td6xf" Jan 22 09:21:59 crc kubenswrapper[4811]: I0122 09:21:59.606706 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-b5fd9ff5-td6xf"] Jan 22 09:21:59 crc kubenswrapper[4811]: I0122 09:21:59.611547 4811 scope.go:117] "RemoveContainer" containerID="fa6201b52d14e99f6cea3de3c8c89efb500fedf9bd7c3d79ef1de96fd0c4ddcf" Jan 22 09:21:59 crc kubenswrapper[4811]: I0122 09:21:59.614770 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-b5fd9ff5-td6xf"] Jan 22 09:22:00 crc kubenswrapper[4811]: I0122 09:22:00.003931 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8ace218-8390-49f1-950a-7162f7bce032" path="/var/lib/kubelet/pods/d8ace218-8390-49f1-950a-7162f7bce032/volumes" Jan 22 09:22:00 crc kubenswrapper[4811]: I0122 09:22:00.157754 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xs5vv"] Jan 22 09:22:00 crc kubenswrapper[4811]: I0122 09:22:00.589903 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xs5vv" podUID="8e75e74d-a6bf-48c5-bedb-8fb30806d29e" containerName="registry-server" containerID="cri-o://aee4f296e96eff80133eac3ffcfc74b6ced9111cd32179b5e35f0ccbbba56382" gracePeriod=2 Jan 22 09:22:01 crc kubenswrapper[4811]: I0122 09:22:01.040730 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xs5vv" Jan 22 09:22:01 crc kubenswrapper[4811]: I0122 09:22:01.134788 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e75e74d-a6bf-48c5-bedb-8fb30806d29e-utilities\") pod \"8e75e74d-a6bf-48c5-bedb-8fb30806d29e\" (UID: \"8e75e74d-a6bf-48c5-bedb-8fb30806d29e\") " Jan 22 09:22:01 crc kubenswrapper[4811]: I0122 09:22:01.134997 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9x72c\" (UniqueName: \"kubernetes.io/projected/8e75e74d-a6bf-48c5-bedb-8fb30806d29e-kube-api-access-9x72c\") pod \"8e75e74d-a6bf-48c5-bedb-8fb30806d29e\" (UID: \"8e75e74d-a6bf-48c5-bedb-8fb30806d29e\") " Jan 22 09:22:01 crc kubenswrapper[4811]: I0122 09:22:01.135028 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e75e74d-a6bf-48c5-bedb-8fb30806d29e-catalog-content\") pod \"8e75e74d-a6bf-48c5-bedb-8fb30806d29e\" (UID: \"8e75e74d-a6bf-48c5-bedb-8fb30806d29e\") " Jan 22 09:22:01 crc kubenswrapper[4811]: I0122 09:22:01.135364 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e75e74d-a6bf-48c5-bedb-8fb30806d29e-utilities" (OuterVolumeSpecName: "utilities") pod "8e75e74d-a6bf-48c5-bedb-8fb30806d29e" (UID: "8e75e74d-a6bf-48c5-bedb-8fb30806d29e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:22:01 crc kubenswrapper[4811]: I0122 09:22:01.136246 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e75e74d-a6bf-48c5-bedb-8fb30806d29e-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:01 crc kubenswrapper[4811]: I0122 09:22:01.140438 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e75e74d-a6bf-48c5-bedb-8fb30806d29e-kube-api-access-9x72c" (OuterVolumeSpecName: "kube-api-access-9x72c") pod "8e75e74d-a6bf-48c5-bedb-8fb30806d29e" (UID: "8e75e74d-a6bf-48c5-bedb-8fb30806d29e"). InnerVolumeSpecName "kube-api-access-9x72c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:22:01 crc kubenswrapper[4811]: I0122 09:22:01.236551 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e75e74d-a6bf-48c5-bedb-8fb30806d29e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e75e74d-a6bf-48c5-bedb-8fb30806d29e" (UID: "8e75e74d-a6bf-48c5-bedb-8fb30806d29e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:22:01 crc kubenswrapper[4811]: I0122 09:22:01.238414 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e75e74d-a6bf-48c5-bedb-8fb30806d29e-catalog-content\") pod \"8e75e74d-a6bf-48c5-bedb-8fb30806d29e\" (UID: \"8e75e74d-a6bf-48c5-bedb-8fb30806d29e\") " Jan 22 09:22:01 crc kubenswrapper[4811]: W0122 09:22:01.238576 4811 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/8e75e74d-a6bf-48c5-bedb-8fb30806d29e/volumes/kubernetes.io~empty-dir/catalog-content Jan 22 09:22:01 crc kubenswrapper[4811]: I0122 09:22:01.238602 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e75e74d-a6bf-48c5-bedb-8fb30806d29e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e75e74d-a6bf-48c5-bedb-8fb30806d29e" (UID: "8e75e74d-a6bf-48c5-bedb-8fb30806d29e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:22:01 crc kubenswrapper[4811]: I0122 09:22:01.239954 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9x72c\" (UniqueName: \"kubernetes.io/projected/8e75e74d-a6bf-48c5-bedb-8fb30806d29e-kube-api-access-9x72c\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:01 crc kubenswrapper[4811]: I0122 09:22:01.240045 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e75e74d-a6bf-48c5-bedb-8fb30806d29e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:01 crc kubenswrapper[4811]: I0122 09:22:01.602927 4811 generic.go:334] "Generic (PLEG): container finished" podID="8e75e74d-a6bf-48c5-bedb-8fb30806d29e" containerID="aee4f296e96eff80133eac3ffcfc74b6ced9111cd32179b5e35f0ccbbba56382" exitCode=0 Jan 22 09:22:01 crc kubenswrapper[4811]: I0122 09:22:01.602971 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xs5vv" event={"ID":"8e75e74d-a6bf-48c5-bedb-8fb30806d29e","Type":"ContainerDied","Data":"aee4f296e96eff80133eac3ffcfc74b6ced9111cd32179b5e35f0ccbbba56382"} Jan 22 09:22:01 crc kubenswrapper[4811]: I0122 09:22:01.602979 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xs5vv" Jan 22 09:22:01 crc kubenswrapper[4811]: I0122 09:22:01.603020 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xs5vv" event={"ID":"8e75e74d-a6bf-48c5-bedb-8fb30806d29e","Type":"ContainerDied","Data":"1e96e6bcf82ef503c5fd18323d48f4e04c27f96e302498619b40c9a80ba2d38d"} Jan 22 09:22:01 crc kubenswrapper[4811]: I0122 09:22:01.603077 4811 scope.go:117] "RemoveContainer" containerID="aee4f296e96eff80133eac3ffcfc74b6ced9111cd32179b5e35f0ccbbba56382" Jan 22 09:22:01 crc kubenswrapper[4811]: I0122 09:22:01.626683 4811 scope.go:117] "RemoveContainer" containerID="2943a71f7a5f384c12e44fcd7922d176ba525ee5172ce3baf9607b87215d480a" Jan 22 09:22:01 crc kubenswrapper[4811]: I0122 09:22:01.643298 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xs5vv"] Jan 22 09:22:01 crc kubenswrapper[4811]: I0122 09:22:01.661059 4811 scope.go:117] "RemoveContainer" containerID="f5bf066bfef2789231511e3b08f5222491585ceed2463b00c6f36d138f5fccc1" Jan 22 09:22:01 crc kubenswrapper[4811]: I0122 09:22:01.663350 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xs5vv"] Jan 22 09:22:01 crc kubenswrapper[4811]: I0122 09:22:01.697085 4811 scope.go:117] "RemoveContainer" containerID="aee4f296e96eff80133eac3ffcfc74b6ced9111cd32179b5e35f0ccbbba56382" Jan 22 09:22:01 crc kubenswrapper[4811]: E0122 09:22:01.697544 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aee4f296e96eff80133eac3ffcfc74b6ced9111cd32179b5e35f0ccbbba56382\": container with ID starting with aee4f296e96eff80133eac3ffcfc74b6ced9111cd32179b5e35f0ccbbba56382 not found: ID does not exist" containerID="aee4f296e96eff80133eac3ffcfc74b6ced9111cd32179b5e35f0ccbbba56382" Jan 22 09:22:01 crc kubenswrapper[4811]: I0122 09:22:01.697602 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aee4f296e96eff80133eac3ffcfc74b6ced9111cd32179b5e35f0ccbbba56382"} err="failed to get container status \"aee4f296e96eff80133eac3ffcfc74b6ced9111cd32179b5e35f0ccbbba56382\": rpc error: code = NotFound desc = could not find container \"aee4f296e96eff80133eac3ffcfc74b6ced9111cd32179b5e35f0ccbbba56382\": container with ID starting with aee4f296e96eff80133eac3ffcfc74b6ced9111cd32179b5e35f0ccbbba56382 not found: ID does not exist" Jan 22 09:22:01 crc kubenswrapper[4811]: I0122 09:22:01.697651 4811 scope.go:117] "RemoveContainer" containerID="2943a71f7a5f384c12e44fcd7922d176ba525ee5172ce3baf9607b87215d480a" Jan 22 09:22:01 crc kubenswrapper[4811]: E0122 09:22:01.698113 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2943a71f7a5f384c12e44fcd7922d176ba525ee5172ce3baf9607b87215d480a\": container with ID starting with 2943a71f7a5f384c12e44fcd7922d176ba525ee5172ce3baf9607b87215d480a not found: ID does not exist" containerID="2943a71f7a5f384c12e44fcd7922d176ba525ee5172ce3baf9607b87215d480a" Jan 22 09:22:01 crc kubenswrapper[4811]: I0122 09:22:01.698164 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2943a71f7a5f384c12e44fcd7922d176ba525ee5172ce3baf9607b87215d480a"} err="failed to get container status \"2943a71f7a5f384c12e44fcd7922d176ba525ee5172ce3baf9607b87215d480a\": rpc error: code = NotFound desc = could not find container 
\"2943a71f7a5f384c12e44fcd7922d176ba525ee5172ce3baf9607b87215d480a\": container with ID starting with 2943a71f7a5f384c12e44fcd7922d176ba525ee5172ce3baf9607b87215d480a not found: ID does not exist" Jan 22 09:22:01 crc kubenswrapper[4811]: I0122 09:22:01.698203 4811 scope.go:117] "RemoveContainer" containerID="f5bf066bfef2789231511e3b08f5222491585ceed2463b00c6f36d138f5fccc1" Jan 22 09:22:01 crc kubenswrapper[4811]: E0122 09:22:01.698468 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5bf066bfef2789231511e3b08f5222491585ceed2463b00c6f36d138f5fccc1\": container with ID starting with f5bf066bfef2789231511e3b08f5222491585ceed2463b00c6f36d138f5fccc1 not found: ID does not exist" containerID="f5bf066bfef2789231511e3b08f5222491585ceed2463b00c6f36d138f5fccc1" Jan 22 09:22:01 crc kubenswrapper[4811]: I0122 09:22:01.698505 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5bf066bfef2789231511e3b08f5222491585ceed2463b00c6f36d138f5fccc1"} err="failed to get container status \"f5bf066bfef2789231511e3b08f5222491585ceed2463b00c6f36d138f5fccc1\": rpc error: code = NotFound desc = could not find container \"f5bf066bfef2789231511e3b08f5222491585ceed2463b00c6f36d138f5fccc1\": container with ID starting with f5bf066bfef2789231511e3b08f5222491585ceed2463b00c6f36d138f5fccc1 not found: ID does not exist" Jan 22 09:22:02 crc kubenswrapper[4811]: I0122 09:22:02.000780 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e75e74d-a6bf-48c5-bedb-8fb30806d29e" path="/var/lib/kubelet/pods/8e75e74d-a6bf-48c5-bedb-8fb30806d29e/volumes" Jan 22 09:22:02 crc kubenswrapper[4811]: I0122 09:22:02.112159 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hz9h2" Jan 22 09:22:02 crc kubenswrapper[4811]: I0122 09:22:02.112201 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hz9h2" Jan 22 09:22:02 crc kubenswrapper[4811]: I0122 09:22:02.151411 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hz9h2" Jan 22 09:22:02 crc kubenswrapper[4811]: I0122 09:22:02.679380 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hz9h2" Jan 22 09:22:02 crc kubenswrapper[4811]: I0122 09:22:02.863046 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5bf9c84c75-rbgml" Jan 22 09:22:03 crc kubenswrapper[4811]: I0122 09:22:03.036610 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 22 09:22:03 crc kubenswrapper[4811]: I0122 09:22:03.794422 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 22 09:22:03 crc kubenswrapper[4811]: E0122 09:22:03.794720 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e75e74d-a6bf-48c5-bedb-8fb30806d29e" containerName="registry-server" Jan 22 09:22:03 crc kubenswrapper[4811]: I0122 09:22:03.794733 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e75e74d-a6bf-48c5-bedb-8fb30806d29e" containerName="registry-server" Jan 22 09:22:03 crc kubenswrapper[4811]: E0122 09:22:03.794742 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8ace218-8390-49f1-950a-7162f7bce032" containerName="neutron-httpd" Jan 22 
09:22:03 crc kubenswrapper[4811]: I0122 09:22:03.794747 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8ace218-8390-49f1-950a-7162f7bce032" containerName="neutron-httpd" Jan 22 09:22:03 crc kubenswrapper[4811]: E0122 09:22:03.794761 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49ce92f5-a20a-4429-89a1-764b3db3e28a" containerName="barbican-api-log" Jan 22 09:22:03 crc kubenswrapper[4811]: I0122 09:22:03.794765 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="49ce92f5-a20a-4429-89a1-764b3db3e28a" containerName="barbican-api-log" Jan 22 09:22:03 crc kubenswrapper[4811]: E0122 09:22:03.794770 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49ce92f5-a20a-4429-89a1-764b3db3e28a" containerName="barbican-api" Jan 22 09:22:03 crc kubenswrapper[4811]: I0122 09:22:03.794775 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="49ce92f5-a20a-4429-89a1-764b3db3e28a" containerName="barbican-api" Jan 22 09:22:03 crc kubenswrapper[4811]: E0122 09:22:03.794784 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e75e74d-a6bf-48c5-bedb-8fb30806d29e" containerName="extract-content" Jan 22 09:22:03 crc kubenswrapper[4811]: I0122 09:22:03.794789 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e75e74d-a6bf-48c5-bedb-8fb30806d29e" containerName="extract-content" Jan 22 09:22:03 crc kubenswrapper[4811]: E0122 09:22:03.794797 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8ace218-8390-49f1-950a-7162f7bce032" containerName="neutron-api" Jan 22 09:22:03 crc kubenswrapper[4811]: I0122 09:22:03.794801 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8ace218-8390-49f1-950a-7162f7bce032" containerName="neutron-api" Jan 22 09:22:03 crc kubenswrapper[4811]: E0122 09:22:03.794815 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e75e74d-a6bf-48c5-bedb-8fb30806d29e" containerName="extract-utilities" Jan 22 09:22:03 crc kubenswrapper[4811]: I0122 09:22:03.794820 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e75e74d-a6bf-48c5-bedb-8fb30806d29e" containerName="extract-utilities" Jan 22 09:22:03 crc kubenswrapper[4811]: I0122 09:22:03.794945 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="49ce92f5-a20a-4429-89a1-764b3db3e28a" containerName="barbican-api-log" Jan 22 09:22:03 crc kubenswrapper[4811]: I0122 09:22:03.794955 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="49ce92f5-a20a-4429-89a1-764b3db3e28a" containerName="barbican-api" Jan 22 09:22:03 crc kubenswrapper[4811]: I0122 09:22:03.794964 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8ace218-8390-49f1-950a-7162f7bce032" containerName="neutron-api" Jan 22 09:22:03 crc kubenswrapper[4811]: I0122 09:22:03.794975 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e75e74d-a6bf-48c5-bedb-8fb30806d29e" containerName="registry-server" Jan 22 09:22:03 crc kubenswrapper[4811]: I0122 09:22:03.794982 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8ace218-8390-49f1-950a-7162f7bce032" containerName="neutron-httpd" Jan 22 09:22:03 crc kubenswrapper[4811]: I0122 09:22:03.795408 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 22 09:22:03 crc kubenswrapper[4811]: I0122 09:22:03.798824 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-cmd2p" Jan 22 09:22:03 crc kubenswrapper[4811]: I0122 09:22:03.799661 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 22 09:22:03 crc kubenswrapper[4811]: I0122 09:22:03.799677 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 22 09:22:03 crc kubenswrapper[4811]: I0122 09:22:03.811824 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 22 09:22:03 crc kubenswrapper[4811]: I0122 09:22:03.896406 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/616ddacf-6ee0-46d9-9e03-c234d53b5dd8-combined-ca-bundle\") pod \"openstackclient\" (UID: \"616ddacf-6ee0-46d9-9e03-c234d53b5dd8\") " pod="openstack/openstackclient" Jan 22 09:22:03 crc kubenswrapper[4811]: I0122 09:22:03.896463 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/616ddacf-6ee0-46d9-9e03-c234d53b5dd8-openstack-config-secret\") pod \"openstackclient\" (UID: \"616ddacf-6ee0-46d9-9e03-c234d53b5dd8\") " pod="openstack/openstackclient" Jan 22 09:22:03 crc kubenswrapper[4811]: I0122 09:22:03.896552 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s62kt\" (UniqueName: \"kubernetes.io/projected/616ddacf-6ee0-46d9-9e03-c234d53b5dd8-kube-api-access-s62kt\") pod \"openstackclient\" (UID: \"616ddacf-6ee0-46d9-9e03-c234d53b5dd8\") " pod="openstack/openstackclient" Jan 22 09:22:03 crc kubenswrapper[4811]: I0122 09:22:03.896585 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/616ddacf-6ee0-46d9-9e03-c234d53b5dd8-openstack-config\") pod \"openstackclient\" (UID: \"616ddacf-6ee0-46d9-9e03-c234d53b5dd8\") " pod="openstack/openstackclient" Jan 22 09:22:04 crc kubenswrapper[4811]: I0122 09:22:04.001985 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/616ddacf-6ee0-46d9-9e03-c234d53b5dd8-combined-ca-bundle\") pod \"openstackclient\" (UID: \"616ddacf-6ee0-46d9-9e03-c234d53b5dd8\") " pod="openstack/openstackclient" Jan 22 09:22:04 crc kubenswrapper[4811]: I0122 09:22:04.002088 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/616ddacf-6ee0-46d9-9e03-c234d53b5dd8-openstack-config-secret\") pod \"openstackclient\" (UID: \"616ddacf-6ee0-46d9-9e03-c234d53b5dd8\") " pod="openstack/openstackclient" Jan 22 09:22:04 crc kubenswrapper[4811]: I0122 09:22:04.002250 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s62kt\" (UniqueName: \"kubernetes.io/projected/616ddacf-6ee0-46d9-9e03-c234d53b5dd8-kube-api-access-s62kt\") pod \"openstackclient\" (UID: \"616ddacf-6ee0-46d9-9e03-c234d53b5dd8\") " pod="openstack/openstackclient" Jan 22 09:22:04 crc kubenswrapper[4811]: I0122 09:22:04.002311 4811 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/616ddacf-6ee0-46d9-9e03-c234d53b5dd8-openstack-config\") pod \"openstackclient\" (UID: \"616ddacf-6ee0-46d9-9e03-c234d53b5dd8\") " pod="openstack/openstackclient" Jan 22 09:22:04 crc kubenswrapper[4811]: I0122 09:22:04.003274 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/616ddacf-6ee0-46d9-9e03-c234d53b5dd8-openstack-config\") pod \"openstackclient\" (UID: \"616ddacf-6ee0-46d9-9e03-c234d53b5dd8\") " pod="openstack/openstackclient" Jan 22 09:22:04 crc kubenswrapper[4811]: I0122 09:22:04.008700 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/616ddacf-6ee0-46d9-9e03-c234d53b5dd8-openstack-config-secret\") pod \"openstackclient\" (UID: \"616ddacf-6ee0-46d9-9e03-c234d53b5dd8\") " pod="openstack/openstackclient" Jan 22 09:22:04 crc kubenswrapper[4811]: I0122 09:22:04.027097 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/616ddacf-6ee0-46d9-9e03-c234d53b5dd8-combined-ca-bundle\") pod \"openstackclient\" (UID: \"616ddacf-6ee0-46d9-9e03-c234d53b5dd8\") " pod="openstack/openstackclient" Jan 22 09:22:04 crc kubenswrapper[4811]: I0122 09:22:04.032453 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s62kt\" (UniqueName: \"kubernetes.io/projected/616ddacf-6ee0-46d9-9e03-c234d53b5dd8-kube-api-access-s62kt\") pod \"openstackclient\" (UID: \"616ddacf-6ee0-46d9-9e03-c234d53b5dd8\") " pod="openstack/openstackclient" Jan 22 09:22:04 crc kubenswrapper[4811]: I0122 09:22:04.109471 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 22 09:22:04 crc kubenswrapper[4811]: I0122 09:22:04.359019 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hz9h2"] Jan 22 09:22:04 crc kubenswrapper[4811]: I0122 09:22:04.553169 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 22 09:22:04 crc kubenswrapper[4811]: I0122 09:22:04.631581 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"616ddacf-6ee0-46d9-9e03-c234d53b5dd8","Type":"ContainerStarted","Data":"1f75fb68084035c1b8f319f42b84be42301bced3206e4352bd141aec0e10d9cf"} Jan 22 09:22:04 crc kubenswrapper[4811]: I0122 09:22:04.631729 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hz9h2" podUID="889f85a4-9070-4d9c-82b9-4171d53b035c" containerName="registry-server" containerID="cri-o://5bf0d84c1603be5235f1064336c52ce77092a96a9da377df2a24850178937468" gracePeriod=2 Jan 22 09:22:04 crc kubenswrapper[4811]: I0122 09:22:04.980960 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hz9h2" Jan 22 09:22:05 crc kubenswrapper[4811]: I0122 09:22:05.020227 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2xg5\" (UniqueName: \"kubernetes.io/projected/889f85a4-9070-4d9c-82b9-4171d53b035c-kube-api-access-g2xg5\") pod \"889f85a4-9070-4d9c-82b9-4171d53b035c\" (UID: \"889f85a4-9070-4d9c-82b9-4171d53b035c\") " Jan 22 09:22:05 crc kubenswrapper[4811]: I0122 09:22:05.020399 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/889f85a4-9070-4d9c-82b9-4171d53b035c-catalog-content\") pod \"889f85a4-9070-4d9c-82b9-4171d53b035c\" (UID: \"889f85a4-9070-4d9c-82b9-4171d53b035c\") " Jan 22 09:22:05 crc kubenswrapper[4811]: I0122 09:22:05.020682 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/889f85a4-9070-4d9c-82b9-4171d53b035c-utilities\") pod \"889f85a4-9070-4d9c-82b9-4171d53b035c\" (UID: \"889f85a4-9070-4d9c-82b9-4171d53b035c\") " Jan 22 09:22:05 crc kubenswrapper[4811]: I0122 09:22:05.024654 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/889f85a4-9070-4d9c-82b9-4171d53b035c-utilities" (OuterVolumeSpecName: "utilities") pod "889f85a4-9070-4d9c-82b9-4171d53b035c" (UID: "889f85a4-9070-4d9c-82b9-4171d53b035c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:22:05 crc kubenswrapper[4811]: I0122 09:22:05.030199 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/889f85a4-9070-4d9c-82b9-4171d53b035c-kube-api-access-g2xg5" (OuterVolumeSpecName: "kube-api-access-g2xg5") pod "889f85a4-9070-4d9c-82b9-4171d53b035c" (UID: "889f85a4-9070-4d9c-82b9-4171d53b035c"). InnerVolumeSpecName "kube-api-access-g2xg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:22:05 crc kubenswrapper[4811]: I0122 09:22:05.042491 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/889f85a4-9070-4d9c-82b9-4171d53b035c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "889f85a4-9070-4d9c-82b9-4171d53b035c" (UID: "889f85a4-9070-4d9c-82b9-4171d53b035c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:22:05 crc kubenswrapper[4811]: I0122 09:22:05.123472 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2xg5\" (UniqueName: \"kubernetes.io/projected/889f85a4-9070-4d9c-82b9-4171d53b035c-kube-api-access-g2xg5\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:05 crc kubenswrapper[4811]: I0122 09:22:05.123511 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/889f85a4-9070-4d9c-82b9-4171d53b035c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:05 crc kubenswrapper[4811]: I0122 09:22:05.123523 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/889f85a4-9070-4d9c-82b9-4171d53b035c-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:05 crc kubenswrapper[4811]: I0122 09:22:05.654648 4811 generic.go:334] "Generic (PLEG): container finished" podID="889f85a4-9070-4d9c-82b9-4171d53b035c" containerID="5bf0d84c1603be5235f1064336c52ce77092a96a9da377df2a24850178937468" exitCode=0 Jan 22 09:22:05 crc kubenswrapper[4811]: I0122 09:22:05.654701 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hz9h2" event={"ID":"889f85a4-9070-4d9c-82b9-4171d53b035c","Type":"ContainerDied","Data":"5bf0d84c1603be5235f1064336c52ce77092a96a9da377df2a24850178937468"} Jan 22 09:22:05 crc kubenswrapper[4811]: I0122 09:22:05.654738 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hz9h2" Jan 22 09:22:05 crc kubenswrapper[4811]: I0122 09:22:05.654753 4811 scope.go:117] "RemoveContainer" containerID="5bf0d84c1603be5235f1064336c52ce77092a96a9da377df2a24850178937468" Jan 22 09:22:05 crc kubenswrapper[4811]: I0122 09:22:05.654742 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hz9h2" event={"ID":"889f85a4-9070-4d9c-82b9-4171d53b035c","Type":"ContainerDied","Data":"807f35e367407114fa99cc408c2a9cccbdb7e45a6dd121415c2cbed3051b45b9"} Jan 22 09:22:05 crc kubenswrapper[4811]: I0122 09:22:05.681195 4811 scope.go:117] "RemoveContainer" containerID="8628ce5153686113eb35cc27ba826d5ea5ff318b459e8a25ba07eda13f1ca283" Jan 22 09:22:05 crc kubenswrapper[4811]: I0122 09:22:05.688300 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hz9h2"] Jan 22 09:22:05 crc kubenswrapper[4811]: I0122 09:22:05.696506 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hz9h2"] Jan 22 09:22:05 crc kubenswrapper[4811]: I0122 09:22:05.699763 4811 scope.go:117] "RemoveContainer" containerID="9c3cd1c69938832def8b5ce1bba5ff1e3d689fa3e524fe8b015a1cd1d93b988f" Jan 22 09:22:05 crc kubenswrapper[4811]: I0122 09:22:05.736678 4811 scope.go:117] "RemoveContainer" containerID="5bf0d84c1603be5235f1064336c52ce77092a96a9da377df2a24850178937468" Jan 22 09:22:05 crc kubenswrapper[4811]: E0122 09:22:05.737332 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bf0d84c1603be5235f1064336c52ce77092a96a9da377df2a24850178937468\": container with ID starting with 5bf0d84c1603be5235f1064336c52ce77092a96a9da377df2a24850178937468 not found: ID does not exist" containerID="5bf0d84c1603be5235f1064336c52ce77092a96a9da377df2a24850178937468" Jan 22 09:22:05 crc kubenswrapper[4811]: I0122 09:22:05.737380 4811 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bf0d84c1603be5235f1064336c52ce77092a96a9da377df2a24850178937468"} err="failed to get container status \"5bf0d84c1603be5235f1064336c52ce77092a96a9da377df2a24850178937468\": rpc error: code = NotFound desc = could not find container \"5bf0d84c1603be5235f1064336c52ce77092a96a9da377df2a24850178937468\": container with ID starting with 5bf0d84c1603be5235f1064336c52ce77092a96a9da377df2a24850178937468 not found: ID does not exist" Jan 22 09:22:05 crc kubenswrapper[4811]: I0122 09:22:05.737408 4811 scope.go:117] "RemoveContainer" containerID="8628ce5153686113eb35cc27ba826d5ea5ff318b459e8a25ba07eda13f1ca283" Jan 22 09:22:05 crc kubenswrapper[4811]: E0122 09:22:05.738201 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8628ce5153686113eb35cc27ba826d5ea5ff318b459e8a25ba07eda13f1ca283\": container with ID starting with 8628ce5153686113eb35cc27ba826d5ea5ff318b459e8a25ba07eda13f1ca283 not found: ID does not exist" containerID="8628ce5153686113eb35cc27ba826d5ea5ff318b459e8a25ba07eda13f1ca283" Jan 22 09:22:05 crc kubenswrapper[4811]: I0122 09:22:05.738357 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8628ce5153686113eb35cc27ba826d5ea5ff318b459e8a25ba07eda13f1ca283"} err="failed to get container status \"8628ce5153686113eb35cc27ba826d5ea5ff318b459e8a25ba07eda13f1ca283\": rpc error: code = NotFound desc = could not find container \"8628ce5153686113eb35cc27ba826d5ea5ff318b459e8a25ba07eda13f1ca283\": container with ID starting with 8628ce5153686113eb35cc27ba826d5ea5ff318b459e8a25ba07eda13f1ca283 not found: ID does not exist" Jan 22 09:22:05 crc kubenswrapper[4811]: I0122 09:22:05.738524 4811 scope.go:117] "RemoveContainer" containerID="9c3cd1c69938832def8b5ce1bba5ff1e3d689fa3e524fe8b015a1cd1d93b988f" Jan 22 09:22:05 crc kubenswrapper[4811]: E0122 09:22:05.739292 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c3cd1c69938832def8b5ce1bba5ff1e3d689fa3e524fe8b015a1cd1d93b988f\": container with ID starting with 9c3cd1c69938832def8b5ce1bba5ff1e3d689fa3e524fe8b015a1cd1d93b988f not found: ID does not exist" containerID="9c3cd1c69938832def8b5ce1bba5ff1e3d689fa3e524fe8b015a1cd1d93b988f" Jan 22 09:22:05 crc kubenswrapper[4811]: I0122 09:22:05.739331 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c3cd1c69938832def8b5ce1bba5ff1e3d689fa3e524fe8b015a1cd1d93b988f"} err="failed to get container status \"9c3cd1c69938832def8b5ce1bba5ff1e3d689fa3e524fe8b015a1cd1d93b988f\": rpc error: code = NotFound desc = could not find container \"9c3cd1c69938832def8b5ce1bba5ff1e3d689fa3e524fe8b015a1cd1d93b988f\": container with ID starting with 9c3cd1c69938832def8b5ce1bba5ff1e3d689fa3e524fe8b015a1cd1d93b988f not found: ID does not exist" Jan 22 09:22:06 crc kubenswrapper[4811]: I0122 09:22:06.001083 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="889f85a4-9070-4d9c-82b9-4171d53b035c" path="/var/lib/kubelet/pods/889f85a4-9070-4d9c-82b9-4171d53b035c/volumes" Jan 22 09:22:07 crc kubenswrapper[4811]: I0122 09:22:07.831232 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6976f89774-xh5fd" Jan 22 09:22:07 crc kubenswrapper[4811]: I0122 09:22:07.958482 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/placement-6976f89774-xh5fd" Jan 22 09:22:10 crc kubenswrapper[4811]: I0122 09:22:10.276789 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:22:10 crc kubenswrapper[4811]: I0122 09:22:10.277363 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="467a6b01-4fa9-484a-b65e-a91d89e91eb3" containerName="ceilometer-central-agent" containerID="cri-o://3a24fd76a13bd447bf4170555fd4850c0729f4161be448b52a68a2f87fdfd94d" gracePeriod=30 Jan 22 09:22:10 crc kubenswrapper[4811]: I0122 09:22:10.277714 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="467a6b01-4fa9-484a-b65e-a91d89e91eb3" containerName="proxy-httpd" containerID="cri-o://f501cc7a9057dead55e46a3cd02152bd9681e329de95326d5c1dc370925e45cc" gracePeriod=30 Jan 22 09:22:10 crc kubenswrapper[4811]: I0122 09:22:10.277771 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="467a6b01-4fa9-484a-b65e-a91d89e91eb3" containerName="ceilometer-notification-agent" containerID="cri-o://7b25cb1bfa92c85d1064c702133e9b152515bc276c69919013284de19ca9202c" gracePeriod=30 Jan 22 09:22:10 crc kubenswrapper[4811]: I0122 09:22:10.277815 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="467a6b01-4fa9-484a-b65e-a91d89e91eb3" containerName="sg-core" containerID="cri-o://18222df6c78a680ea3883cf28fcbab7b02b7dfc128f8b9c6b10b601843afc038" gracePeriod=30 Jan 22 09:22:10 crc kubenswrapper[4811]: I0122 09:22:10.302281 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="467a6b01-4fa9-484a-b65e-a91d89e91eb3" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 22 09:22:10 crc kubenswrapper[4811]: I0122 09:22:10.703730 4811 generic.go:334] "Generic (PLEG): container finished" podID="467a6b01-4fa9-484a-b65e-a91d89e91eb3" containerID="f501cc7a9057dead55e46a3cd02152bd9681e329de95326d5c1dc370925e45cc" exitCode=0 Jan 22 09:22:10 crc kubenswrapper[4811]: I0122 09:22:10.703756 4811 generic.go:334] "Generic (PLEG): container finished" podID="467a6b01-4fa9-484a-b65e-a91d89e91eb3" containerID="18222df6c78a680ea3883cf28fcbab7b02b7dfc128f8b9c6b10b601843afc038" exitCode=2 Jan 22 09:22:10 crc kubenswrapper[4811]: I0122 09:22:10.703775 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"467a6b01-4fa9-484a-b65e-a91d89e91eb3","Type":"ContainerDied","Data":"f501cc7a9057dead55e46a3cd02152bd9681e329de95326d5c1dc370925e45cc"} Jan 22 09:22:10 crc kubenswrapper[4811]: I0122 09:22:10.703798 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"467a6b01-4fa9-484a-b65e-a91d89e91eb3","Type":"ContainerDied","Data":"18222df6c78a680ea3883cf28fcbab7b02b7dfc128f8b9c6b10b601843afc038"} Jan 22 09:22:11 crc kubenswrapper[4811]: I0122 09:22:11.719861 4811 generic.go:334] "Generic (PLEG): container finished" podID="467a6b01-4fa9-484a-b65e-a91d89e91eb3" containerID="7b25cb1bfa92c85d1064c702133e9b152515bc276c69919013284de19ca9202c" exitCode=0 Jan 22 09:22:11 crc kubenswrapper[4811]: I0122 09:22:11.720238 4811 generic.go:334] "Generic (PLEG): container finished" podID="467a6b01-4fa9-484a-b65e-a91d89e91eb3" containerID="3a24fd76a13bd447bf4170555fd4850c0729f4161be448b52a68a2f87fdfd94d" exitCode=0 Jan 22 09:22:11 crc kubenswrapper[4811]: 
I0122 09:22:11.719949 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"467a6b01-4fa9-484a-b65e-a91d89e91eb3","Type":"ContainerDied","Data":"7b25cb1bfa92c85d1064c702133e9b152515bc276c69919013284de19ca9202c"} Jan 22 09:22:11 crc kubenswrapper[4811]: I0122 09:22:11.720303 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"467a6b01-4fa9-484a-b65e-a91d89e91eb3","Type":"ContainerDied","Data":"3a24fd76a13bd447bf4170555fd4850c0729f4161be448b52a68a2f87fdfd94d"} Jan 22 09:22:16 crc kubenswrapper[4811]: I0122 09:22:16.671818 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 09:22:16 crc kubenswrapper[4811]: I0122 09:22:16.762048 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"616ddacf-6ee0-46d9-9e03-c234d53b5dd8","Type":"ContainerStarted","Data":"e31e728bb69e2cec8b1a0c66bd7f1c440c5437bd511bd1913e2ac67765780097"} Jan 22 09:22:16 crc kubenswrapper[4811]: I0122 09:22:16.764243 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"467a6b01-4fa9-484a-b65e-a91d89e91eb3","Type":"ContainerDied","Data":"a8ec8050cc73a7db1766e9d9f5ed078eac7283b6709276181350d8465c6224c7"} Jan 22 09:22:16 crc kubenswrapper[4811]: I0122 09:22:16.764287 4811 scope.go:117] "RemoveContainer" containerID="f501cc7a9057dead55e46a3cd02152bd9681e329de95326d5c1dc370925e45cc" Jan 22 09:22:16 crc kubenswrapper[4811]: I0122 09:22:16.764288 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 09:22:16 crc kubenswrapper[4811]: I0122 09:22:16.778653 4811 scope.go:117] "RemoveContainer" containerID="18222df6c78a680ea3883cf28fcbab7b02b7dfc128f8b9c6b10b601843afc038" Jan 22 09:22:16 crc kubenswrapper[4811]: I0122 09:22:16.788254 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.951994644 podStartE2EDuration="13.788240357s" podCreationTimestamp="2026-01-22 09:22:03 +0000 UTC" firstStartedPulling="2026-01-22 09:22:04.561680864 +0000 UTC m=+968.883867987" lastFinishedPulling="2026-01-22 09:22:16.397926577 +0000 UTC m=+980.720113700" observedRunningTime="2026-01-22 09:22:16.780959005 +0000 UTC m=+981.103146128" watchObservedRunningTime="2026-01-22 09:22:16.788240357 +0000 UTC m=+981.110427480" Jan 22 09:22:16 crc kubenswrapper[4811]: I0122 09:22:16.794461 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/467a6b01-4fa9-484a-b65e-a91d89e91eb3-scripts\") pod \"467a6b01-4fa9-484a-b65e-a91d89e91eb3\" (UID: \"467a6b01-4fa9-484a-b65e-a91d89e91eb3\") " Jan 22 09:22:16 crc kubenswrapper[4811]: I0122 09:22:16.794572 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/467a6b01-4fa9-484a-b65e-a91d89e91eb3-sg-core-conf-yaml\") pod \"467a6b01-4fa9-484a-b65e-a91d89e91eb3\" (UID: \"467a6b01-4fa9-484a-b65e-a91d89e91eb3\") " Jan 22 09:22:16 crc kubenswrapper[4811]: I0122 09:22:16.794697 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/467a6b01-4fa9-484a-b65e-a91d89e91eb3-log-httpd\") pod \"467a6b01-4fa9-484a-b65e-a91d89e91eb3\" (UID: \"467a6b01-4fa9-484a-b65e-a91d89e91eb3\") " Jan 22 09:22:16 crc kubenswrapper[4811]: I0122 
09:22:16.794733 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/467a6b01-4fa9-484a-b65e-a91d89e91eb3-config-data\") pod \"467a6b01-4fa9-484a-b65e-a91d89e91eb3\" (UID: \"467a6b01-4fa9-484a-b65e-a91d89e91eb3\") " Jan 22 09:22:16 crc kubenswrapper[4811]: I0122 09:22:16.794784 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qztxp\" (UniqueName: \"kubernetes.io/projected/467a6b01-4fa9-484a-b65e-a91d89e91eb3-kube-api-access-qztxp\") pod \"467a6b01-4fa9-484a-b65e-a91d89e91eb3\" (UID: \"467a6b01-4fa9-484a-b65e-a91d89e91eb3\") " Jan 22 09:22:16 crc kubenswrapper[4811]: I0122 09:22:16.794840 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/467a6b01-4fa9-484a-b65e-a91d89e91eb3-combined-ca-bundle\") pod \"467a6b01-4fa9-484a-b65e-a91d89e91eb3\" (UID: \"467a6b01-4fa9-484a-b65e-a91d89e91eb3\") " Jan 22 09:22:16 crc kubenswrapper[4811]: I0122 09:22:16.794878 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/467a6b01-4fa9-484a-b65e-a91d89e91eb3-run-httpd\") pod \"467a6b01-4fa9-484a-b65e-a91d89e91eb3\" (UID: \"467a6b01-4fa9-484a-b65e-a91d89e91eb3\") " Jan 22 09:22:16 crc kubenswrapper[4811]: I0122 09:22:16.795856 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/467a6b01-4fa9-484a-b65e-a91d89e91eb3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "467a6b01-4fa9-484a-b65e-a91d89e91eb3" (UID: "467a6b01-4fa9-484a-b65e-a91d89e91eb3"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:22:16 crc kubenswrapper[4811]: I0122 09:22:16.795926 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/467a6b01-4fa9-484a-b65e-a91d89e91eb3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "467a6b01-4fa9-484a-b65e-a91d89e91eb3" (UID: "467a6b01-4fa9-484a-b65e-a91d89e91eb3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:22:16 crc kubenswrapper[4811]: I0122 09:22:16.796086 4811 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/467a6b01-4fa9-484a-b65e-a91d89e91eb3-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:16 crc kubenswrapper[4811]: I0122 09:22:16.796105 4811 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/467a6b01-4fa9-484a-b65e-a91d89e91eb3-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:16 crc kubenswrapper[4811]: I0122 09:22:16.799725 4811 scope.go:117] "RemoveContainer" containerID="7b25cb1bfa92c85d1064c702133e9b152515bc276c69919013284de19ca9202c" Jan 22 09:22:16 crc kubenswrapper[4811]: I0122 09:22:16.801287 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/467a6b01-4fa9-484a-b65e-a91d89e91eb3-scripts" (OuterVolumeSpecName: "scripts") pod "467a6b01-4fa9-484a-b65e-a91d89e91eb3" (UID: "467a6b01-4fa9-484a-b65e-a91d89e91eb3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:22:16 crc kubenswrapper[4811]: I0122 09:22:16.802980 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/467a6b01-4fa9-484a-b65e-a91d89e91eb3-kube-api-access-qztxp" (OuterVolumeSpecName: "kube-api-access-qztxp") pod "467a6b01-4fa9-484a-b65e-a91d89e91eb3" (UID: "467a6b01-4fa9-484a-b65e-a91d89e91eb3"). InnerVolumeSpecName "kube-api-access-qztxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:22:16 crc kubenswrapper[4811]: I0122 09:22:16.826251 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/467a6b01-4fa9-484a-b65e-a91d89e91eb3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "467a6b01-4fa9-484a-b65e-a91d89e91eb3" (UID: "467a6b01-4fa9-484a-b65e-a91d89e91eb3"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:22:16 crc kubenswrapper[4811]: I0122 09:22:16.842202 4811 scope.go:117] "RemoveContainer" containerID="3a24fd76a13bd447bf4170555fd4850c0729f4161be448b52a68a2f87fdfd94d" Jan 22 09:22:16 crc kubenswrapper[4811]: I0122 09:22:16.872730 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/467a6b01-4fa9-484a-b65e-a91d89e91eb3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "467a6b01-4fa9-484a-b65e-a91d89e91eb3" (UID: "467a6b01-4fa9-484a-b65e-a91d89e91eb3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:22:16 crc kubenswrapper[4811]: I0122 09:22:16.884424 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/467a6b01-4fa9-484a-b65e-a91d89e91eb3-config-data" (OuterVolumeSpecName: "config-data") pod "467a6b01-4fa9-484a-b65e-a91d89e91eb3" (UID: "467a6b01-4fa9-484a-b65e-a91d89e91eb3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:22:16 crc kubenswrapper[4811]: I0122 09:22:16.897960 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qztxp\" (UniqueName: \"kubernetes.io/projected/467a6b01-4fa9-484a-b65e-a91d89e91eb3-kube-api-access-qztxp\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:16 crc kubenswrapper[4811]: I0122 09:22:16.898721 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/467a6b01-4fa9-484a-b65e-a91d89e91eb3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:16 crc kubenswrapper[4811]: I0122 09:22:16.898748 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/467a6b01-4fa9-484a-b65e-a91d89e91eb3-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:16 crc kubenswrapper[4811]: I0122 09:22:16.898758 4811 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/467a6b01-4fa9-484a-b65e-a91d89e91eb3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:16 crc kubenswrapper[4811]: I0122 09:22:16.898769 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/467a6b01-4fa9-484a-b65e-a91d89e91eb3-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.093297 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.098341 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.117036 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:22:17 crc kubenswrapper[4811]: E0122 09:22:17.117393 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="467a6b01-4fa9-484a-b65e-a91d89e91eb3" containerName="sg-core" Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.117476 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="467a6b01-4fa9-484a-b65e-a91d89e91eb3" containerName="sg-core" Jan 22 09:22:17 crc kubenswrapper[4811]: E0122 09:22:17.117529 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="889f85a4-9070-4d9c-82b9-4171d53b035c" containerName="extract-utilities" Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.117579 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="889f85a4-9070-4d9c-82b9-4171d53b035c" containerName="extract-utilities" Jan 22 09:22:17 crc kubenswrapper[4811]: E0122 09:22:17.117673 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="889f85a4-9070-4d9c-82b9-4171d53b035c" containerName="registry-server" Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.117733 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="889f85a4-9070-4d9c-82b9-4171d53b035c" containerName="registry-server" Jan 22 09:22:17 crc kubenswrapper[4811]: E0122 09:22:17.117805 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="467a6b01-4fa9-484a-b65e-a91d89e91eb3" containerName="ceilometer-central-agent" Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.117858 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="467a6b01-4fa9-484a-b65e-a91d89e91eb3" containerName="ceilometer-central-agent" Jan 22 09:22:17 crc kubenswrapper[4811]: E0122 09:22:17.117912 4811 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="467a6b01-4fa9-484a-b65e-a91d89e91eb3" containerName="proxy-httpd" Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.117952 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="467a6b01-4fa9-484a-b65e-a91d89e91eb3" containerName="proxy-httpd" Jan 22 09:22:17 crc kubenswrapper[4811]: E0122 09:22:17.118000 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="467a6b01-4fa9-484a-b65e-a91d89e91eb3" containerName="ceilometer-notification-agent" Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.118051 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="467a6b01-4fa9-484a-b65e-a91d89e91eb3" containerName="ceilometer-notification-agent" Jan 22 09:22:17 crc kubenswrapper[4811]: E0122 09:22:17.118098 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="889f85a4-9070-4d9c-82b9-4171d53b035c" containerName="extract-content" Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.118141 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="889f85a4-9070-4d9c-82b9-4171d53b035c" containerName="extract-content" Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.118381 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="467a6b01-4fa9-484a-b65e-a91d89e91eb3" containerName="sg-core" Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.118435 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="889f85a4-9070-4d9c-82b9-4171d53b035c" containerName="registry-server" Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.118489 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="467a6b01-4fa9-484a-b65e-a91d89e91eb3" containerName="ceilometer-central-agent" Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.118532 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="467a6b01-4fa9-484a-b65e-a91d89e91eb3" containerName="ceilometer-notification-agent" Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.118582 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="467a6b01-4fa9-484a-b65e-a91d89e91eb3" containerName="proxy-httpd" Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.119895 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.123634 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.130311 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.135677 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.203690 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4b66e2d7-4f56-4f5d-99f7-e89339015a9a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4b66e2d7-4f56-4f5d-99f7-e89339015a9a\") " pod="openstack/ceilometer-0" Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.203817 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b66e2d7-4f56-4f5d-99f7-e89339015a9a-run-httpd\") pod \"ceilometer-0\" (UID: \"4b66e2d7-4f56-4f5d-99f7-e89339015a9a\") " pod="openstack/ceilometer-0" Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.203846 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b66e2d7-4f56-4f5d-99f7-e89339015a9a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4b66e2d7-4f56-4f5d-99f7-e89339015a9a\") " pod="openstack/ceilometer-0" Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.203890 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b66e2d7-4f56-4f5d-99f7-e89339015a9a-scripts\") pod \"ceilometer-0\" (UID: \"4b66e2d7-4f56-4f5d-99f7-e89339015a9a\") " pod="openstack/ceilometer-0" Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.203954 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b66e2d7-4f56-4f5d-99f7-e89339015a9a-log-httpd\") pod \"ceilometer-0\" (UID: \"4b66e2d7-4f56-4f5d-99f7-e89339015a9a\") " pod="openstack/ceilometer-0" Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.204212 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7crds\" (UniqueName: \"kubernetes.io/projected/4b66e2d7-4f56-4f5d-99f7-e89339015a9a-kube-api-access-7crds\") pod \"ceilometer-0\" (UID: \"4b66e2d7-4f56-4f5d-99f7-e89339015a9a\") " pod="openstack/ceilometer-0" Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.204327 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b66e2d7-4f56-4f5d-99f7-e89339015a9a-config-data\") pod \"ceilometer-0\" (UID: \"4b66e2d7-4f56-4f5d-99f7-e89339015a9a\") " pod="openstack/ceilometer-0" Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.306664 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b66e2d7-4f56-4f5d-99f7-e89339015a9a-run-httpd\") pod \"ceilometer-0\" (UID: \"4b66e2d7-4f56-4f5d-99f7-e89339015a9a\") " pod="openstack/ceilometer-0" Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.307244 4811 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b66e2d7-4f56-4f5d-99f7-e89339015a9a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4b66e2d7-4f56-4f5d-99f7-e89339015a9a\") " pod="openstack/ceilometer-0" Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.307754 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b66e2d7-4f56-4f5d-99f7-e89339015a9a-scripts\") pod \"ceilometer-0\" (UID: \"4b66e2d7-4f56-4f5d-99f7-e89339015a9a\") " pod="openstack/ceilometer-0" Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.307895 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b66e2d7-4f56-4f5d-99f7-e89339015a9a-log-httpd\") pod \"ceilometer-0\" (UID: \"4b66e2d7-4f56-4f5d-99f7-e89339015a9a\") " pod="openstack/ceilometer-0" Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.307204 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b66e2d7-4f56-4f5d-99f7-e89339015a9a-run-httpd\") pod \"ceilometer-0\" (UID: \"4b66e2d7-4f56-4f5d-99f7-e89339015a9a\") " pod="openstack/ceilometer-0" Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.308203 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7crds\" (UniqueName: \"kubernetes.io/projected/4b66e2d7-4f56-4f5d-99f7-e89339015a9a-kube-api-access-7crds\") pod \"ceilometer-0\" (UID: \"4b66e2d7-4f56-4f5d-99f7-e89339015a9a\") " pod="openstack/ceilometer-0" Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.308306 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b66e2d7-4f56-4f5d-99f7-e89339015a9a-config-data\") pod \"ceilometer-0\" (UID: \"4b66e2d7-4f56-4f5d-99f7-e89339015a9a\") " pod="openstack/ceilometer-0" Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.308477 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b66e2d7-4f56-4f5d-99f7-e89339015a9a-log-httpd\") pod \"ceilometer-0\" (UID: \"4b66e2d7-4f56-4f5d-99f7-e89339015a9a\") " pod="openstack/ceilometer-0" Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.308482 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4b66e2d7-4f56-4f5d-99f7-e89339015a9a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4b66e2d7-4f56-4f5d-99f7-e89339015a9a\") " pod="openstack/ceilometer-0" Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.311433 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b66e2d7-4f56-4f5d-99f7-e89339015a9a-scripts\") pod \"ceilometer-0\" (UID: \"4b66e2d7-4f56-4f5d-99f7-e89339015a9a\") " pod="openstack/ceilometer-0" Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.312267 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b66e2d7-4f56-4f5d-99f7-e89339015a9a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4b66e2d7-4f56-4f5d-99f7-e89339015a9a\") " pod="openstack/ceilometer-0" Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.312771 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4b66e2d7-4f56-4f5d-99f7-e89339015a9a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4b66e2d7-4f56-4f5d-99f7-e89339015a9a\") " pod="openstack/ceilometer-0" Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.321694 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b66e2d7-4f56-4f5d-99f7-e89339015a9a-config-data\") pod \"ceilometer-0\" (UID: \"4b66e2d7-4f56-4f5d-99f7-e89339015a9a\") " pod="openstack/ceilometer-0" Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.324062 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:22:17 crc kubenswrapper[4811]: E0122 09:22:17.325176 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-7crds], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="4b66e2d7-4f56-4f5d-99f7-e89339015a9a" Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.330873 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7crds\" (UniqueName: \"kubernetes.io/projected/4b66e2d7-4f56-4f5d-99f7-e89339015a9a-kube-api-access-7crds\") pod \"ceilometer-0\" (UID: \"4b66e2d7-4f56-4f5d-99f7-e89339015a9a\") " pod="openstack/ceilometer-0" Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.775721 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.785677 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.919808 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b66e2d7-4f56-4f5d-99f7-e89339015a9a-combined-ca-bundle\") pod \"4b66e2d7-4f56-4f5d-99f7-e89339015a9a\" (UID: \"4b66e2d7-4f56-4f5d-99f7-e89339015a9a\") " Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.920050 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b66e2d7-4f56-4f5d-99f7-e89339015a9a-config-data\") pod \"4b66e2d7-4f56-4f5d-99f7-e89339015a9a\" (UID: \"4b66e2d7-4f56-4f5d-99f7-e89339015a9a\") " Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.920137 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b66e2d7-4f56-4f5d-99f7-e89339015a9a-scripts\") pod \"4b66e2d7-4f56-4f5d-99f7-e89339015a9a\" (UID: \"4b66e2d7-4f56-4f5d-99f7-e89339015a9a\") " Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.920274 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7crds\" (UniqueName: \"kubernetes.io/projected/4b66e2d7-4f56-4f5d-99f7-e89339015a9a-kube-api-access-7crds\") pod \"4b66e2d7-4f56-4f5d-99f7-e89339015a9a\" (UID: \"4b66e2d7-4f56-4f5d-99f7-e89339015a9a\") " Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.920447 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4b66e2d7-4f56-4f5d-99f7-e89339015a9a-sg-core-conf-yaml\") pod \"4b66e2d7-4f56-4f5d-99f7-e89339015a9a\" (UID: \"4b66e2d7-4f56-4f5d-99f7-e89339015a9a\") " Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.920539 
4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b66e2d7-4f56-4f5d-99f7-e89339015a9a-log-httpd\") pod \"4b66e2d7-4f56-4f5d-99f7-e89339015a9a\" (UID: \"4b66e2d7-4f56-4f5d-99f7-e89339015a9a\") " Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.920621 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b66e2d7-4f56-4f5d-99f7-e89339015a9a-run-httpd\") pod \"4b66e2d7-4f56-4f5d-99f7-e89339015a9a\" (UID: \"4b66e2d7-4f56-4f5d-99f7-e89339015a9a\") " Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.921331 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b66e2d7-4f56-4f5d-99f7-e89339015a9a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4b66e2d7-4f56-4f5d-99f7-e89339015a9a" (UID: "4b66e2d7-4f56-4f5d-99f7-e89339015a9a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.922295 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b66e2d7-4f56-4f5d-99f7-e89339015a9a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4b66e2d7-4f56-4f5d-99f7-e89339015a9a" (UID: "4b66e2d7-4f56-4f5d-99f7-e89339015a9a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.926415 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b66e2d7-4f56-4f5d-99f7-e89339015a9a-scripts" (OuterVolumeSpecName: "scripts") pod "4b66e2d7-4f56-4f5d-99f7-e89339015a9a" (UID: "4b66e2d7-4f56-4f5d-99f7-e89339015a9a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.927277 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b66e2d7-4f56-4f5d-99f7-e89339015a9a-config-data" (OuterVolumeSpecName: "config-data") pod "4b66e2d7-4f56-4f5d-99f7-e89339015a9a" (UID: "4b66e2d7-4f56-4f5d-99f7-e89339015a9a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.927747 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b66e2d7-4f56-4f5d-99f7-e89339015a9a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b66e2d7-4f56-4f5d-99f7-e89339015a9a" (UID: "4b66e2d7-4f56-4f5d-99f7-e89339015a9a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.929261 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b66e2d7-4f56-4f5d-99f7-e89339015a9a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4b66e2d7-4f56-4f5d-99f7-e89339015a9a" (UID: "4b66e2d7-4f56-4f5d-99f7-e89339015a9a"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:22:17 crc kubenswrapper[4811]: I0122 09:22:17.941656 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b66e2d7-4f56-4f5d-99f7-e89339015a9a-kube-api-access-7crds" (OuterVolumeSpecName: "kube-api-access-7crds") pod "4b66e2d7-4f56-4f5d-99f7-e89339015a9a" (UID: "4b66e2d7-4f56-4f5d-99f7-e89339015a9a"). InnerVolumeSpecName "kube-api-access-7crds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:22:18 crc kubenswrapper[4811]: I0122 09:22:18.004934 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="467a6b01-4fa9-484a-b65e-a91d89e91eb3" path="/var/lib/kubelet/pods/467a6b01-4fa9-484a-b65e-a91d89e91eb3/volumes" Jan 22 09:22:18 crc kubenswrapper[4811]: I0122 09:22:18.022084 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b66e2d7-4f56-4f5d-99f7-e89339015a9a-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:18 crc kubenswrapper[4811]: I0122 09:22:18.022163 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b66e2d7-4f56-4f5d-99f7-e89339015a9a-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:18 crc kubenswrapper[4811]: I0122 09:22:18.022212 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7crds\" (UniqueName: \"kubernetes.io/projected/4b66e2d7-4f56-4f5d-99f7-e89339015a9a-kube-api-access-7crds\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:18 crc kubenswrapper[4811]: I0122 09:22:18.022260 4811 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4b66e2d7-4f56-4f5d-99f7-e89339015a9a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:18 crc kubenswrapper[4811]: I0122 09:22:18.022306 4811 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b66e2d7-4f56-4f5d-99f7-e89339015a9a-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:18 crc kubenswrapper[4811]: I0122 09:22:18.022358 4811 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b66e2d7-4f56-4f5d-99f7-e89339015a9a-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:18 crc kubenswrapper[4811]: I0122 09:22:18.022411 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b66e2d7-4f56-4f5d-99f7-e89339015a9a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:18 crc kubenswrapper[4811]: I0122 09:22:18.781472 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 09:22:18 crc kubenswrapper[4811]: I0122 09:22:18.826376 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:22:18 crc kubenswrapper[4811]: I0122 09:22:18.878404 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:22:18 crc kubenswrapper[4811]: I0122 09:22:18.890457 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:22:18 crc kubenswrapper[4811]: I0122 09:22:18.895738 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 09:22:18 crc kubenswrapper[4811]: I0122 09:22:18.898192 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 09:22:18 crc kubenswrapper[4811]: I0122 09:22:18.898639 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 09:22:18 crc kubenswrapper[4811]: I0122 09:22:18.909281 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:22:18 crc kubenswrapper[4811]: I0122 09:22:18.952412 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31ef50f1-c093-47e5-80ee-410c0a972865-run-httpd\") pod \"ceilometer-0\" (UID: \"31ef50f1-c093-47e5-80ee-410c0a972865\") " pod="openstack/ceilometer-0" Jan 22 09:22:18 crc kubenswrapper[4811]: I0122 09:22:18.952472 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqcrs\" (UniqueName: \"kubernetes.io/projected/31ef50f1-c093-47e5-80ee-410c0a972865-kube-api-access-gqcrs\") pod \"ceilometer-0\" (UID: \"31ef50f1-c093-47e5-80ee-410c0a972865\") " pod="openstack/ceilometer-0" Jan 22 09:22:18 crc kubenswrapper[4811]: I0122 09:22:18.952768 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31ef50f1-c093-47e5-80ee-410c0a972865-config-data\") pod \"ceilometer-0\" (UID: \"31ef50f1-c093-47e5-80ee-410c0a972865\") " pod="openstack/ceilometer-0" Jan 22 09:22:18 crc kubenswrapper[4811]: I0122 09:22:18.952940 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31ef50f1-c093-47e5-80ee-410c0a972865-scripts\") pod \"ceilometer-0\" (UID: \"31ef50f1-c093-47e5-80ee-410c0a972865\") " pod="openstack/ceilometer-0" Jan 22 09:22:18 crc kubenswrapper[4811]: I0122 09:22:18.953001 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31ef50f1-c093-47e5-80ee-410c0a972865-log-httpd\") pod \"ceilometer-0\" (UID: \"31ef50f1-c093-47e5-80ee-410c0a972865\") " pod="openstack/ceilometer-0" Jan 22 09:22:18 crc kubenswrapper[4811]: I0122 09:22:18.953050 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/31ef50f1-c093-47e5-80ee-410c0a972865-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"31ef50f1-c093-47e5-80ee-410c0a972865\") " pod="openstack/ceilometer-0" Jan 22 09:22:18 crc kubenswrapper[4811]: I0122 09:22:18.953181 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31ef50f1-c093-47e5-80ee-410c0a972865-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"31ef50f1-c093-47e5-80ee-410c0a972865\") " pod="openstack/ceilometer-0" Jan 22 09:22:19 crc kubenswrapper[4811]: I0122 09:22:19.054799 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31ef50f1-c093-47e5-80ee-410c0a972865-scripts\") pod \"ceilometer-0\" (UID: \"31ef50f1-c093-47e5-80ee-410c0a972865\") " pod="openstack/ceilometer-0" Jan 22 09:22:19 crc kubenswrapper[4811]: I0122 09:22:19.054879 4811 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31ef50f1-c093-47e5-80ee-410c0a972865-log-httpd\") pod \"ceilometer-0\" (UID: \"31ef50f1-c093-47e5-80ee-410c0a972865\") " pod="openstack/ceilometer-0" Jan 22 09:22:19 crc kubenswrapper[4811]: I0122 09:22:19.054910 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/31ef50f1-c093-47e5-80ee-410c0a972865-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"31ef50f1-c093-47e5-80ee-410c0a972865\") " pod="openstack/ceilometer-0" Jan 22 09:22:19 crc kubenswrapper[4811]: I0122 09:22:19.054982 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31ef50f1-c093-47e5-80ee-410c0a972865-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"31ef50f1-c093-47e5-80ee-410c0a972865\") " pod="openstack/ceilometer-0" Jan 22 09:22:19 crc kubenswrapper[4811]: I0122 09:22:19.055032 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31ef50f1-c093-47e5-80ee-410c0a972865-run-httpd\") pod \"ceilometer-0\" (UID: \"31ef50f1-c093-47e5-80ee-410c0a972865\") " pod="openstack/ceilometer-0" Jan 22 09:22:19 crc kubenswrapper[4811]: I0122 09:22:19.055097 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqcrs\" (UniqueName: \"kubernetes.io/projected/31ef50f1-c093-47e5-80ee-410c0a972865-kube-api-access-gqcrs\") pod \"ceilometer-0\" (UID: \"31ef50f1-c093-47e5-80ee-410c0a972865\") " pod="openstack/ceilometer-0" Jan 22 09:22:19 crc kubenswrapper[4811]: I0122 09:22:19.055163 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31ef50f1-c093-47e5-80ee-410c0a972865-config-data\") pod \"ceilometer-0\" (UID: \"31ef50f1-c093-47e5-80ee-410c0a972865\") " pod="openstack/ceilometer-0" Jan 22 09:22:19 crc kubenswrapper[4811]: I0122 09:22:19.055864 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31ef50f1-c093-47e5-80ee-410c0a972865-log-httpd\") pod \"ceilometer-0\" (UID: \"31ef50f1-c093-47e5-80ee-410c0a972865\") " pod="openstack/ceilometer-0" Jan 22 09:22:19 crc kubenswrapper[4811]: I0122 09:22:19.055909 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31ef50f1-c093-47e5-80ee-410c0a972865-run-httpd\") pod \"ceilometer-0\" (UID: \"31ef50f1-c093-47e5-80ee-410c0a972865\") " pod="openstack/ceilometer-0" Jan 22 09:22:19 crc kubenswrapper[4811]: I0122 09:22:19.061484 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31ef50f1-c093-47e5-80ee-410c0a972865-scripts\") pod \"ceilometer-0\" (UID: \"31ef50f1-c093-47e5-80ee-410c0a972865\") " pod="openstack/ceilometer-0" Jan 22 09:22:19 crc kubenswrapper[4811]: I0122 09:22:19.062408 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31ef50f1-c093-47e5-80ee-410c0a972865-config-data\") pod \"ceilometer-0\" (UID: \"31ef50f1-c093-47e5-80ee-410c0a972865\") " pod="openstack/ceilometer-0" Jan 22 09:22:19 crc kubenswrapper[4811]: I0122 09:22:19.062618 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/31ef50f1-c093-47e5-80ee-410c0a972865-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"31ef50f1-c093-47e5-80ee-410c0a972865\") " pod="openstack/ceilometer-0" Jan 22 09:22:19 crc kubenswrapper[4811]: I0122 09:22:19.065937 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31ef50f1-c093-47e5-80ee-410c0a972865-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"31ef50f1-c093-47e5-80ee-410c0a972865\") " pod="openstack/ceilometer-0" Jan 22 09:22:19 crc kubenswrapper[4811]: I0122 09:22:19.087847 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqcrs\" (UniqueName: \"kubernetes.io/projected/31ef50f1-c093-47e5-80ee-410c0a972865-kube-api-access-gqcrs\") pod \"ceilometer-0\" (UID: \"31ef50f1-c093-47e5-80ee-410c0a972865\") " pod="openstack/ceilometer-0" Jan 22 09:22:19 crc kubenswrapper[4811]: I0122 09:22:19.219476 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 09:22:19 crc kubenswrapper[4811]: I0122 09:22:19.655736 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:22:19 crc kubenswrapper[4811]: I0122 09:22:19.789529 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31ef50f1-c093-47e5-80ee-410c0a972865","Type":"ContainerStarted","Data":"d637d4e4624e333bfa2e81bca7c609483e7dfda4268b16b8d2f57e62b489d858"} Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.000678 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b66e2d7-4f56-4f5d-99f7-e89339015a9a" path="/var/lib/kubelet/pods/4b66e2d7-4f56-4f5d-99f7-e89339015a9a/volumes" Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.101836 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.193354 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-bzk5b"] Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.194247 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-bzk5b" Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.206656 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-bzk5b"] Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.283327 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6szs7\" (UniqueName: \"kubernetes.io/projected/64241d93-e4db-4880-a25b-2c68cacb0f5c-kube-api-access-6szs7\") pod \"nova-api-db-create-bzk5b\" (UID: \"64241d93-e4db-4880-a25b-2c68cacb0f5c\") " pod="openstack/nova-api-db-create-bzk5b" Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.283674 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64241d93-e4db-4880-a25b-2c68cacb0f5c-operator-scripts\") pod \"nova-api-db-create-bzk5b\" (UID: \"64241d93-e4db-4880-a25b-2c68cacb0f5c\") " pod="openstack/nova-api-db-create-bzk5b" Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.305232 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-65d2-account-create-update-sq4sn"] Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.307170 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-65d2-account-create-update-sq4sn" Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.318750 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.321796 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-65d2-account-create-update-sq4sn"] Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.389171 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6szs7\" (UniqueName: \"kubernetes.io/projected/64241d93-e4db-4880-a25b-2c68cacb0f5c-kube-api-access-6szs7\") pod \"nova-api-db-create-bzk5b\" (UID: \"64241d93-e4db-4880-a25b-2c68cacb0f5c\") " pod="openstack/nova-api-db-create-bzk5b" Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.389552 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff4e524c-e9f6-483d-b8ae-d8abf0a7bf21-operator-scripts\") pod \"nova-api-65d2-account-create-update-sq4sn\" (UID: \"ff4e524c-e9f6-483d-b8ae-d8abf0a7bf21\") " pod="openstack/nova-api-65d2-account-create-update-sq4sn" Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.389693 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-748f8\" (UniqueName: \"kubernetes.io/projected/ff4e524c-e9f6-483d-b8ae-d8abf0a7bf21-kube-api-access-748f8\") pod \"nova-api-65d2-account-create-update-sq4sn\" (UID: \"ff4e524c-e9f6-483d-b8ae-d8abf0a7bf21\") " pod="openstack/nova-api-65d2-account-create-update-sq4sn" Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.389783 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64241d93-e4db-4880-a25b-2c68cacb0f5c-operator-scripts\") pod \"nova-api-db-create-bzk5b\" (UID: \"64241d93-e4db-4880-a25b-2c68cacb0f5c\") " pod="openstack/nova-api-db-create-bzk5b" Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.390656 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64241d93-e4db-4880-a25b-2c68cacb0f5c-operator-scripts\") pod \"nova-api-db-create-bzk5b\" (UID: \"64241d93-e4db-4880-a25b-2c68cacb0f5c\") " pod="openstack/nova-api-db-create-bzk5b" Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.406381 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-2sgkl"] Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.408434 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-2sgkl" Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.412751 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6szs7\" (UniqueName: \"kubernetes.io/projected/64241d93-e4db-4880-a25b-2c68cacb0f5c-kube-api-access-6szs7\") pod \"nova-api-db-create-bzk5b\" (UID: \"64241d93-e4db-4880-a25b-2c68cacb0f5c\") " pod="openstack/nova-api-db-create-bzk5b" Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.453541 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-2sgkl"] Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.496488 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93312640-d70b-4075-af45-5bf9e6625c73-operator-scripts\") pod \"nova-cell0-db-create-2sgkl\" (UID: \"93312640-d70b-4075-af45-5bf9e6625c73\") " pod="openstack/nova-cell0-db-create-2sgkl" Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.496556 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-748f8\" (UniqueName: \"kubernetes.io/projected/ff4e524c-e9f6-483d-b8ae-d8abf0a7bf21-kube-api-access-748f8\") pod \"nova-api-65d2-account-create-update-sq4sn\" (UID: \"ff4e524c-e9f6-483d-b8ae-d8abf0a7bf21\") " pod="openstack/nova-api-65d2-account-create-update-sq4sn" Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.496777 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8987\" (UniqueName: \"kubernetes.io/projected/93312640-d70b-4075-af45-5bf9e6625c73-kube-api-access-m8987\") pod \"nova-cell0-db-create-2sgkl\" (UID: \"93312640-d70b-4075-af45-5bf9e6625c73\") " pod="openstack/nova-cell0-db-create-2sgkl" Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.496807 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff4e524c-e9f6-483d-b8ae-d8abf0a7bf21-operator-scripts\") pod \"nova-api-65d2-account-create-update-sq4sn\" (UID: \"ff4e524c-e9f6-483d-b8ae-d8abf0a7bf21\") " pod="openstack/nova-api-65d2-account-create-update-sq4sn" Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.497446 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff4e524c-e9f6-483d-b8ae-d8abf0a7bf21-operator-scripts\") pod \"nova-api-65d2-account-create-update-sq4sn\" (UID: \"ff4e524c-e9f6-483d-b8ae-d8abf0a7bf21\") " pod="openstack/nova-api-65d2-account-create-update-sq4sn" Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.508781 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-bzk5b" Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.508932 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-zxgc6"] Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.510520 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-zxgc6" Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.518175 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-7b3d-account-create-update-jq477"] Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.519548 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-7b3d-account-create-update-jq477" Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.522559 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.529039 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-zxgc6"] Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.531930 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-748f8\" (UniqueName: \"kubernetes.io/projected/ff4e524c-e9f6-483d-b8ae-d8abf0a7bf21-kube-api-access-748f8\") pod \"nova-api-65d2-account-create-update-sq4sn\" (UID: \"ff4e524c-e9f6-483d-b8ae-d8abf0a7bf21\") " pod="openstack/nova-api-65d2-account-create-update-sq4sn" Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.542658 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7b3d-account-create-update-jq477"] Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.599869 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f33f4ea9-6343-42e6-8666-6e34e9926dd9-operator-scripts\") pod \"nova-cell1-db-create-zxgc6\" (UID: \"f33f4ea9-6343-42e6-8666-6e34e9926dd9\") " pod="openstack/nova-cell1-db-create-zxgc6" Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.599989 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vslp9\" (UniqueName: \"kubernetes.io/projected/433d526d-dde0-4815-9863-d934d1a30739-kube-api-access-vslp9\") pod \"nova-cell0-7b3d-account-create-update-jq477\" (UID: \"433d526d-dde0-4815-9863-d934d1a30739\") " pod="openstack/nova-cell0-7b3d-account-create-update-jq477" Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.600158 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8987\" (UniqueName: \"kubernetes.io/projected/93312640-d70b-4075-af45-5bf9e6625c73-kube-api-access-m8987\") pod \"nova-cell0-db-create-2sgkl\" (UID: \"93312640-d70b-4075-af45-5bf9e6625c73\") " pod="openstack/nova-cell0-db-create-2sgkl" Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.600192 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9ldw\" (UniqueName: \"kubernetes.io/projected/f33f4ea9-6343-42e6-8666-6e34e9926dd9-kube-api-access-c9ldw\") pod \"nova-cell1-db-create-zxgc6\" (UID: \"f33f4ea9-6343-42e6-8666-6e34e9926dd9\") " pod="openstack/nova-cell1-db-create-zxgc6" Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.600332 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93312640-d70b-4075-af45-5bf9e6625c73-operator-scripts\") pod \"nova-cell0-db-create-2sgkl\" (UID: \"93312640-d70b-4075-af45-5bf9e6625c73\") " pod="openstack/nova-cell0-db-create-2sgkl" Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.600455 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/433d526d-dde0-4815-9863-d934d1a30739-operator-scripts\") pod \"nova-cell0-7b3d-account-create-update-jq477\" (UID: \"433d526d-dde0-4815-9863-d934d1a30739\") " pod="openstack/nova-cell0-7b3d-account-create-update-jq477" Jan 22 09:22:20 crc 
kubenswrapper[4811]: I0122 09:22:20.601552 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93312640-d70b-4075-af45-5bf9e6625c73-operator-scripts\") pod \"nova-cell0-db-create-2sgkl\" (UID: \"93312640-d70b-4075-af45-5bf9e6625c73\") " pod="openstack/nova-cell0-db-create-2sgkl" Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.623081 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8987\" (UniqueName: \"kubernetes.io/projected/93312640-d70b-4075-af45-5bf9e6625c73-kube-api-access-m8987\") pod \"nova-cell0-db-create-2sgkl\" (UID: \"93312640-d70b-4075-af45-5bf9e6625c73\") " pod="openstack/nova-cell0-db-create-2sgkl" Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.625225 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-65d2-account-create-update-sq4sn" Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.701874 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9ldw\" (UniqueName: \"kubernetes.io/projected/f33f4ea9-6343-42e6-8666-6e34e9926dd9-kube-api-access-c9ldw\") pod \"nova-cell1-db-create-zxgc6\" (UID: \"f33f4ea9-6343-42e6-8666-6e34e9926dd9\") " pod="openstack/nova-cell1-db-create-zxgc6" Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.702063 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/433d526d-dde0-4815-9863-d934d1a30739-operator-scripts\") pod \"nova-cell0-7b3d-account-create-update-jq477\" (UID: \"433d526d-dde0-4815-9863-d934d1a30739\") " pod="openstack/nova-cell0-7b3d-account-create-update-jq477" Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.702159 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f33f4ea9-6343-42e6-8666-6e34e9926dd9-operator-scripts\") pod \"nova-cell1-db-create-zxgc6\" (UID: \"f33f4ea9-6343-42e6-8666-6e34e9926dd9\") " pod="openstack/nova-cell1-db-create-zxgc6" Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.702212 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vslp9\" (UniqueName: \"kubernetes.io/projected/433d526d-dde0-4815-9863-d934d1a30739-kube-api-access-vslp9\") pod \"nova-cell0-7b3d-account-create-update-jq477\" (UID: \"433d526d-dde0-4815-9863-d934d1a30739\") " pod="openstack/nova-cell0-7b3d-account-create-update-jq477" Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.702823 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/433d526d-dde0-4815-9863-d934d1a30739-operator-scripts\") pod \"nova-cell0-7b3d-account-create-update-jq477\" (UID: \"433d526d-dde0-4815-9863-d934d1a30739\") " pod="openstack/nova-cell0-7b3d-account-create-update-jq477" Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.702965 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f33f4ea9-6343-42e6-8666-6e34e9926dd9-operator-scripts\") pod \"nova-cell1-db-create-zxgc6\" (UID: \"f33f4ea9-6343-42e6-8666-6e34e9926dd9\") " pod="openstack/nova-cell1-db-create-zxgc6" Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.723785 4811 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell1-6d99-account-create-update-rwg8r"] Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.725138 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9ldw\" (UniqueName: \"kubernetes.io/projected/f33f4ea9-6343-42e6-8666-6e34e9926dd9-kube-api-access-c9ldw\") pod \"nova-cell1-db-create-zxgc6\" (UID: \"f33f4ea9-6343-42e6-8666-6e34e9926dd9\") " pod="openstack/nova-cell1-db-create-zxgc6" Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.726037 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vslp9\" (UniqueName: \"kubernetes.io/projected/433d526d-dde0-4815-9863-d934d1a30739-kube-api-access-vslp9\") pod \"nova-cell0-7b3d-account-create-update-jq477\" (UID: \"433d526d-dde0-4815-9863-d934d1a30739\") " pod="openstack/nova-cell0-7b3d-account-create-update-jq477" Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.731046 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-6d99-account-create-update-rwg8r" Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.734796 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.735493 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-6d99-account-create-update-rwg8r"] Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.811131 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f255ce7a-ccc6-41ac-901d-92554247b909-operator-scripts\") pod \"nova-cell1-6d99-account-create-update-rwg8r\" (UID: \"f255ce7a-ccc6-41ac-901d-92554247b909\") " pod="openstack/nova-cell1-6d99-account-create-update-rwg8r" Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.811674 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh8w9\" (UniqueName: \"kubernetes.io/projected/f255ce7a-ccc6-41ac-901d-92554247b909-kube-api-access-hh8w9\") pod \"nova-cell1-6d99-account-create-update-rwg8r\" (UID: \"f255ce7a-ccc6-41ac-901d-92554247b909\") " pod="openstack/nova-cell1-6d99-account-create-update-rwg8r" Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.827714 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2sgkl" Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.836881 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31ef50f1-c093-47e5-80ee-410c0a972865","Type":"ContainerStarted","Data":"00cf6f3bf2bc84798d764115be93a799a88a57a56dac19bdc01a0a941399fe43"} Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.907291 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-7b3d-account-create-update-jq477" Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.913639 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f255ce7a-ccc6-41ac-901d-92554247b909-operator-scripts\") pod \"nova-cell1-6d99-account-create-update-rwg8r\" (UID: \"f255ce7a-ccc6-41ac-901d-92554247b909\") " pod="openstack/nova-cell1-6d99-account-create-update-rwg8r" Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.913802 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh8w9\" (UniqueName: \"kubernetes.io/projected/f255ce7a-ccc6-41ac-901d-92554247b909-kube-api-access-hh8w9\") pod \"nova-cell1-6d99-account-create-update-rwg8r\" (UID: \"f255ce7a-ccc6-41ac-901d-92554247b909\") " pod="openstack/nova-cell1-6d99-account-create-update-rwg8r" Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.914710 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f255ce7a-ccc6-41ac-901d-92554247b909-operator-scripts\") pod \"nova-cell1-6d99-account-create-update-rwg8r\" (UID: \"f255ce7a-ccc6-41ac-901d-92554247b909\") " pod="openstack/nova-cell1-6d99-account-create-update-rwg8r" Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.927823 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-zxgc6" Jan 22 09:22:20 crc kubenswrapper[4811]: I0122 09:22:20.954173 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh8w9\" (UniqueName: \"kubernetes.io/projected/f255ce7a-ccc6-41ac-901d-92554247b909-kube-api-access-hh8w9\") pod \"nova-cell1-6d99-account-create-update-rwg8r\" (UID: \"f255ce7a-ccc6-41ac-901d-92554247b909\") " pod="openstack/nova-cell1-6d99-account-create-update-rwg8r" Jan 22 09:22:21 crc kubenswrapper[4811]: I0122 09:22:21.003684 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-bzk5b"] Jan 22 09:22:21 crc kubenswrapper[4811]: I0122 09:22:21.029884 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-65d2-account-create-update-sq4sn"] Jan 22 09:22:21 crc kubenswrapper[4811]: W0122 09:22:21.036935 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64241d93_e4db_4880_a25b_2c68cacb0f5c.slice/crio-5e8cd76ac16e016de389a8dc8fb7a3fb851b0af4550e9a6f8bb7f65e1a631694 WatchSource:0}: Error finding container 5e8cd76ac16e016de389a8dc8fb7a3fb851b0af4550e9a6f8bb7f65e1a631694: Status 404 returned error can't find the container with id 5e8cd76ac16e016de389a8dc8fb7a3fb851b0af4550e9a6f8bb7f65e1a631694 Jan 22 09:22:21 crc kubenswrapper[4811]: I0122 09:22:21.051480 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-6d99-account-create-update-rwg8r" Jan 22 09:22:21 crc kubenswrapper[4811]: I0122 09:22:21.344610 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-2sgkl"] Jan 22 09:22:21 crc kubenswrapper[4811]: W0122 09:22:21.502226 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod433d526d_dde0_4815_9863_d934d1a30739.slice/crio-f210aea8cfca7313240f0db3af276c434584e030a74c8e86a9a1ee11da77fafa WatchSource:0}: Error finding container f210aea8cfca7313240f0db3af276c434584e030a74c8e86a9a1ee11da77fafa: Status 404 returned error can't find the container with id f210aea8cfca7313240f0db3af276c434584e030a74c8e86a9a1ee11da77fafa Jan 22 09:22:21 crc kubenswrapper[4811]: I0122 09:22:21.503359 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7b3d-account-create-update-jq477"] Jan 22 09:22:21 crc kubenswrapper[4811]: I0122 09:22:21.619695 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-zxgc6"] Jan 22 09:22:21 crc kubenswrapper[4811]: I0122 09:22:21.708367 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-6d99-account-create-update-rwg8r"] Jan 22 09:22:21 crc kubenswrapper[4811]: I0122 09:22:21.854783 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7b3d-account-create-update-jq477" event={"ID":"433d526d-dde0-4815-9863-d934d1a30739","Type":"ContainerStarted","Data":"f210aea8cfca7313240f0db3af276c434584e030a74c8e86a9a1ee11da77fafa"} Jan 22 09:22:21 crc kubenswrapper[4811]: I0122 09:22:21.855907 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-bzk5b" event={"ID":"64241d93-e4db-4880-a25b-2c68cacb0f5c","Type":"ContainerStarted","Data":"da609af4ccb38238d155ee06b057e9c863a95f9fa6cba50923c1a5783c7e6cea"} Jan 22 09:22:21 crc kubenswrapper[4811]: I0122 09:22:21.855961 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-bzk5b" event={"ID":"64241d93-e4db-4880-a25b-2c68cacb0f5c","Type":"ContainerStarted","Data":"5e8cd76ac16e016de389a8dc8fb7a3fb851b0af4550e9a6f8bb7f65e1a631694"} Jan 22 09:22:21 crc kubenswrapper[4811]: I0122 09:22:21.862460 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-zxgc6" event={"ID":"f33f4ea9-6343-42e6-8666-6e34e9926dd9","Type":"ContainerStarted","Data":"0679de7e5346d2c42e292a28038f30918bc9e4c8326ac8990b58de50582c2979"} Jan 22 09:22:21 crc kubenswrapper[4811]: I0122 09:22:21.864136 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2sgkl" event={"ID":"93312640-d70b-4075-af45-5bf9e6625c73","Type":"ContainerStarted","Data":"176be2bc7fb035d232ab9068cda06c56aa4fedb9535a3352191eb0f7903cd7a2"} Jan 22 09:22:21 crc kubenswrapper[4811]: I0122 09:22:21.865846 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-65d2-account-create-update-sq4sn" event={"ID":"ff4e524c-e9f6-483d-b8ae-d8abf0a7bf21","Type":"ContainerStarted","Data":"1e97e319c5598a605a81c259d39596181433fbd0299c635029cdb0b2737b162d"} Jan 22 09:22:21 crc kubenswrapper[4811]: I0122 09:22:21.865878 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-65d2-account-create-update-sq4sn" event={"ID":"ff4e524c-e9f6-483d-b8ae-d8abf0a7bf21","Type":"ContainerStarted","Data":"e7ba639dd84149306204648ed4f71d056899dbf78bf9c4bcf6c4b0746e62bdf0"} Jan 22 
09:22:21 crc kubenswrapper[4811]: I0122 09:22:21.867374 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6d99-account-create-update-rwg8r" event={"ID":"f255ce7a-ccc6-41ac-901d-92554247b909","Type":"ContainerStarted","Data":"063ba59699f0c8390ca0554cc0c3d3ebf86c60e1462ae992065ccdf7ee76bf2c"} Jan 22 09:22:21 crc kubenswrapper[4811]: I0122 09:22:21.877464 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-bzk5b" podStartSLOduration=1.877451503 podStartE2EDuration="1.877451503s" podCreationTimestamp="2026-01-22 09:22:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:22:21.871403878 +0000 UTC m=+986.193591001" watchObservedRunningTime="2026-01-22 09:22:21.877451503 +0000 UTC m=+986.199638616" Jan 22 09:22:21 crc kubenswrapper[4811]: I0122 09:22:21.895244 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-65d2-account-create-update-sq4sn" podStartSLOduration=1.895224051 podStartE2EDuration="1.895224051s" podCreationTimestamp="2026-01-22 09:22:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:22:21.89344723 +0000 UTC m=+986.215634353" watchObservedRunningTime="2026-01-22 09:22:21.895224051 +0000 UTC m=+986.217411173" Jan 22 09:22:22 crc kubenswrapper[4811]: I0122 09:22:22.880717 4811 generic.go:334] "Generic (PLEG): container finished" podID="ff4e524c-e9f6-483d-b8ae-d8abf0a7bf21" containerID="1e97e319c5598a605a81c259d39596181433fbd0299c635029cdb0b2737b162d" exitCode=0 Jan 22 09:22:22 crc kubenswrapper[4811]: I0122 09:22:22.880861 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-65d2-account-create-update-sq4sn" event={"ID":"ff4e524c-e9f6-483d-b8ae-d8abf0a7bf21","Type":"ContainerDied","Data":"1e97e319c5598a605a81c259d39596181433fbd0299c635029cdb0b2737b162d"} Jan 22 09:22:22 crc kubenswrapper[4811]: I0122 09:22:22.887328 4811 generic.go:334] "Generic (PLEG): container finished" podID="f255ce7a-ccc6-41ac-901d-92554247b909" containerID="92920163f2f0459ac3ad2fb37d7c79c9692c5ff530f60f2da244826a41ee33d7" exitCode=0 Jan 22 09:22:22 crc kubenswrapper[4811]: I0122 09:22:22.887388 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6d99-account-create-update-rwg8r" event={"ID":"f255ce7a-ccc6-41ac-901d-92554247b909","Type":"ContainerDied","Data":"92920163f2f0459ac3ad2fb37d7c79c9692c5ff530f60f2da244826a41ee33d7"} Jan 22 09:22:22 crc kubenswrapper[4811]: I0122 09:22:22.889712 4811 generic.go:334] "Generic (PLEG): container finished" podID="433d526d-dde0-4815-9863-d934d1a30739" containerID="17d599313a9544d613306230c92244873cca42fc11d1f673aaa176c5284b9387" exitCode=0 Jan 22 09:22:22 crc kubenswrapper[4811]: I0122 09:22:22.889752 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7b3d-account-create-update-jq477" event={"ID":"433d526d-dde0-4815-9863-d934d1a30739","Type":"ContainerDied","Data":"17d599313a9544d613306230c92244873cca42fc11d1f673aaa176c5284b9387"} Jan 22 09:22:22 crc kubenswrapper[4811]: I0122 09:22:22.894257 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31ef50f1-c093-47e5-80ee-410c0a972865","Type":"ContainerStarted","Data":"f9f86de77346e28cf14a423bff186a549fee605cf808fec9667dd4fabeac566a"} Jan 22 09:22:22 crc kubenswrapper[4811]: 
I0122 09:22:22.894299 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31ef50f1-c093-47e5-80ee-410c0a972865","Type":"ContainerStarted","Data":"ad0e224c2cb1d23bb86fad6d639b42b2f3b90a6aa2a78d6c7929d87b03795ada"} Jan 22 09:22:22 crc kubenswrapper[4811]: I0122 09:22:22.898985 4811 generic.go:334] "Generic (PLEG): container finished" podID="64241d93-e4db-4880-a25b-2c68cacb0f5c" containerID="da609af4ccb38238d155ee06b057e9c863a95f9fa6cba50923c1a5783c7e6cea" exitCode=0 Jan 22 09:22:22 crc kubenswrapper[4811]: I0122 09:22:22.899117 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-bzk5b" event={"ID":"64241d93-e4db-4880-a25b-2c68cacb0f5c","Type":"ContainerDied","Data":"da609af4ccb38238d155ee06b057e9c863a95f9fa6cba50923c1a5783c7e6cea"} Jan 22 09:22:22 crc kubenswrapper[4811]: I0122 09:22:22.900446 4811 generic.go:334] "Generic (PLEG): container finished" podID="f33f4ea9-6343-42e6-8666-6e34e9926dd9" containerID="d11d0a4ad66ab56b3148ac99e51066b6826d4e4fcf9b984d48dcff5d5f0c572b" exitCode=0 Jan 22 09:22:22 crc kubenswrapper[4811]: I0122 09:22:22.900539 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-zxgc6" event={"ID":"f33f4ea9-6343-42e6-8666-6e34e9926dd9","Type":"ContainerDied","Data":"d11d0a4ad66ab56b3148ac99e51066b6826d4e4fcf9b984d48dcff5d5f0c572b"} Jan 22 09:22:22 crc kubenswrapper[4811]: I0122 09:22:22.903743 4811 generic.go:334] "Generic (PLEG): container finished" podID="93312640-d70b-4075-af45-5bf9e6625c73" containerID="d5df28b27fb6d0fc5527a4fb36e96463df8a165d16cf3b63d79db681fba8d3a6" exitCode=0 Jan 22 09:22:22 crc kubenswrapper[4811]: I0122 09:22:22.903782 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2sgkl" event={"ID":"93312640-d70b-4075-af45-5bf9e6625c73","Type":"ContainerDied","Data":"d5df28b27fb6d0fc5527a4fb36e96463df8a165d16cf3b63d79db681fba8d3a6"} Jan 22 09:22:23 crc kubenswrapper[4811]: I0122 09:22:23.975914 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-864bc8bfcf-nvbzn" Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.060238 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8b9759fbd-gggfs"] Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.060679 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8b9759fbd-gggfs" podUID="3ccb4838-2855-4017-9c00-e8765846d47e" containerName="neutron-api" containerID="cri-o://b5c695e389fdee90e681326b0f6901391f4404a9ccb1ebc6e28255e7dfc021b6" gracePeriod=30 Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.061474 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8b9759fbd-gggfs" podUID="3ccb4838-2855-4017-9c00-e8765846d47e" containerName="neutron-httpd" containerID="cri-o://391d864b3a391cb1cd483f20345fe978b43e8dff672200e2da4b9b0d1d331b6e" gracePeriod=30 Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.375152 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-zxgc6" Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.510211 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9ldw\" (UniqueName: \"kubernetes.io/projected/f33f4ea9-6343-42e6-8666-6e34e9926dd9-kube-api-access-c9ldw\") pod \"f33f4ea9-6343-42e6-8666-6e34e9926dd9\" (UID: \"f33f4ea9-6343-42e6-8666-6e34e9926dd9\") " Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.510506 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f33f4ea9-6343-42e6-8666-6e34e9926dd9-operator-scripts\") pod \"f33f4ea9-6343-42e6-8666-6e34e9926dd9\" (UID: \"f33f4ea9-6343-42e6-8666-6e34e9926dd9\") " Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.513896 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f33f4ea9-6343-42e6-8666-6e34e9926dd9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f33f4ea9-6343-42e6-8666-6e34e9926dd9" (UID: "f33f4ea9-6343-42e6-8666-6e34e9926dd9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.518620 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f33f4ea9-6343-42e6-8666-6e34e9926dd9-kube-api-access-c9ldw" (OuterVolumeSpecName: "kube-api-access-c9ldw") pod "f33f4ea9-6343-42e6-8666-6e34e9926dd9" (UID: "f33f4ea9-6343-42e6-8666-6e34e9926dd9"). InnerVolumeSpecName "kube-api-access-c9ldw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.538895 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7b3d-account-create-update-jq477" Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.557293 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-bzk5b" Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.594915 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-6d99-account-create-update-rwg8r" Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.609777 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2sgkl" Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.610275 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-65d2-account-create-update-sq4sn" Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.612769 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6szs7\" (UniqueName: \"kubernetes.io/projected/64241d93-e4db-4880-a25b-2c68cacb0f5c-kube-api-access-6szs7\") pod \"64241d93-e4db-4880-a25b-2c68cacb0f5c\" (UID: \"64241d93-e4db-4880-a25b-2c68cacb0f5c\") " Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.612843 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64241d93-e4db-4880-a25b-2c68cacb0f5c-operator-scripts\") pod \"64241d93-e4db-4880-a25b-2c68cacb0f5c\" (UID: \"64241d93-e4db-4880-a25b-2c68cacb0f5c\") " Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.613043 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/433d526d-dde0-4815-9863-d934d1a30739-operator-scripts\") pod \"433d526d-dde0-4815-9863-d934d1a30739\" (UID: \"433d526d-dde0-4815-9863-d934d1a30739\") " Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.613123 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vslp9\" (UniqueName: \"kubernetes.io/projected/433d526d-dde0-4815-9863-d934d1a30739-kube-api-access-vslp9\") pod \"433d526d-dde0-4815-9863-d934d1a30739\" (UID: \"433d526d-dde0-4815-9863-d934d1a30739\") " Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.613886 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f33f4ea9-6343-42e6-8666-6e34e9926dd9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.613905 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9ldw\" (UniqueName: \"kubernetes.io/projected/f33f4ea9-6343-42e6-8666-6e34e9926dd9-kube-api-access-c9ldw\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.613993 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/433d526d-dde0-4815-9863-d934d1a30739-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "433d526d-dde0-4815-9863-d934d1a30739" (UID: "433d526d-dde0-4815-9863-d934d1a30739"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.615383 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64241d93-e4db-4880-a25b-2c68cacb0f5c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "64241d93-e4db-4880-a25b-2c68cacb0f5c" (UID: "64241d93-e4db-4880-a25b-2c68cacb0f5c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.621821 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64241d93-e4db-4880-a25b-2c68cacb0f5c-kube-api-access-6szs7" (OuterVolumeSpecName: "kube-api-access-6szs7") pod "64241d93-e4db-4880-a25b-2c68cacb0f5c" (UID: "64241d93-e4db-4880-a25b-2c68cacb0f5c"). InnerVolumeSpecName "kube-api-access-6szs7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.627283 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/433d526d-dde0-4815-9863-d934d1a30739-kube-api-access-vslp9" (OuterVolumeSpecName: "kube-api-access-vslp9") pod "433d526d-dde0-4815-9863-d934d1a30739" (UID: "433d526d-dde0-4815-9863-d934d1a30739"). InnerVolumeSpecName "kube-api-access-vslp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.717776 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8987\" (UniqueName: \"kubernetes.io/projected/93312640-d70b-4075-af45-5bf9e6625c73-kube-api-access-m8987\") pod \"93312640-d70b-4075-af45-5bf9e6625c73\" (UID: \"93312640-d70b-4075-af45-5bf9e6625c73\") " Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.717838 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-748f8\" (UniqueName: \"kubernetes.io/projected/ff4e524c-e9f6-483d-b8ae-d8abf0a7bf21-kube-api-access-748f8\") pod \"ff4e524c-e9f6-483d-b8ae-d8abf0a7bf21\" (UID: \"ff4e524c-e9f6-483d-b8ae-d8abf0a7bf21\") " Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.717880 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hh8w9\" (UniqueName: \"kubernetes.io/projected/f255ce7a-ccc6-41ac-901d-92554247b909-kube-api-access-hh8w9\") pod \"f255ce7a-ccc6-41ac-901d-92554247b909\" (UID: \"f255ce7a-ccc6-41ac-901d-92554247b909\") " Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.717942 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93312640-d70b-4075-af45-5bf9e6625c73-operator-scripts\") pod \"93312640-d70b-4075-af45-5bf9e6625c73\" (UID: \"93312640-d70b-4075-af45-5bf9e6625c73\") " Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.717989 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff4e524c-e9f6-483d-b8ae-d8abf0a7bf21-operator-scripts\") pod \"ff4e524c-e9f6-483d-b8ae-d8abf0a7bf21\" (UID: \"ff4e524c-e9f6-483d-b8ae-d8abf0a7bf21\") " Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.718453 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93312640-d70b-4075-af45-5bf9e6625c73-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "93312640-d70b-4075-af45-5bf9e6625c73" (UID: "93312640-d70b-4075-af45-5bf9e6625c73"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.718753 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f255ce7a-ccc6-41ac-901d-92554247b909-operator-scripts\") pod \"f255ce7a-ccc6-41ac-901d-92554247b909\" (UID: \"f255ce7a-ccc6-41ac-901d-92554247b909\") " Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.718789 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff4e524c-e9f6-483d-b8ae-d8abf0a7bf21-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ff4e524c-e9f6-483d-b8ae-d8abf0a7bf21" (UID: "ff4e524c-e9f6-483d-b8ae-d8abf0a7bf21"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.719075 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f255ce7a-ccc6-41ac-901d-92554247b909-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f255ce7a-ccc6-41ac-901d-92554247b909" (UID: "f255ce7a-ccc6-41ac-901d-92554247b909"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.719306 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64241d93-e4db-4880-a25b-2c68cacb0f5c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.719325 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93312640-d70b-4075-af45-5bf9e6625c73-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.719334 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff4e524c-e9f6-483d-b8ae-d8abf0a7bf21-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.719345 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/433d526d-dde0-4815-9863-d934d1a30739-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.719353 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f255ce7a-ccc6-41ac-901d-92554247b909-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.719362 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vslp9\" (UniqueName: \"kubernetes.io/projected/433d526d-dde0-4815-9863-d934d1a30739-kube-api-access-vslp9\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.719373 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6szs7\" (UniqueName: \"kubernetes.io/projected/64241d93-e4db-4880-a25b-2c68cacb0f5c-kube-api-access-6szs7\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.724738 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff4e524c-e9f6-483d-b8ae-d8abf0a7bf21-kube-api-access-748f8" (OuterVolumeSpecName: "kube-api-access-748f8") pod "ff4e524c-e9f6-483d-b8ae-d8abf0a7bf21" (UID: "ff4e524c-e9f6-483d-b8ae-d8abf0a7bf21"). InnerVolumeSpecName "kube-api-access-748f8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.728903 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f255ce7a-ccc6-41ac-901d-92554247b909-kube-api-access-hh8w9" (OuterVolumeSpecName: "kube-api-access-hh8w9") pod "f255ce7a-ccc6-41ac-901d-92554247b909" (UID: "f255ce7a-ccc6-41ac-901d-92554247b909"). InnerVolumeSpecName "kube-api-access-hh8w9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.728973 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93312640-d70b-4075-af45-5bf9e6625c73-kube-api-access-m8987" (OuterVolumeSpecName: "kube-api-access-m8987") pod "93312640-d70b-4075-af45-5bf9e6625c73" (UID: "93312640-d70b-4075-af45-5bf9e6625c73"). InnerVolumeSpecName "kube-api-access-m8987". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.820678 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8987\" (UniqueName: \"kubernetes.io/projected/93312640-d70b-4075-af45-5bf9e6625c73-kube-api-access-m8987\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.820705 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-748f8\" (UniqueName: \"kubernetes.io/projected/ff4e524c-e9f6-483d-b8ae-d8abf0a7bf21-kube-api-access-748f8\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.820716 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hh8w9\" (UniqueName: \"kubernetes.io/projected/f255ce7a-ccc6-41ac-901d-92554247b909-kube-api-access-hh8w9\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.927336 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7b3d-account-create-update-jq477" Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.927265 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7b3d-account-create-update-jq477" event={"ID":"433d526d-dde0-4815-9863-d934d1a30739","Type":"ContainerDied","Data":"f210aea8cfca7313240f0db3af276c434584e030a74c8e86a9a1ee11da77fafa"} Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.927772 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f210aea8cfca7313240f0db3af276c434584e030a74c8e86a9a1ee11da77fafa" Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.929959 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31ef50f1-c093-47e5-80ee-410c0a972865","Type":"ContainerStarted","Data":"0866a602094589aa794a1d803f6b33d360d7e808c48ed2a4f37cf4aa315d21cb"} Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.930190 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.930181 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="31ef50f1-c093-47e5-80ee-410c0a972865" containerName="ceilometer-central-agent" containerID="cri-o://00cf6f3bf2bc84798d764115be93a799a88a57a56dac19bdc01a0a941399fe43" gracePeriod=30 Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.930475 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="31ef50f1-c093-47e5-80ee-410c0a972865" containerName="proxy-httpd" containerID="cri-o://0866a602094589aa794a1d803f6b33d360d7e808c48ed2a4f37cf4aa315d21cb" gracePeriod=30 Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.930560 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="31ef50f1-c093-47e5-80ee-410c0a972865" containerName="sg-core" 
containerID="cri-o://f9f86de77346e28cf14a423bff186a549fee605cf808fec9667dd4fabeac566a" gracePeriod=30 Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.930607 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="31ef50f1-c093-47e5-80ee-410c0a972865" containerName="ceilometer-notification-agent" containerID="cri-o://ad0e224c2cb1d23bb86fad6d639b42b2f3b90a6aa2a78d6c7929d87b03795ada" gracePeriod=30 Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.938797 4811 generic.go:334] "Generic (PLEG): container finished" podID="3ccb4838-2855-4017-9c00-e8765846d47e" containerID="391d864b3a391cb1cd483f20345fe978b43e8dff672200e2da4b9b0d1d331b6e" exitCode=0 Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.938871 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8b9759fbd-gggfs" event={"ID":"3ccb4838-2855-4017-9c00-e8765846d47e","Type":"ContainerDied","Data":"391d864b3a391cb1cd483f20345fe978b43e8dff672200e2da4b9b0d1d331b6e"} Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.940161 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-bzk5b" event={"ID":"64241d93-e4db-4880-a25b-2c68cacb0f5c","Type":"ContainerDied","Data":"5e8cd76ac16e016de389a8dc8fb7a3fb851b0af4550e9a6f8bb7f65e1a631694"} Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.940187 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e8cd76ac16e016de389a8dc8fb7a3fb851b0af4550e9a6f8bb7f65e1a631694" Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.940242 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-bzk5b" Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.956379 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-zxgc6" event={"ID":"f33f4ea9-6343-42e6-8666-6e34e9926dd9","Type":"ContainerDied","Data":"0679de7e5346d2c42e292a28038f30918bc9e4c8326ac8990b58de50582c2979"} Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.956404 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0679de7e5346d2c42e292a28038f30918bc9e4c8326ac8990b58de50582c2979" Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.956449 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-zxgc6" Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.961933 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2sgkl" event={"ID":"93312640-d70b-4075-af45-5bf9e6625c73","Type":"ContainerDied","Data":"176be2bc7fb035d232ab9068cda06c56aa4fedb9535a3352191eb0f7903cd7a2"} Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.961971 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="176be2bc7fb035d232ab9068cda06c56aa4fedb9535a3352191eb0f7903cd7a2" Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.962044 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2sgkl" Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.967849 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-65d2-account-create-update-sq4sn" Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.969079 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-65d2-account-create-update-sq4sn" event={"ID":"ff4e524c-e9f6-483d-b8ae-d8abf0a7bf21","Type":"ContainerDied","Data":"e7ba639dd84149306204648ed4f71d056899dbf78bf9c4bcf6c4b0746e62bdf0"} Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.969107 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7ba639dd84149306204648ed4f71d056899dbf78bf9c4bcf6c4b0746e62bdf0" Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.970172 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6d99-account-create-update-rwg8r" event={"ID":"f255ce7a-ccc6-41ac-901d-92554247b909","Type":"ContainerDied","Data":"063ba59699f0c8390ca0554cc0c3d3ebf86c60e1462ae992065ccdf7ee76bf2c"} Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.970213 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="063ba59699f0c8390ca0554cc0c3d3ebf86c60e1462ae992065ccdf7ee76bf2c" Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.970252 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-6d99-account-create-update-rwg8r" Jan 22 09:22:24 crc kubenswrapper[4811]: I0122 09:22:24.971483 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.453546539 podStartE2EDuration="6.971452413s" podCreationTimestamp="2026-01-22 09:22:18 +0000 UTC" firstStartedPulling="2026-01-22 09:22:19.670611746 +0000 UTC m=+983.992798869" lastFinishedPulling="2026-01-22 09:22:24.18851762 +0000 UTC m=+988.510704743" observedRunningTime="2026-01-22 09:22:24.962967211 +0000 UTC m=+989.285154324" watchObservedRunningTime="2026-01-22 09:22:24.971452413 +0000 UTC m=+989.293639536" Jan 22 09:22:25 crc kubenswrapper[4811]: I0122 09:22:25.994415 4811 generic.go:334] "Generic (PLEG): container finished" podID="31ef50f1-c093-47e5-80ee-410c0a972865" containerID="0866a602094589aa794a1d803f6b33d360d7e808c48ed2a4f37cf4aa315d21cb" exitCode=0 Jan 22 09:22:25 crc kubenswrapper[4811]: I0122 09:22:25.994721 4811 generic.go:334] "Generic (PLEG): container finished" podID="31ef50f1-c093-47e5-80ee-410c0a972865" containerID="f9f86de77346e28cf14a423bff186a549fee605cf808fec9667dd4fabeac566a" exitCode=2 Jan 22 09:22:25 crc kubenswrapper[4811]: I0122 09:22:25.994731 4811 generic.go:334] "Generic (PLEG): container finished" podID="31ef50f1-c093-47e5-80ee-410c0a972865" containerID="ad0e224c2cb1d23bb86fad6d639b42b2f3b90a6aa2a78d6c7929d87b03795ada" exitCode=0 Jan 22 09:22:26 crc kubenswrapper[4811]: I0122 09:22:26.015939 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31ef50f1-c093-47e5-80ee-410c0a972865","Type":"ContainerDied","Data":"0866a602094589aa794a1d803f6b33d360d7e808c48ed2a4f37cf4aa315d21cb"} Jan 22 09:22:26 crc kubenswrapper[4811]: I0122 09:22:26.015983 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31ef50f1-c093-47e5-80ee-410c0a972865","Type":"ContainerDied","Data":"f9f86de77346e28cf14a423bff186a549fee605cf808fec9667dd4fabeac566a"} Jan 22 09:22:26 crc kubenswrapper[4811]: I0122 09:22:26.016001 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"31ef50f1-c093-47e5-80ee-410c0a972865","Type":"ContainerDied","Data":"ad0e224c2cb1d23bb86fad6d639b42b2f3b90a6aa2a78d6c7929d87b03795ada"} Jan 22 09:22:28 crc kubenswrapper[4811]: I0122 09:22:28.894790 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.001641 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31ef50f1-c093-47e5-80ee-410c0a972865-scripts\") pod \"31ef50f1-c093-47e5-80ee-410c0a972865\" (UID: \"31ef50f1-c093-47e5-80ee-410c0a972865\") " Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.001952 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31ef50f1-c093-47e5-80ee-410c0a972865-config-data\") pod \"31ef50f1-c093-47e5-80ee-410c0a972865\" (UID: \"31ef50f1-c093-47e5-80ee-410c0a972865\") " Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.001991 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31ef50f1-c093-47e5-80ee-410c0a972865-log-httpd\") pod \"31ef50f1-c093-47e5-80ee-410c0a972865\" (UID: \"31ef50f1-c093-47e5-80ee-410c0a972865\") " Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.002161 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31ef50f1-c093-47e5-80ee-410c0a972865-combined-ca-bundle\") pod \"31ef50f1-c093-47e5-80ee-410c0a972865\" (UID: \"31ef50f1-c093-47e5-80ee-410c0a972865\") " Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.002198 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqcrs\" (UniqueName: \"kubernetes.io/projected/31ef50f1-c093-47e5-80ee-410c0a972865-kube-api-access-gqcrs\") pod \"31ef50f1-c093-47e5-80ee-410c0a972865\" (UID: \"31ef50f1-c093-47e5-80ee-410c0a972865\") " Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.002538 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/31ef50f1-c093-47e5-80ee-410c0a972865-sg-core-conf-yaml\") pod \"31ef50f1-c093-47e5-80ee-410c0a972865\" (UID: \"31ef50f1-c093-47e5-80ee-410c0a972865\") " Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.002678 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31ef50f1-c093-47e5-80ee-410c0a972865-run-httpd\") pod \"31ef50f1-c093-47e5-80ee-410c0a972865\" (UID: \"31ef50f1-c093-47e5-80ee-410c0a972865\") " Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.002710 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31ef50f1-c093-47e5-80ee-410c0a972865-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "31ef50f1-c093-47e5-80ee-410c0a972865" (UID: "31ef50f1-c093-47e5-80ee-410c0a972865"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.002915 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31ef50f1-c093-47e5-80ee-410c0a972865-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "31ef50f1-c093-47e5-80ee-410c0a972865" (UID: "31ef50f1-c093-47e5-80ee-410c0a972865"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.003975 4811 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31ef50f1-c093-47e5-80ee-410c0a972865-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.004007 4811 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31ef50f1-c093-47e5-80ee-410c0a972865-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.008395 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31ef50f1-c093-47e5-80ee-410c0a972865-kube-api-access-gqcrs" (OuterVolumeSpecName: "kube-api-access-gqcrs") pod "31ef50f1-c093-47e5-80ee-410c0a972865" (UID: "31ef50f1-c093-47e5-80ee-410c0a972865"). InnerVolumeSpecName "kube-api-access-gqcrs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.008473 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31ef50f1-c093-47e5-80ee-410c0a972865-scripts" (OuterVolumeSpecName: "scripts") pod "31ef50f1-c093-47e5-80ee-410c0a972865" (UID: "31ef50f1-c093-47e5-80ee-410c0a972865"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.022768 4811 generic.go:334] "Generic (PLEG): container finished" podID="31ef50f1-c093-47e5-80ee-410c0a972865" containerID="00cf6f3bf2bc84798d764115be93a799a88a57a56dac19bdc01a0a941399fe43" exitCode=0 Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.022817 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31ef50f1-c093-47e5-80ee-410c0a972865","Type":"ContainerDied","Data":"00cf6f3bf2bc84798d764115be93a799a88a57a56dac19bdc01a0a941399fe43"} Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.022841 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31ef50f1-c093-47e5-80ee-410c0a972865","Type":"ContainerDied","Data":"d637d4e4624e333bfa2e81bca7c609483e7dfda4268b16b8d2f57e62b489d858"} Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.022857 4811 scope.go:117] "RemoveContainer" containerID="0866a602094589aa794a1d803f6b33d360d7e808c48ed2a4f37cf4aa315d21cb" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.022952 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.031331 4811 generic.go:334] "Generic (PLEG): container finished" podID="3ccb4838-2855-4017-9c00-e8765846d47e" containerID="b5c695e389fdee90e681326b0f6901391f4404a9ccb1ebc6e28255e7dfc021b6" exitCode=0 Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.031357 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8b9759fbd-gggfs" event={"ID":"3ccb4838-2855-4017-9c00-e8765846d47e","Type":"ContainerDied","Data":"b5c695e389fdee90e681326b0f6901391f4404a9ccb1ebc6e28255e7dfc021b6"} Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.041515 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31ef50f1-c093-47e5-80ee-410c0a972865-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "31ef50f1-c093-47e5-80ee-410c0a972865" (UID: "31ef50f1-c093-47e5-80ee-410c0a972865"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.064254 4811 scope.go:117] "RemoveContainer" containerID="f9f86de77346e28cf14a423bff186a549fee605cf808fec9667dd4fabeac566a" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.082424 4811 scope.go:117] "RemoveContainer" containerID="ad0e224c2cb1d23bb86fad6d639b42b2f3b90a6aa2a78d6c7929d87b03795ada" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.089920 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31ef50f1-c093-47e5-80ee-410c0a972865-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "31ef50f1-c093-47e5-80ee-410c0a972865" (UID: "31ef50f1-c093-47e5-80ee-410c0a972865"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.104318 4811 scope.go:117] "RemoveContainer" containerID="00cf6f3bf2bc84798d764115be93a799a88a57a56dac19bdc01a0a941399fe43" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.105910 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqcrs\" (UniqueName: \"kubernetes.io/projected/31ef50f1-c093-47e5-80ee-410c0a972865-kube-api-access-gqcrs\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.105929 4811 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/31ef50f1-c093-47e5-80ee-410c0a972865-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.105939 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31ef50f1-c093-47e5-80ee-410c0a972865-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.105949 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31ef50f1-c093-47e5-80ee-410c0a972865-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.126850 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31ef50f1-c093-47e5-80ee-410c0a972865-config-data" (OuterVolumeSpecName: "config-data") pod "31ef50f1-c093-47e5-80ee-410c0a972865" (UID: "31ef50f1-c093-47e5-80ee-410c0a972865"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.134072 4811 scope.go:117] "RemoveContainer" containerID="0866a602094589aa794a1d803f6b33d360d7e808c48ed2a4f37cf4aa315d21cb" Jan 22 09:22:29 crc kubenswrapper[4811]: E0122 09:22:29.134888 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0866a602094589aa794a1d803f6b33d360d7e808c48ed2a4f37cf4aa315d21cb\": container with ID starting with 0866a602094589aa794a1d803f6b33d360d7e808c48ed2a4f37cf4aa315d21cb not found: ID does not exist" containerID="0866a602094589aa794a1d803f6b33d360d7e808c48ed2a4f37cf4aa315d21cb" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.134934 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0866a602094589aa794a1d803f6b33d360d7e808c48ed2a4f37cf4aa315d21cb"} err="failed to get container status \"0866a602094589aa794a1d803f6b33d360d7e808c48ed2a4f37cf4aa315d21cb\": rpc error: code = NotFound desc = could not find container \"0866a602094589aa794a1d803f6b33d360d7e808c48ed2a4f37cf4aa315d21cb\": container with ID starting with 0866a602094589aa794a1d803f6b33d360d7e808c48ed2a4f37cf4aa315d21cb not found: ID does not exist" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.134972 4811 scope.go:117] "RemoveContainer" containerID="f9f86de77346e28cf14a423bff186a549fee605cf808fec9667dd4fabeac566a" Jan 22 09:22:29 crc kubenswrapper[4811]: E0122 09:22:29.135324 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9f86de77346e28cf14a423bff186a549fee605cf808fec9667dd4fabeac566a\": container with ID starting with f9f86de77346e28cf14a423bff186a549fee605cf808fec9667dd4fabeac566a not found: ID does not exist" containerID="f9f86de77346e28cf14a423bff186a549fee605cf808fec9667dd4fabeac566a" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.135359 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9f86de77346e28cf14a423bff186a549fee605cf808fec9667dd4fabeac566a"} err="failed to get container status \"f9f86de77346e28cf14a423bff186a549fee605cf808fec9667dd4fabeac566a\": rpc error: code = NotFound desc = could not find container \"f9f86de77346e28cf14a423bff186a549fee605cf808fec9667dd4fabeac566a\": container with ID starting with f9f86de77346e28cf14a423bff186a549fee605cf808fec9667dd4fabeac566a not found: ID does not exist" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.135382 4811 scope.go:117] "RemoveContainer" containerID="ad0e224c2cb1d23bb86fad6d639b42b2f3b90a6aa2a78d6c7929d87b03795ada" Jan 22 09:22:29 crc kubenswrapper[4811]: E0122 09:22:29.136170 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad0e224c2cb1d23bb86fad6d639b42b2f3b90a6aa2a78d6c7929d87b03795ada\": container with ID starting with ad0e224c2cb1d23bb86fad6d639b42b2f3b90a6aa2a78d6c7929d87b03795ada not found: ID does not exist" containerID="ad0e224c2cb1d23bb86fad6d639b42b2f3b90a6aa2a78d6c7929d87b03795ada" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.136197 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad0e224c2cb1d23bb86fad6d639b42b2f3b90a6aa2a78d6c7929d87b03795ada"} err="failed to get container status \"ad0e224c2cb1d23bb86fad6d639b42b2f3b90a6aa2a78d6c7929d87b03795ada\": rpc error: code = NotFound desc = could not 
find container \"ad0e224c2cb1d23bb86fad6d639b42b2f3b90a6aa2a78d6c7929d87b03795ada\": container with ID starting with ad0e224c2cb1d23bb86fad6d639b42b2f3b90a6aa2a78d6c7929d87b03795ada not found: ID does not exist" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.136212 4811 scope.go:117] "RemoveContainer" containerID="00cf6f3bf2bc84798d764115be93a799a88a57a56dac19bdc01a0a941399fe43" Jan 22 09:22:29 crc kubenswrapper[4811]: E0122 09:22:29.136497 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00cf6f3bf2bc84798d764115be93a799a88a57a56dac19bdc01a0a941399fe43\": container with ID starting with 00cf6f3bf2bc84798d764115be93a799a88a57a56dac19bdc01a0a941399fe43 not found: ID does not exist" containerID="00cf6f3bf2bc84798d764115be93a799a88a57a56dac19bdc01a0a941399fe43" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.136552 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00cf6f3bf2bc84798d764115be93a799a88a57a56dac19bdc01a0a941399fe43"} err="failed to get container status \"00cf6f3bf2bc84798d764115be93a799a88a57a56dac19bdc01a0a941399fe43\": rpc error: code = NotFound desc = could not find container \"00cf6f3bf2bc84798d764115be93a799a88a57a56dac19bdc01a0a941399fe43\": container with ID starting with 00cf6f3bf2bc84798d764115be93a799a88a57a56dac19bdc01a0a941399fe43 not found: ID does not exist" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.209712 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31ef50f1-c093-47e5-80ee-410c0a972865-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.332217 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8b9759fbd-gggfs" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.415701 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3ccb4838-2855-4017-9c00-e8765846d47e-httpd-config\") pod \"3ccb4838-2855-4017-9c00-e8765846d47e\" (UID: \"3ccb4838-2855-4017-9c00-e8765846d47e\") " Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.415792 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ccb4838-2855-4017-9c00-e8765846d47e-combined-ca-bundle\") pod \"3ccb4838-2855-4017-9c00-e8765846d47e\" (UID: \"3ccb4838-2855-4017-9c00-e8765846d47e\") " Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.416742 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3ccb4838-2855-4017-9c00-e8765846d47e-config\") pod \"3ccb4838-2855-4017-9c00-e8765846d47e\" (UID: \"3ccb4838-2855-4017-9c00-e8765846d47e\") " Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.416846 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ccb4838-2855-4017-9c00-e8765846d47e-ovndb-tls-certs\") pod \"3ccb4838-2855-4017-9c00-e8765846d47e\" (UID: \"3ccb4838-2855-4017-9c00-e8765846d47e\") " Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.416882 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lw5l\" (UniqueName: \"kubernetes.io/projected/3ccb4838-2855-4017-9c00-e8765846d47e-kube-api-access-6lw5l\") pod \"3ccb4838-2855-4017-9c00-e8765846d47e\" (UID: \"3ccb4838-2855-4017-9c00-e8765846d47e\") " Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.424749 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ccb4838-2855-4017-9c00-e8765846d47e-kube-api-access-6lw5l" (OuterVolumeSpecName: "kube-api-access-6lw5l") pod "3ccb4838-2855-4017-9c00-e8765846d47e" (UID: "3ccb4838-2855-4017-9c00-e8765846d47e"). InnerVolumeSpecName "kube-api-access-6lw5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.424800 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ccb4838-2855-4017-9c00-e8765846d47e-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "3ccb4838-2855-4017-9c00-e8765846d47e" (UID: "3ccb4838-2855-4017-9c00-e8765846d47e"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.444606 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.456614 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.465203 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:22:29 crc kubenswrapper[4811]: E0122 09:22:29.465550 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31ef50f1-c093-47e5-80ee-410c0a972865" containerName="proxy-httpd" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.465572 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="31ef50f1-c093-47e5-80ee-410c0a972865" containerName="proxy-httpd" Jan 22 09:22:29 crc kubenswrapper[4811]: E0122 09:22:29.465584 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f255ce7a-ccc6-41ac-901d-92554247b909" containerName="mariadb-account-create-update" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.465591 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f255ce7a-ccc6-41ac-901d-92554247b909" containerName="mariadb-account-create-update" Jan 22 09:22:29 crc kubenswrapper[4811]: E0122 09:22:29.465599 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31ef50f1-c093-47e5-80ee-410c0a972865" containerName="sg-core" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.465606 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="31ef50f1-c093-47e5-80ee-410c0a972865" containerName="sg-core" Jan 22 09:22:29 crc kubenswrapper[4811]: E0122 09:22:29.465615 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93312640-d70b-4075-af45-5bf9e6625c73" containerName="mariadb-database-create" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.465620 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="93312640-d70b-4075-af45-5bf9e6625c73" containerName="mariadb-database-create" Jan 22 09:22:29 crc kubenswrapper[4811]: E0122 09:22:29.465643 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff4e524c-e9f6-483d-b8ae-d8abf0a7bf21" containerName="mariadb-account-create-update" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.465649 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff4e524c-e9f6-483d-b8ae-d8abf0a7bf21" containerName="mariadb-account-create-update" Jan 22 09:22:29 crc kubenswrapper[4811]: E0122 09:22:29.465657 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ccb4838-2855-4017-9c00-e8765846d47e" containerName="neutron-api" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.465663 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ccb4838-2855-4017-9c00-e8765846d47e" containerName="neutron-api" Jan 22 09:22:29 crc kubenswrapper[4811]: E0122 09:22:29.465676 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31ef50f1-c093-47e5-80ee-410c0a972865" containerName="ceilometer-notification-agent" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.465682 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="31ef50f1-c093-47e5-80ee-410c0a972865" containerName="ceilometer-notification-agent" Jan 22 09:22:29 crc kubenswrapper[4811]: E0122 09:22:29.465691 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ccb4838-2855-4017-9c00-e8765846d47e" containerName="neutron-httpd" Jan 22 
09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.465696 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ccb4838-2855-4017-9c00-e8765846d47e" containerName="neutron-httpd" Jan 22 09:22:29 crc kubenswrapper[4811]: E0122 09:22:29.465705 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="433d526d-dde0-4815-9863-d934d1a30739" containerName="mariadb-account-create-update" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.465710 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="433d526d-dde0-4815-9863-d934d1a30739" containerName="mariadb-account-create-update" Jan 22 09:22:29 crc kubenswrapper[4811]: E0122 09:22:29.465721 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31ef50f1-c093-47e5-80ee-410c0a972865" containerName="ceilometer-central-agent" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.465727 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="31ef50f1-c093-47e5-80ee-410c0a972865" containerName="ceilometer-central-agent" Jan 22 09:22:29 crc kubenswrapper[4811]: E0122 09:22:29.465737 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64241d93-e4db-4880-a25b-2c68cacb0f5c" containerName="mariadb-database-create" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.465745 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="64241d93-e4db-4880-a25b-2c68cacb0f5c" containerName="mariadb-database-create" Jan 22 09:22:29 crc kubenswrapper[4811]: E0122 09:22:29.465755 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f33f4ea9-6343-42e6-8666-6e34e9926dd9" containerName="mariadb-database-create" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.465761 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f33f4ea9-6343-42e6-8666-6e34e9926dd9" containerName="mariadb-database-create" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.465908 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="93312640-d70b-4075-af45-5bf9e6625c73" containerName="mariadb-database-create" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.465919 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="433d526d-dde0-4815-9863-d934d1a30739" containerName="mariadb-account-create-update" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.465926 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="31ef50f1-c093-47e5-80ee-410c0a972865" containerName="ceilometer-notification-agent" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.465935 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ccb4838-2855-4017-9c00-e8765846d47e" containerName="neutron-httpd" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.465942 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="64241d93-e4db-4880-a25b-2c68cacb0f5c" containerName="mariadb-database-create" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.465951 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="f255ce7a-ccc6-41ac-901d-92554247b909" containerName="mariadb-account-create-update" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.465961 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="f33f4ea9-6343-42e6-8666-6e34e9926dd9" containerName="mariadb-database-create" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.465969 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="31ef50f1-c093-47e5-80ee-410c0a972865" containerName="ceilometer-central-agent" Jan 22 
09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.465976 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="31ef50f1-c093-47e5-80ee-410c0a972865" containerName="proxy-httpd" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.465984 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ccb4838-2855-4017-9c00-e8765846d47e" containerName="neutron-api" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.465993 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff4e524c-e9f6-483d-b8ae-d8abf0a7bf21" containerName="mariadb-account-create-update" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.466003 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="31ef50f1-c093-47e5-80ee-410c0a972865" containerName="sg-core" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.466866 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ccb4838-2855-4017-9c00-e8765846d47e-config" (OuterVolumeSpecName: "config") pod "3ccb4838-2855-4017-9c00-e8765846d47e" (UID: "3ccb4838-2855-4017-9c00-e8765846d47e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.467606 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.470377 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.474291 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.477941 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.489012 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ccb4838-2855-4017-9c00-e8765846d47e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ccb4838-2855-4017-9c00-e8765846d47e" (UID: "3ccb4838-2855-4017-9c00-e8765846d47e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.502289 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ccb4838-2855-4017-9c00-e8765846d47e-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "3ccb4838-2855-4017-9c00-e8765846d47e" (UID: "3ccb4838-2855-4017-9c00-e8765846d47e"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.522399 4811 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3ccb4838-2855-4017-9c00-e8765846d47e-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.522428 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ccb4838-2855-4017-9c00-e8765846d47e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.522438 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/3ccb4838-2855-4017-9c00-e8765846d47e-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.522449 4811 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ccb4838-2855-4017-9c00-e8765846d47e-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.522459 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lw5l\" (UniqueName: \"kubernetes.io/projected/3ccb4838-2855-4017-9c00-e8765846d47e-kube-api-access-6lw5l\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.624454 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10556691-8d2e-434b-b43f-447a3c122f37-config-data\") pod \"ceilometer-0\" (UID: \"10556691-8d2e-434b-b43f-447a3c122f37\") " pod="openstack/ceilometer-0" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.624499 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10556691-8d2e-434b-b43f-447a3c122f37-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"10556691-8d2e-434b-b43f-447a3c122f37\") " pod="openstack/ceilometer-0" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.624535 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10556691-8d2e-434b-b43f-447a3c122f37-log-httpd\") pod \"ceilometer-0\" (UID: \"10556691-8d2e-434b-b43f-447a3c122f37\") " pod="openstack/ceilometer-0" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.624576 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10556691-8d2e-434b-b43f-447a3c122f37-run-httpd\") pod \"ceilometer-0\" (UID: \"10556691-8d2e-434b-b43f-447a3c122f37\") " pod="openstack/ceilometer-0" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.624651 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gxtn\" (UniqueName: \"kubernetes.io/projected/10556691-8d2e-434b-b43f-447a3c122f37-kube-api-access-8gxtn\") pod \"ceilometer-0\" (UID: \"10556691-8d2e-434b-b43f-447a3c122f37\") " pod="openstack/ceilometer-0" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.624735 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10556691-8d2e-434b-b43f-447a3c122f37-scripts\") pod \"ceilometer-0\" (UID: 
\"10556691-8d2e-434b-b43f-447a3c122f37\") " pod="openstack/ceilometer-0" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.624768 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/10556691-8d2e-434b-b43f-447a3c122f37-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"10556691-8d2e-434b-b43f-447a3c122f37\") " pod="openstack/ceilometer-0" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.726511 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10556691-8d2e-434b-b43f-447a3c122f37-config-data\") pod \"ceilometer-0\" (UID: \"10556691-8d2e-434b-b43f-447a3c122f37\") " pod="openstack/ceilometer-0" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.726562 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10556691-8d2e-434b-b43f-447a3c122f37-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"10556691-8d2e-434b-b43f-447a3c122f37\") " pod="openstack/ceilometer-0" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.726606 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10556691-8d2e-434b-b43f-447a3c122f37-log-httpd\") pod \"ceilometer-0\" (UID: \"10556691-8d2e-434b-b43f-447a3c122f37\") " pod="openstack/ceilometer-0" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.726691 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10556691-8d2e-434b-b43f-447a3c122f37-run-httpd\") pod \"ceilometer-0\" (UID: \"10556691-8d2e-434b-b43f-447a3c122f37\") " pod="openstack/ceilometer-0" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.726779 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gxtn\" (UniqueName: \"kubernetes.io/projected/10556691-8d2e-434b-b43f-447a3c122f37-kube-api-access-8gxtn\") pod \"ceilometer-0\" (UID: \"10556691-8d2e-434b-b43f-447a3c122f37\") " pod="openstack/ceilometer-0" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.726917 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10556691-8d2e-434b-b43f-447a3c122f37-scripts\") pod \"ceilometer-0\" (UID: \"10556691-8d2e-434b-b43f-447a3c122f37\") " pod="openstack/ceilometer-0" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.726966 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/10556691-8d2e-434b-b43f-447a3c122f37-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"10556691-8d2e-434b-b43f-447a3c122f37\") " pod="openstack/ceilometer-0" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.727828 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10556691-8d2e-434b-b43f-447a3c122f37-log-httpd\") pod \"ceilometer-0\" (UID: \"10556691-8d2e-434b-b43f-447a3c122f37\") " pod="openstack/ceilometer-0" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.727843 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10556691-8d2e-434b-b43f-447a3c122f37-run-httpd\") pod \"ceilometer-0\" (UID: 
\"10556691-8d2e-434b-b43f-447a3c122f37\") " pod="openstack/ceilometer-0" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.732876 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/10556691-8d2e-434b-b43f-447a3c122f37-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"10556691-8d2e-434b-b43f-447a3c122f37\") " pod="openstack/ceilometer-0" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.733211 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10556691-8d2e-434b-b43f-447a3c122f37-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"10556691-8d2e-434b-b43f-447a3c122f37\") " pod="openstack/ceilometer-0" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.733596 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10556691-8d2e-434b-b43f-447a3c122f37-scripts\") pod \"ceilometer-0\" (UID: \"10556691-8d2e-434b-b43f-447a3c122f37\") " pod="openstack/ceilometer-0" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.735746 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10556691-8d2e-434b-b43f-447a3c122f37-config-data\") pod \"ceilometer-0\" (UID: \"10556691-8d2e-434b-b43f-447a3c122f37\") " pod="openstack/ceilometer-0" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.750392 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gxtn\" (UniqueName: \"kubernetes.io/projected/10556691-8d2e-434b-b43f-447a3c122f37-kube-api-access-8gxtn\") pod \"ceilometer-0\" (UID: \"10556691-8d2e-434b-b43f-447a3c122f37\") " pod="openstack/ceilometer-0" Jan 22 09:22:29 crc kubenswrapper[4811]: I0122 09:22:29.789003 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 09:22:30 crc kubenswrapper[4811]: I0122 09:22:30.007539 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31ef50f1-c093-47e5-80ee-410c0a972865" path="/var/lib/kubelet/pods/31ef50f1-c093-47e5-80ee-410c0a972865/volumes" Jan 22 09:22:30 crc kubenswrapper[4811]: I0122 09:22:30.045140 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8b9759fbd-gggfs" event={"ID":"3ccb4838-2855-4017-9c00-e8765846d47e","Type":"ContainerDied","Data":"db5c72b99798c0998f8f31e711dcaa1ba168fbadaef7902707f3d0325f8b399b"} Jan 22 09:22:30 crc kubenswrapper[4811]: I0122 09:22:30.045223 4811 scope.go:117] "RemoveContainer" containerID="391d864b3a391cb1cd483f20345fe978b43e8dff672200e2da4b9b0d1d331b6e" Jan 22 09:22:30 crc kubenswrapper[4811]: I0122 09:22:30.045392 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8b9759fbd-gggfs" Jan 22 09:22:30 crc kubenswrapper[4811]: I0122 09:22:30.066094 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8b9759fbd-gggfs"] Jan 22 09:22:30 crc kubenswrapper[4811]: I0122 09:22:30.075322 4811 scope.go:117] "RemoveContainer" containerID="b5c695e389fdee90e681326b0f6901391f4404a9ccb1ebc6e28255e7dfc021b6" Jan 22 09:22:30 crc kubenswrapper[4811]: I0122 09:22:30.076381 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-8b9759fbd-gggfs"] Jan 22 09:22:30 crc kubenswrapper[4811]: I0122 09:22:30.229854 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:22:30 crc kubenswrapper[4811]: I0122 09:22:30.628201 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4v42r"] Jan 22 09:22:30 crc kubenswrapper[4811]: I0122 09:22:30.629127 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4v42r" Jan 22 09:22:30 crc kubenswrapper[4811]: I0122 09:22:30.635892 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 22 09:22:30 crc kubenswrapper[4811]: I0122 09:22:30.635896 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 22 09:22:30 crc kubenswrapper[4811]: I0122 09:22:30.636196 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-76dll" Jan 22 09:22:30 crc kubenswrapper[4811]: I0122 09:22:30.647440 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4v42r"] Jan 22 09:22:30 crc kubenswrapper[4811]: I0122 09:22:30.750205 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67f94f00-3db6-4cd9-a4e4-5c466abb76c5-scripts\") pod \"nova-cell0-conductor-db-sync-4v42r\" (UID: \"67f94f00-3db6-4cd9-a4e4-5c466abb76c5\") " pod="openstack/nova-cell0-conductor-db-sync-4v42r" Jan 22 09:22:30 crc kubenswrapper[4811]: I0122 09:22:30.750703 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67f94f00-3db6-4cd9-a4e4-5c466abb76c5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-4v42r\" (UID: \"67f94f00-3db6-4cd9-a4e4-5c466abb76c5\") " pod="openstack/nova-cell0-conductor-db-sync-4v42r" Jan 22 09:22:30 crc kubenswrapper[4811]: I0122 09:22:30.750772 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x6vr\" (UniqueName: \"kubernetes.io/projected/67f94f00-3db6-4cd9-a4e4-5c466abb76c5-kube-api-access-7x6vr\") pod \"nova-cell0-conductor-db-sync-4v42r\" (UID: \"67f94f00-3db6-4cd9-a4e4-5c466abb76c5\") " pod="openstack/nova-cell0-conductor-db-sync-4v42r" Jan 22 09:22:30 crc kubenswrapper[4811]: I0122 09:22:30.750929 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67f94f00-3db6-4cd9-a4e4-5c466abb76c5-config-data\") pod \"nova-cell0-conductor-db-sync-4v42r\" (UID: \"67f94f00-3db6-4cd9-a4e4-5c466abb76c5\") " pod="openstack/nova-cell0-conductor-db-sync-4v42r" Jan 22 09:22:30 crc kubenswrapper[4811]: I0122 09:22:30.853132 4811 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67f94f00-3db6-4cd9-a4e4-5c466abb76c5-scripts\") pod \"nova-cell0-conductor-db-sync-4v42r\" (UID: \"67f94f00-3db6-4cd9-a4e4-5c466abb76c5\") " pod="openstack/nova-cell0-conductor-db-sync-4v42r" Jan 22 09:22:30 crc kubenswrapper[4811]: I0122 09:22:30.853304 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67f94f00-3db6-4cd9-a4e4-5c466abb76c5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-4v42r\" (UID: \"67f94f00-3db6-4cd9-a4e4-5c466abb76c5\") " pod="openstack/nova-cell0-conductor-db-sync-4v42r" Jan 22 09:22:30 crc kubenswrapper[4811]: I0122 09:22:30.853448 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x6vr\" (UniqueName: \"kubernetes.io/projected/67f94f00-3db6-4cd9-a4e4-5c466abb76c5-kube-api-access-7x6vr\") pod \"nova-cell0-conductor-db-sync-4v42r\" (UID: \"67f94f00-3db6-4cd9-a4e4-5c466abb76c5\") " pod="openstack/nova-cell0-conductor-db-sync-4v42r" Jan 22 09:22:30 crc kubenswrapper[4811]: I0122 09:22:30.853709 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67f94f00-3db6-4cd9-a4e4-5c466abb76c5-config-data\") pod \"nova-cell0-conductor-db-sync-4v42r\" (UID: \"67f94f00-3db6-4cd9-a4e4-5c466abb76c5\") " pod="openstack/nova-cell0-conductor-db-sync-4v42r" Jan 22 09:22:30 crc kubenswrapper[4811]: I0122 09:22:30.861472 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67f94f00-3db6-4cd9-a4e4-5c466abb76c5-config-data\") pod \"nova-cell0-conductor-db-sync-4v42r\" (UID: \"67f94f00-3db6-4cd9-a4e4-5c466abb76c5\") " pod="openstack/nova-cell0-conductor-db-sync-4v42r" Jan 22 09:22:30 crc kubenswrapper[4811]: I0122 09:22:30.862022 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67f94f00-3db6-4cd9-a4e4-5c466abb76c5-scripts\") pod \"nova-cell0-conductor-db-sync-4v42r\" (UID: \"67f94f00-3db6-4cd9-a4e4-5c466abb76c5\") " pod="openstack/nova-cell0-conductor-db-sync-4v42r" Jan 22 09:22:30 crc kubenswrapper[4811]: I0122 09:22:30.866389 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67f94f00-3db6-4cd9-a4e4-5c466abb76c5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-4v42r\" (UID: \"67f94f00-3db6-4cd9-a4e4-5c466abb76c5\") " pod="openstack/nova-cell0-conductor-db-sync-4v42r" Jan 22 09:22:30 crc kubenswrapper[4811]: I0122 09:22:30.873386 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x6vr\" (UniqueName: \"kubernetes.io/projected/67f94f00-3db6-4cd9-a4e4-5c466abb76c5-kube-api-access-7x6vr\") pod \"nova-cell0-conductor-db-sync-4v42r\" (UID: \"67f94f00-3db6-4cd9-a4e4-5c466abb76c5\") " pod="openstack/nova-cell0-conductor-db-sync-4v42r" Jan 22 09:22:30 crc kubenswrapper[4811]: I0122 09:22:30.985444 4811 util.go:30] "No sandbox for pod can be found. 
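The mount sequence above is the volume manager's reconciler at work: reconciler_common.go:245 logs the desired-state check (VerifyControllerAttachedVolume), reconciler_common.go:218 the start of the mount, and operation_generator.go:637 the idempotent SetUp completing. A toy model of that desired-vs-actual loop (simplified types, nothing from the real kubelet; the truncated UniqueNames are illustrative):

```go
// Toy model of the three log phases above: VerifyControllerAttachedVolume
// -> MountVolume started -> MountVolume.SetUp succeeded, driven by
// comparing desired state against actual state.
package main

import "fmt"

type volume struct{ uniqueName, pod string }

func main() {
	desired := []volume{ // from the pod spec, per the log's UniqueNames
		{"kubernetes.io/secret/67f94f00...-scripts", "nova-cell0-conductor-db-sync-4v42r"},
		{"kubernetes.io/secret/67f94f00...-config-data", "nova-cell0-conductor-db-sync-4v42r"},
	}
	actual := map[string]bool{} // uniqueName -> mounted

	for _, v := range desired {
		if actual[v.uniqueName] {
			continue // already reconciled: SetUp is a no-op on a later pass
		}
		fmt.Printf("VerifyControllerAttachedVolume started for %q\n", v.uniqueName)
		fmt.Printf("MountVolume started for %q pod %q\n", v.uniqueName, v.pod)
		// Real SetUp materializes the secret payload under
		// /var/lib/kubelet/pods/<UID>/volumes/... (elided here).
		actual[v.uniqueName] = true
		fmt.Printf("MountVolume.SetUp succeeded for %q\n", v.uniqueName)
	}
}
```

Each pass acts only on volumes present in the desired state but missing from the actual state, which is why a repeated sync after "SetUp succeeded" produces no further log entries for the same volume.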
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4v42r" Jan 22 09:22:31 crc kubenswrapper[4811]: I0122 09:22:31.073922 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"10556691-8d2e-434b-b43f-447a3c122f37","Type":"ContainerStarted","Data":"558a905a05ab0259bc4b20e03972ae5f0b5347363258b525317494b7f0c0bc9a"} Jan 22 09:22:31 crc kubenswrapper[4811]: I0122 09:22:31.074151 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"10556691-8d2e-434b-b43f-447a3c122f37","Type":"ContainerStarted","Data":"958ff839a323175bd21337f3695e537d4d852f86dcf7bc90cea2103c9ea07dd4"} Jan 22 09:22:31 crc kubenswrapper[4811]: I0122 09:22:31.426472 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4v42r"] Jan 22 09:22:32 crc kubenswrapper[4811]: I0122 09:22:32.003394 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ccb4838-2855-4017-9c00-e8765846d47e" path="/var/lib/kubelet/pods/3ccb4838-2855-4017-9c00-e8765846d47e/volumes" Jan 22 09:22:32 crc kubenswrapper[4811]: I0122 09:22:32.086474 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4v42r" event={"ID":"67f94f00-3db6-4cd9-a4e4-5c466abb76c5","Type":"ContainerStarted","Data":"a109e4560cb65a5491be00ff660fdbd96cf10059d73035aed62f12f0a61f4609"} Jan 22 09:22:32 crc kubenswrapper[4811]: I0122 09:22:32.088337 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"10556691-8d2e-434b-b43f-447a3c122f37","Type":"ContainerStarted","Data":"61dbdfeef6f008ab56058ce8f11c449091b8b5e3324e88b2b604762ec1543812"} Jan 22 09:22:33 crc kubenswrapper[4811]: I0122 09:22:33.101863 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"10556691-8d2e-434b-b43f-447a3c122f37","Type":"ContainerStarted","Data":"e42e1c803d0aa27c252b883b9afc445237cb02674b2d9e39e48bd5f88fb2d14d"} Jan 22 09:22:34 crc kubenswrapper[4811]: I0122 09:22:34.117642 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"10556691-8d2e-434b-b43f-447a3c122f37","Type":"ContainerStarted","Data":"1778041c95dc1c837c21cc15d5d98c3f3c7eb91aa43268d1a3862ffbf600993e"} Jan 22 09:22:34 crc kubenswrapper[4811]: I0122 09:22:34.118277 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 22 09:22:34 crc kubenswrapper[4811]: I0122 09:22:34.141583 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.6313216910000001 podStartE2EDuration="5.1415657s" podCreationTimestamp="2026-01-22 09:22:29 +0000 UTC" firstStartedPulling="2026-01-22 09:22:30.234715382 +0000 UTC m=+994.556902506" lastFinishedPulling="2026-01-22 09:22:33.744959392 +0000 UTC m=+998.067146515" observedRunningTime="2026-01-22 09:22:34.137304864 +0000 UTC m=+998.459491978" watchObservedRunningTime="2026-01-22 09:22:34.1415657 +0000 UTC m=+998.463752824" Jan 22 09:22:34 crc kubenswrapper[4811]: I0122 09:22:34.433770 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:22:36 crc kubenswrapper[4811]: I0122 09:22:36.138905 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="10556691-8d2e-434b-b43f-447a3c122f37" containerName="ceilometer-central-agent" 
containerID="cri-o://558a905a05ab0259bc4b20e03972ae5f0b5347363258b525317494b7f0c0bc9a" gracePeriod=30 Jan 22 09:22:36 crc kubenswrapper[4811]: I0122 09:22:36.138952 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="10556691-8d2e-434b-b43f-447a3c122f37" containerName="sg-core" containerID="cri-o://e42e1c803d0aa27c252b883b9afc445237cb02674b2d9e39e48bd5f88fb2d14d" gracePeriod=30 Jan 22 09:22:36 crc kubenswrapper[4811]: I0122 09:22:36.138931 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="10556691-8d2e-434b-b43f-447a3c122f37" containerName="proxy-httpd" containerID="cri-o://1778041c95dc1c837c21cc15d5d98c3f3c7eb91aa43268d1a3862ffbf600993e" gracePeriod=30 Jan 22 09:22:36 crc kubenswrapper[4811]: I0122 09:22:36.138936 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="10556691-8d2e-434b-b43f-447a3c122f37" containerName="ceilometer-notification-agent" containerID="cri-o://61dbdfeef6f008ab56058ce8f11c449091b8b5e3324e88b2b604762ec1543812" gracePeriod=30 Jan 22 09:22:37 crc kubenswrapper[4811]: I0122 09:22:37.148769 4811 generic.go:334] "Generic (PLEG): container finished" podID="10556691-8d2e-434b-b43f-447a3c122f37" containerID="1778041c95dc1c837c21cc15d5d98c3f3c7eb91aa43268d1a3862ffbf600993e" exitCode=0 Jan 22 09:22:37 crc kubenswrapper[4811]: I0122 09:22:37.148804 4811 generic.go:334] "Generic (PLEG): container finished" podID="10556691-8d2e-434b-b43f-447a3c122f37" containerID="e42e1c803d0aa27c252b883b9afc445237cb02674b2d9e39e48bd5f88fb2d14d" exitCode=2 Jan 22 09:22:37 crc kubenswrapper[4811]: I0122 09:22:37.148811 4811 generic.go:334] "Generic (PLEG): container finished" podID="10556691-8d2e-434b-b43f-447a3c122f37" containerID="61dbdfeef6f008ab56058ce8f11c449091b8b5e3324e88b2b604762ec1543812" exitCode=0 Jan 22 09:22:37 crc kubenswrapper[4811]: I0122 09:22:37.148831 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"10556691-8d2e-434b-b43f-447a3c122f37","Type":"ContainerDied","Data":"1778041c95dc1c837c21cc15d5d98c3f3c7eb91aa43268d1a3862ffbf600993e"} Jan 22 09:22:37 crc kubenswrapper[4811]: I0122 09:22:37.148857 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"10556691-8d2e-434b-b43f-447a3c122f37","Type":"ContainerDied","Data":"e42e1c803d0aa27c252b883b9afc445237cb02674b2d9e39e48bd5f88fb2d14d"} Jan 22 09:22:37 crc kubenswrapper[4811]: I0122 09:22:37.148866 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"10556691-8d2e-434b-b43f-447a3c122f37","Type":"ContainerDied","Data":"61dbdfeef6f008ab56058ce8f11c449091b8b5e3324e88b2b604762ec1543812"} Jan 22 09:22:38 crc kubenswrapper[4811]: I0122 09:22:38.168013 4811 generic.go:334] "Generic (PLEG): container finished" podID="10556691-8d2e-434b-b43f-447a3c122f37" containerID="558a905a05ab0259bc4b20e03972ae5f0b5347363258b525317494b7f0c0bc9a" exitCode=0 Jan 22 09:22:38 crc kubenswrapper[4811]: I0122 09:22:38.168274 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"10556691-8d2e-434b-b43f-447a3c122f37","Type":"ContainerDied","Data":"558a905a05ab0259bc4b20e03972ae5f0b5347363258b525317494b7f0c0bc9a"} Jan 22 09:22:39 crc kubenswrapper[4811]: I0122 09:22:39.889824 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.061255 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/10556691-8d2e-434b-b43f-447a3c122f37-sg-core-conf-yaml\") pod \"10556691-8d2e-434b-b43f-447a3c122f37\" (UID: \"10556691-8d2e-434b-b43f-447a3c122f37\") " Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.061340 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10556691-8d2e-434b-b43f-447a3c122f37-scripts\") pod \"10556691-8d2e-434b-b43f-447a3c122f37\" (UID: \"10556691-8d2e-434b-b43f-447a3c122f37\") " Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.061388 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gxtn\" (UniqueName: \"kubernetes.io/projected/10556691-8d2e-434b-b43f-447a3c122f37-kube-api-access-8gxtn\") pod \"10556691-8d2e-434b-b43f-447a3c122f37\" (UID: \"10556691-8d2e-434b-b43f-447a3c122f37\") " Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.061412 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10556691-8d2e-434b-b43f-447a3c122f37-log-httpd\") pod \"10556691-8d2e-434b-b43f-447a3c122f37\" (UID: \"10556691-8d2e-434b-b43f-447a3c122f37\") " Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.061507 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10556691-8d2e-434b-b43f-447a3c122f37-config-data\") pod \"10556691-8d2e-434b-b43f-447a3c122f37\" (UID: \"10556691-8d2e-434b-b43f-447a3c122f37\") " Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.061542 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10556691-8d2e-434b-b43f-447a3c122f37-combined-ca-bundle\") pod \"10556691-8d2e-434b-b43f-447a3c122f37\" (UID: \"10556691-8d2e-434b-b43f-447a3c122f37\") " Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.061580 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10556691-8d2e-434b-b43f-447a3c122f37-run-httpd\") pod \"10556691-8d2e-434b-b43f-447a3c122f37\" (UID: \"10556691-8d2e-434b-b43f-447a3c122f37\") " Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.062598 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10556691-8d2e-434b-b43f-447a3c122f37-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "10556691-8d2e-434b-b43f-447a3c122f37" (UID: "10556691-8d2e-434b-b43f-447a3c122f37"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.062672 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10556691-8d2e-434b-b43f-447a3c122f37-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "10556691-8d2e-434b-b43f-447a3c122f37" (UID: "10556691-8d2e-434b-b43f-447a3c122f37"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.067109 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10556691-8d2e-434b-b43f-447a3c122f37-scripts" (OuterVolumeSpecName: "scripts") pod "10556691-8d2e-434b-b43f-447a3c122f37" (UID: "10556691-8d2e-434b-b43f-447a3c122f37"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.067396 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10556691-8d2e-434b-b43f-447a3c122f37-kube-api-access-8gxtn" (OuterVolumeSpecName: "kube-api-access-8gxtn") pod "10556691-8d2e-434b-b43f-447a3c122f37" (UID: "10556691-8d2e-434b-b43f-447a3c122f37"). InnerVolumeSpecName "kube-api-access-8gxtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.084545 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10556691-8d2e-434b-b43f-447a3c122f37-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "10556691-8d2e-434b-b43f-447a3c122f37" (UID: "10556691-8d2e-434b-b43f-447a3c122f37"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.114284 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10556691-8d2e-434b-b43f-447a3c122f37-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "10556691-8d2e-434b-b43f-447a3c122f37" (UID: "10556691-8d2e-434b-b43f-447a3c122f37"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.125651 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10556691-8d2e-434b-b43f-447a3c122f37-config-data" (OuterVolumeSpecName: "config-data") pod "10556691-8d2e-434b-b43f-447a3c122f37" (UID: "10556691-8d2e-434b-b43f-447a3c122f37"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.165129 4811 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/10556691-8d2e-434b-b43f-447a3c122f37-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.165160 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10556691-8d2e-434b-b43f-447a3c122f37-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.165172 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gxtn\" (UniqueName: \"kubernetes.io/projected/10556691-8d2e-434b-b43f-447a3c122f37-kube-api-access-8gxtn\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.165187 4811 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10556691-8d2e-434b-b43f-447a3c122f37-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.165197 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10556691-8d2e-434b-b43f-447a3c122f37-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.165206 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10556691-8d2e-434b-b43f-447a3c122f37-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.165216 4811 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10556691-8d2e-434b-b43f-447a3c122f37-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.186776 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"10556691-8d2e-434b-b43f-447a3c122f37","Type":"ContainerDied","Data":"958ff839a323175bd21337f3695e537d4d852f86dcf7bc90cea2103c9ea07dd4"} Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.186761 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.187101 4811 scope.go:117] "RemoveContainer" containerID="1778041c95dc1c837c21cc15d5d98c3f3c7eb91aa43268d1a3862ffbf600993e" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.188634 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4v42r" event={"ID":"67f94f00-3db6-4cd9-a4e4-5c466abb76c5","Type":"ContainerStarted","Data":"928ec6ee32df2dcc225afd92c486bbeb6f2818bcd79cbfeab616c5b1518ccd9d"} Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.208173 4811 scope.go:117] "RemoveContainer" containerID="e42e1c803d0aa27c252b883b9afc445237cb02674b2d9e39e48bd5f88fb2d14d" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.223609 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-4v42r" podStartSLOduration=2.017109118 podStartE2EDuration="10.223594806s" podCreationTimestamp="2026-01-22 09:22:30 +0000 UTC" firstStartedPulling="2026-01-22 09:22:31.438334771 +0000 UTC m=+995.760521894" lastFinishedPulling="2026-01-22 09:22:39.644820459 +0000 UTC m=+1003.967007582" observedRunningTime="2026-01-22 09:22:40.217373523 +0000 UTC m=+1004.539560646" watchObservedRunningTime="2026-01-22 09:22:40.223594806 +0000 UTC m=+1004.545781929" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.242857 4811 scope.go:117] "RemoveContainer" containerID="61dbdfeef6f008ab56058ce8f11c449091b8b5e3324e88b2b604762ec1543812" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.245132 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.253531 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.264360 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:22:40 crc kubenswrapper[4811]: E0122 09:22:40.266910 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10556691-8d2e-434b-b43f-447a3c122f37" containerName="ceilometer-central-agent" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.266936 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="10556691-8d2e-434b-b43f-447a3c122f37" containerName="ceilometer-central-agent" Jan 22 09:22:40 crc kubenswrapper[4811]: E0122 09:22:40.266951 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10556691-8d2e-434b-b43f-447a3c122f37" containerName="ceilometer-notification-agent" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.266956 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="10556691-8d2e-434b-b43f-447a3c122f37" containerName="ceilometer-notification-agent" Jan 22 09:22:40 crc kubenswrapper[4811]: E0122 09:22:40.266977 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10556691-8d2e-434b-b43f-447a3c122f37" containerName="proxy-httpd" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.266982 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="10556691-8d2e-434b-b43f-447a3c122f37" containerName="proxy-httpd" Jan 22 09:22:40 crc kubenswrapper[4811]: E0122 09:22:40.266995 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10556691-8d2e-434b-b43f-447a3c122f37" containerName="sg-core" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.267000 4811 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="10556691-8d2e-434b-b43f-447a3c122f37" containerName="sg-core" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.267282 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="10556691-8d2e-434b-b43f-447a3c122f37" containerName="ceilometer-central-agent" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.267296 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="10556691-8d2e-434b-b43f-447a3c122f37" containerName="ceilometer-notification-agent" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.267304 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="10556691-8d2e-434b-b43f-447a3c122f37" containerName="proxy-httpd" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.267322 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="10556691-8d2e-434b-b43f-447a3c122f37" containerName="sg-core" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.269397 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.270344 4811 scope.go:117] "RemoveContainer" containerID="558a905a05ab0259bc4b20e03972ae5f0b5347363258b525317494b7f0c0bc9a" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.272411 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.272557 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.276159 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.374034 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9c1e6c8-785c-4978-8053-23f809789e23-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e9c1e6c8-785c-4978-8053-23f809789e23\") " pod="openstack/ceilometer-0" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.374101 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9c1e6c8-785c-4978-8053-23f809789e23-scripts\") pod \"ceilometer-0\" (UID: \"e9c1e6c8-785c-4978-8053-23f809789e23\") " pod="openstack/ceilometer-0" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.374148 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgqbn\" (UniqueName: \"kubernetes.io/projected/e9c1e6c8-785c-4978-8053-23f809789e23-kube-api-access-zgqbn\") pod \"ceilometer-0\" (UID: \"e9c1e6c8-785c-4978-8053-23f809789e23\") " pod="openstack/ceilometer-0" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.374444 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9c1e6c8-785c-4978-8053-23f809789e23-config-data\") pod \"ceilometer-0\" (UID: \"e9c1e6c8-785c-4978-8053-23f809789e23\") " pod="openstack/ceilometer-0" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.374606 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9c1e6c8-785c-4978-8053-23f809789e23-log-httpd\") pod \"ceilometer-0\" (UID: \"e9c1e6c8-785c-4978-8053-23f809789e23\") " 
pod="openstack/ceilometer-0" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.374705 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9c1e6c8-785c-4978-8053-23f809789e23-run-httpd\") pod \"ceilometer-0\" (UID: \"e9c1e6c8-785c-4978-8053-23f809789e23\") " pod="openstack/ceilometer-0" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.374770 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e9c1e6c8-785c-4978-8053-23f809789e23-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e9c1e6c8-785c-4978-8053-23f809789e23\") " pod="openstack/ceilometer-0" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.478542 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9c1e6c8-785c-4978-8053-23f809789e23-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e9c1e6c8-785c-4978-8053-23f809789e23\") " pod="openstack/ceilometer-0" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.480009 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9c1e6c8-785c-4978-8053-23f809789e23-scripts\") pod \"ceilometer-0\" (UID: \"e9c1e6c8-785c-4978-8053-23f809789e23\") " pod="openstack/ceilometer-0" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.480533 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgqbn\" (UniqueName: \"kubernetes.io/projected/e9c1e6c8-785c-4978-8053-23f809789e23-kube-api-access-zgqbn\") pod \"ceilometer-0\" (UID: \"e9c1e6c8-785c-4978-8053-23f809789e23\") " pod="openstack/ceilometer-0" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.480675 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9c1e6c8-785c-4978-8053-23f809789e23-config-data\") pod \"ceilometer-0\" (UID: \"e9c1e6c8-785c-4978-8053-23f809789e23\") " pod="openstack/ceilometer-0" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.480756 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9c1e6c8-785c-4978-8053-23f809789e23-log-httpd\") pod \"ceilometer-0\" (UID: \"e9c1e6c8-785c-4978-8053-23f809789e23\") " pod="openstack/ceilometer-0" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.480804 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9c1e6c8-785c-4978-8053-23f809789e23-run-httpd\") pod \"ceilometer-0\" (UID: \"e9c1e6c8-785c-4978-8053-23f809789e23\") " pod="openstack/ceilometer-0" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.480840 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e9c1e6c8-785c-4978-8053-23f809789e23-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e9c1e6c8-785c-4978-8053-23f809789e23\") " pod="openstack/ceilometer-0" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.481333 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9c1e6c8-785c-4978-8053-23f809789e23-log-httpd\") pod \"ceilometer-0\" (UID: 
\"e9c1e6c8-785c-4978-8053-23f809789e23\") " pod="openstack/ceilometer-0" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.481419 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9c1e6c8-785c-4978-8053-23f809789e23-run-httpd\") pod \"ceilometer-0\" (UID: \"e9c1e6c8-785c-4978-8053-23f809789e23\") " pod="openstack/ceilometer-0" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.485733 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9c1e6c8-785c-4978-8053-23f809789e23-scripts\") pod \"ceilometer-0\" (UID: \"e9c1e6c8-785c-4978-8053-23f809789e23\") " pod="openstack/ceilometer-0" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.486466 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9c1e6c8-785c-4978-8053-23f809789e23-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e9c1e6c8-785c-4978-8053-23f809789e23\") " pod="openstack/ceilometer-0" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.486939 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e9c1e6c8-785c-4978-8053-23f809789e23-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e9c1e6c8-785c-4978-8053-23f809789e23\") " pod="openstack/ceilometer-0" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.487442 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9c1e6c8-785c-4978-8053-23f809789e23-config-data\") pod \"ceilometer-0\" (UID: \"e9c1e6c8-785c-4978-8053-23f809789e23\") " pod="openstack/ceilometer-0" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.498231 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgqbn\" (UniqueName: \"kubernetes.io/projected/e9c1e6c8-785c-4978-8053-23f809789e23-kube-api-access-zgqbn\") pod \"ceilometer-0\" (UID: \"e9c1e6c8-785c-4978-8053-23f809789e23\") " pod="openstack/ceilometer-0" Jan 22 09:22:40 crc kubenswrapper[4811]: I0122 09:22:40.592238 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0"
Jan 22 09:22:41 crc kubenswrapper[4811]: I0122 09:22:41.000313 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 09:22:41 crc kubenswrapper[4811]: I0122 09:22:41.201329 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e9c1e6c8-785c-4978-8053-23f809789e23","Type":"ContainerStarted","Data":"77ff4e0ded122ed2a18494b9744a20963f25cf5b9185162976c31f1de19f48d1"}
Jan 22 09:22:42 crc kubenswrapper[4811]: I0122 09:22:42.000976 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10556691-8d2e-434b-b43f-447a3c122f37" path="/var/lib/kubelet/pods/10556691-8d2e-434b-b43f-447a3c122f37/volumes"
Jan 22 09:22:42 crc kubenswrapper[4811]: I0122 09:22:42.210969 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e9c1e6c8-785c-4978-8053-23f809789e23","Type":"ContainerStarted","Data":"c9ac90b166e61dfffcbbe492fb91e0b8db5ccfc2e6da6dd2002fb66f630a8852"}
Jan 22 09:22:43 crc kubenswrapper[4811]: I0122 09:22:43.219023 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e9c1e6c8-785c-4978-8053-23f809789e23","Type":"ContainerStarted","Data":"c2b2c19cf1f5b522633e4d88febe0d8198acff87ec9a4995871dc26e41e9b6e2"}
Jan 22 09:22:43 crc kubenswrapper[4811]: I0122 09:22:43.219318 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e9c1e6c8-785c-4978-8053-23f809789e23","Type":"ContainerStarted","Data":"89381f32d9ce0812b9b272069e646036b69b498ed584a8a30ce2da796e494565"}
Jan 22 09:22:45 crc kubenswrapper[4811]: I0122 09:22:45.243309 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e9c1e6c8-785c-4978-8053-23f809789e23","Type":"ContainerStarted","Data":"064c4e3a661f91b4a6e7a9c49315caef2ae118d01e452d7ce04335343afdcf1a"}
Jan 22 09:22:45 crc kubenswrapper[4811]: I0122 09:22:45.244051 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 22 09:22:45 crc kubenswrapper[4811]: I0122 09:22:45.265284 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.112455457 podStartE2EDuration="5.265260857s" podCreationTimestamp="2026-01-22 09:22:40 +0000 UTC" firstStartedPulling="2026-01-22 09:22:41.014956531 +0000 UTC m=+1005.337143653" lastFinishedPulling="2026-01-22 09:22:44.167761929 +0000 UTC m=+1008.489949053" observedRunningTime="2026-01-22 09:22:45.260450105 +0000 UTC m=+1009.582637228" watchObservedRunningTime="2026-01-22 09:22:45.265260857 +0000 UTC m=+1009.587447981"
Jan 22 09:22:46 crc kubenswrapper[4811]: I0122 09:22:46.256275 4811 generic.go:334] "Generic (PLEG): container finished" podID="67f94f00-3db6-4cd9-a4e4-5c466abb76c5" containerID="928ec6ee32df2dcc225afd92c486bbeb6f2818bcd79cbfeab616c5b1518ccd9d" exitCode=0
Jan 22 09:22:46 crc kubenswrapper[4811]: I0122 09:22:46.256351 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4v42r" event={"ID":"67f94f00-3db6-4cd9-a4e4-5c466abb76c5","Type":"ContainerDied","Data":"928ec6ee32df2dcc225afd92c486bbeb6f2818bcd79cbfeab616c5b1518ccd9d"}
Jan 22 09:22:47 crc kubenswrapper[4811]: I0122 09:22:47.547324 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4v42r"
Jan 22 09:22:47 crc kubenswrapper[4811]: I0122 09:22:47.723568 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67f94f00-3db6-4cd9-a4e4-5c466abb76c5-scripts\") pod \"67f94f00-3db6-4cd9-a4e4-5c466abb76c5\" (UID: \"67f94f00-3db6-4cd9-a4e4-5c466abb76c5\") "
Jan 22 09:22:47 crc kubenswrapper[4811]: I0122 09:22:47.723640 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7x6vr\" (UniqueName: \"kubernetes.io/projected/67f94f00-3db6-4cd9-a4e4-5c466abb76c5-kube-api-access-7x6vr\") pod \"67f94f00-3db6-4cd9-a4e4-5c466abb76c5\" (UID: \"67f94f00-3db6-4cd9-a4e4-5c466abb76c5\") "
Jan 22 09:22:47 crc kubenswrapper[4811]: I0122 09:22:47.723748 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67f94f00-3db6-4cd9-a4e4-5c466abb76c5-config-data\") pod \"67f94f00-3db6-4cd9-a4e4-5c466abb76c5\" (UID: \"67f94f00-3db6-4cd9-a4e4-5c466abb76c5\") "
Jan 22 09:22:47 crc kubenswrapper[4811]: I0122 09:22:47.723858 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67f94f00-3db6-4cd9-a4e4-5c466abb76c5-combined-ca-bundle\") pod \"67f94f00-3db6-4cd9-a4e4-5c466abb76c5\" (UID: \"67f94f00-3db6-4cd9-a4e4-5c466abb76c5\") "
Jan 22 09:22:47 crc kubenswrapper[4811]: I0122 09:22:47.729194 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67f94f00-3db6-4cd9-a4e4-5c466abb76c5-scripts" (OuterVolumeSpecName: "scripts") pod "67f94f00-3db6-4cd9-a4e4-5c466abb76c5" (UID: "67f94f00-3db6-4cd9-a4e4-5c466abb76c5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:22:47 crc kubenswrapper[4811]: I0122 09:22:47.742356 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67f94f00-3db6-4cd9-a4e4-5c466abb76c5-kube-api-access-7x6vr" (OuterVolumeSpecName: "kube-api-access-7x6vr") pod "67f94f00-3db6-4cd9-a4e4-5c466abb76c5" (UID: "67f94f00-3db6-4cd9-a4e4-5c466abb76c5"). InnerVolumeSpecName "kube-api-access-7x6vr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:22:47 crc kubenswrapper[4811]: I0122 09:22:47.744184 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67f94f00-3db6-4cd9-a4e4-5c466abb76c5-config-data" (OuterVolumeSpecName: "config-data") pod "67f94f00-3db6-4cd9-a4e4-5c466abb76c5" (UID: "67f94f00-3db6-4cd9-a4e4-5c466abb76c5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:22:47 crc kubenswrapper[4811]: I0122 09:22:47.745792 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67f94f00-3db6-4cd9-a4e4-5c466abb76c5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67f94f00-3db6-4cd9-a4e4-5c466abb76c5" (UID: "67f94f00-3db6-4cd9-a4e4-5c466abb76c5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:22:47 crc kubenswrapper[4811]: I0122 09:22:47.825471 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67f94f00-3db6-4cd9-a4e4-5c466abb76c5-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 09:22:47 crc kubenswrapper[4811]: I0122 09:22:47.825515 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67f94f00-3db6-4cd9-a4e4-5c466abb76c5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 09:22:47 crc kubenswrapper[4811]: I0122 09:22:47.825528 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67f94f00-3db6-4cd9-a4e4-5c466abb76c5-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 09:22:47 crc kubenswrapper[4811]: I0122 09:22:47.825539 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7x6vr\" (UniqueName: \"kubernetes.io/projected/67f94f00-3db6-4cd9-a4e4-5c466abb76c5-kube-api-access-7x6vr\") on node \"crc\" DevicePath \"\""
Jan 22 09:22:48 crc kubenswrapper[4811]: I0122 09:22:48.272135 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4v42r" event={"ID":"67f94f00-3db6-4cd9-a4e4-5c466abb76c5","Type":"ContainerDied","Data":"a109e4560cb65a5491be00ff660fdbd96cf10059d73035aed62f12f0a61f4609"}
Jan 22 09:22:48 crc kubenswrapper[4811]: I0122 09:22:48.272434 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a109e4560cb65a5491be00ff660fdbd96cf10059d73035aed62f12f0a61f4609"
Jan 22 09:22:48 crc kubenswrapper[4811]: I0122 09:22:48.272165 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4v42r"
Jan 22 09:22:48 crc kubenswrapper[4811]: I0122 09:22:48.347510 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 22 09:22:48 crc kubenswrapper[4811]: E0122 09:22:48.348000 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67f94f00-3db6-4cd9-a4e4-5c466abb76c5" containerName="nova-cell0-conductor-db-sync"
Jan 22 09:22:48 crc kubenswrapper[4811]: I0122 09:22:48.348021 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="67f94f00-3db6-4cd9-a4e4-5c466abb76c5" containerName="nova-cell0-conductor-db-sync"
Jan 22 09:22:48 crc kubenswrapper[4811]: I0122 09:22:48.348233 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="67f94f00-3db6-4cd9-a4e4-5c466abb76c5" containerName="nova-cell0-conductor-db-sync"
Jan 22 09:22:48 crc kubenswrapper[4811]: I0122 09:22:48.348794 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 22 09:22:48 crc kubenswrapper[4811]: I0122 09:22:48.352422 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Jan 22 09:22:48 crc kubenswrapper[4811]: I0122 09:22:48.352587 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-76dll"
Jan 22 09:22:48 crc kubenswrapper[4811]: I0122 09:22:48.359174 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 22 09:22:48 crc kubenswrapper[4811]: I0122 09:22:48.373250 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-s6xs2"]
Jan 22 09:22:48 crc kubenswrapper[4811]: I0122 09:22:48.375229 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s6xs2"
Jan 22 09:22:48 crc kubenswrapper[4811]: I0122 09:22:48.411707 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s6xs2"]
Jan 22 09:22:48 crc kubenswrapper[4811]: I0122 09:22:48.536315 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb52292c-6627-4dbb-a981-f97886db6f7a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"bb52292c-6627-4dbb-a981-f97886db6f7a\") " pod="openstack/nova-cell0-conductor-0"
Jan 22 09:22:48 crc kubenswrapper[4811]: I0122 09:22:48.536584 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb52292c-6627-4dbb-a981-f97886db6f7a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"bb52292c-6627-4dbb-a981-f97886db6f7a\") " pod="openstack/nova-cell0-conductor-0"
Jan 22 09:22:48 crc kubenswrapper[4811]: I0122 09:22:48.536740 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d3a96c1-245e-40ed-85a2-d46fe505f247-utilities\") pod \"certified-operators-s6xs2\" (UID: \"4d3a96c1-245e-40ed-85a2-d46fe505f247\") " pod="openshift-marketplace/certified-operators-s6xs2"
Jan 22 09:22:48 crc kubenswrapper[4811]: I0122 09:22:48.536929 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m568j\" (UniqueName: \"kubernetes.io/projected/bb52292c-6627-4dbb-a981-f97886db6f7a-kube-api-access-m568j\") pod \"nova-cell0-conductor-0\" (UID: \"bb52292c-6627-4dbb-a981-f97886db6f7a\") " pod="openstack/nova-cell0-conductor-0"
Jan 22 09:22:48 crc kubenswrapper[4811]: I0122 09:22:48.537106 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d3a96c1-245e-40ed-85a2-d46fe505f247-catalog-content\") pod \"certified-operators-s6xs2\" (UID: \"4d3a96c1-245e-40ed-85a2-d46fe505f247\") " pod="openshift-marketplace/certified-operators-s6xs2"
Jan 22 09:22:48 crc kubenswrapper[4811]: I0122 09:22:48.537244 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djzrg\" (UniqueName: \"kubernetes.io/projected/4d3a96c1-245e-40ed-85a2-d46fe505f247-kube-api-access-djzrg\") pod \"certified-operators-s6xs2\" (UID: \"4d3a96c1-245e-40ed-85a2-d46fe505f247\") " pod="openshift-marketplace/certified-operators-s6xs2"
Jan 22 09:22:48 crc kubenswrapper[4811]: I0122 09:22:48.639051 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d3a96c1-245e-40ed-85a2-d46fe505f247-catalog-content\") pod \"certified-operators-s6xs2\" (UID: \"4d3a96c1-245e-40ed-85a2-d46fe505f247\") " pod="openshift-marketplace/certified-operators-s6xs2"
Jan 22 09:22:48 crc kubenswrapper[4811]: I0122 09:22:48.639109 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djzrg\" (UniqueName: \"kubernetes.io/projected/4d3a96c1-245e-40ed-85a2-d46fe505f247-kube-api-access-djzrg\") pod \"certified-operators-s6xs2\" (UID: \"4d3a96c1-245e-40ed-85a2-d46fe505f247\") " pod="openshift-marketplace/certified-operators-s6xs2"
Jan 22 09:22:48 crc kubenswrapper[4811]: I0122 09:22:48.639160 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb52292c-6627-4dbb-a981-f97886db6f7a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"bb52292c-6627-4dbb-a981-f97886db6f7a\") " pod="openstack/nova-cell0-conductor-0"
Jan 22 09:22:48 crc kubenswrapper[4811]: I0122 09:22:48.639178 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb52292c-6627-4dbb-a981-f97886db6f7a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"bb52292c-6627-4dbb-a981-f97886db6f7a\") " pod="openstack/nova-cell0-conductor-0"
Jan 22 09:22:48 crc kubenswrapper[4811]: I0122 09:22:48.639201 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d3a96c1-245e-40ed-85a2-d46fe505f247-utilities\") pod \"certified-operators-s6xs2\" (UID: \"4d3a96c1-245e-40ed-85a2-d46fe505f247\") " pod="openshift-marketplace/certified-operators-s6xs2"
Jan 22 09:22:48 crc kubenswrapper[4811]: I0122 09:22:48.639244 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m568j\" (UniqueName: \"kubernetes.io/projected/bb52292c-6627-4dbb-a981-f97886db6f7a-kube-api-access-m568j\") pod \"nova-cell0-conductor-0\" (UID: \"bb52292c-6627-4dbb-a981-f97886db6f7a\") " pod="openstack/nova-cell0-conductor-0"
Jan 22 09:22:48 crc kubenswrapper[4811]: I0122 09:22:48.640178 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d3a96c1-245e-40ed-85a2-d46fe505f247-catalog-content\") pod \"certified-operators-s6xs2\" (UID: \"4d3a96c1-245e-40ed-85a2-d46fe505f247\") " pod="openshift-marketplace/certified-operators-s6xs2"
Jan 22 09:22:48 crc kubenswrapper[4811]: I0122 09:22:48.641122 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d3a96c1-245e-40ed-85a2-d46fe505f247-utilities\") pod \"certified-operators-s6xs2\" (UID: \"4d3a96c1-245e-40ed-85a2-d46fe505f247\") " pod="openshift-marketplace/certified-operators-s6xs2"
Jan 22 09:22:48 crc kubenswrapper[4811]: I0122 09:22:48.652744 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb52292c-6627-4dbb-a981-f97886db6f7a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"bb52292c-6627-4dbb-a981-f97886db6f7a\") " pod="openstack/nova-cell0-conductor-0"
Jan 22 09:22:48 crc kubenswrapper[4811]: I0122 09:22:48.652751 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb52292c-6627-4dbb-a981-f97886db6f7a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"bb52292c-6627-4dbb-a981-f97886db6f7a\") " pod="openstack/nova-cell0-conductor-0"
Jan 22 09:22:48 crc kubenswrapper[4811]: I0122 09:22:48.659704 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m568j\" (UniqueName: \"kubernetes.io/projected/bb52292c-6627-4dbb-a981-f97886db6f7a-kube-api-access-m568j\") pod \"nova-cell0-conductor-0\" (UID: \"bb52292c-6627-4dbb-a981-f97886db6f7a\") " pod="openstack/nova-cell0-conductor-0"
Jan 22 09:22:48 crc kubenswrapper[4811]: I0122 09:22:48.660234 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djzrg\" (UniqueName: \"kubernetes.io/projected/4d3a96c1-245e-40ed-85a2-d46fe505f247-kube-api-access-djzrg\") pod \"certified-operators-s6xs2\" (UID: \"4d3a96c1-245e-40ed-85a2-d46fe505f247\") " pod="openshift-marketplace/certified-operators-s6xs2"
Jan 22 09:22:48 crc kubenswrapper[4811]: I0122 09:22:48.670908 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 22 09:22:48 crc kubenswrapper[4811]: I0122 09:22:48.695049 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s6xs2"
Jan 22 09:22:49 crc kubenswrapper[4811]: I0122 09:22:49.066363 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 22 09:22:49 crc kubenswrapper[4811]: I0122 09:22:49.120799 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s6xs2"]
Jan 22 09:22:49 crc kubenswrapper[4811]: W0122 09:22:49.125581 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d3a96c1_245e_40ed_85a2_d46fe505f247.slice/crio-eb85ce1ccb9cc9ce6acc4077548d9808d0fa2ec6cf15becf80761d1300de3561 WatchSource:0}: Error finding container eb85ce1ccb9cc9ce6acc4077548d9808d0fa2ec6cf15becf80761d1300de3561: Status 404 returned error can't find the container with id eb85ce1ccb9cc9ce6acc4077548d9808d0fa2ec6cf15becf80761d1300de3561
Jan 22 09:22:49 crc kubenswrapper[4811]: I0122 09:22:49.279797 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s6xs2" event={"ID":"4d3a96c1-245e-40ed-85a2-d46fe505f247","Type":"ContainerStarted","Data":"eb85ce1ccb9cc9ce6acc4077548d9808d0fa2ec6cf15becf80761d1300de3561"}
Jan 22 09:22:49 crc kubenswrapper[4811]: I0122 09:22:49.282764 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"bb52292c-6627-4dbb-a981-f97886db6f7a","Type":"ContainerStarted","Data":"fd5e169d0a589c9999d10ff94ee4e7ec4660268460e8b5b655f4e8a7930d7f84"}
Jan 22 09:22:50 crc kubenswrapper[4811]: I0122 09:22:50.292269 4811 generic.go:334] "Generic (PLEG): container finished" podID="4d3a96c1-245e-40ed-85a2-d46fe505f247" containerID="3ba9cfa95ef95a52f7aa91175c21c929c0250a0088eef268a40d84f4bce1cb00" exitCode=0
Jan 22 09:22:50 crc kubenswrapper[4811]: I0122 09:22:50.292452 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s6xs2" event={"ID":"4d3a96c1-245e-40ed-85a2-d46fe505f247","Type":"ContainerDied","Data":"3ba9cfa95ef95a52f7aa91175c21c929c0250a0088eef268a40d84f4bce1cb00"}
Jan 22 09:22:50 crc kubenswrapper[4811]: I0122 09:22:50.295501 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"bb52292c-6627-4dbb-a981-f97886db6f7a","Type":"ContainerStarted","Data":"becc05d2a6dc45b9f35b8de66d819181edd61ac9bcf9425d2bab1af2bfef975f"}
Jan 22 09:22:50 crc kubenswrapper[4811]: I0122 09:22:50.296162 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Jan 22 09:22:50 crc kubenswrapper[4811]: I0122 09:22:50.326741 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.326727129 podStartE2EDuration="2.326727129s" podCreationTimestamp="2026-01-22 09:22:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:22:50.323163709 +0000 UTC m=+1014.645350822" watchObservedRunningTime="2026-01-22 09:22:50.326727129 +0000 UTC m=+1014.648914252"
Jan 22 09:22:51 crc kubenswrapper[4811]: I0122 09:22:51.308882 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s6xs2" event={"ID":"4d3a96c1-245e-40ed-85a2-d46fe505f247","Type":"ContainerStarted","Data":"5d63fc719688d3d78105fc732e3cde7de333f5d561b5f0b30c3617615731d906"}
Jan 22 09:22:52 crc kubenswrapper[4811]: I0122 09:22:52.317162 4811 generic.go:334] "Generic (PLEG): container finished" podID="4d3a96c1-245e-40ed-85a2-d46fe505f247" containerID="5d63fc719688d3d78105fc732e3cde7de333f5d561b5f0b30c3617615731d906" exitCode=0
Jan 22 09:22:52 crc kubenswrapper[4811]: I0122 09:22:52.317243 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s6xs2" event={"ID":"4d3a96c1-245e-40ed-85a2-d46fe505f247","Type":"ContainerDied","Data":"5d63fc719688d3d78105fc732e3cde7de333f5d561b5f0b30c3617615731d906"}
Jan 22 09:22:53 crc kubenswrapper[4811]: I0122 09:22:53.326020 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s6xs2" event={"ID":"4d3a96c1-245e-40ed-85a2-d46fe505f247","Type":"ContainerStarted","Data":"f7cf5a285917f7adaba2fa743b43cafe16bbc9b14904886c728ef1bd9d3b2f16"}
Jan 22 09:22:53 crc kubenswrapper[4811]: I0122 09:22:53.346507 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s6xs2" podStartSLOduration=2.77226756 podStartE2EDuration="5.346492328s" podCreationTimestamp="2026-01-22 09:22:48 +0000 UTC" firstStartedPulling="2026-01-22 09:22:50.29595182 +0000 UTC m=+1014.618138943" lastFinishedPulling="2026-01-22 09:22:52.870176588 +0000 UTC m=+1017.192363711" observedRunningTime="2026-01-22 09:22:53.339247887 +0000 UTC m=+1017.661435009" watchObservedRunningTime="2026-01-22 09:22:53.346492328 +0000 UTC m=+1017.668679452"
Jan 22 09:22:58 crc kubenswrapper[4811]: I0122 09:22:58.690463 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Jan 22 09:22:58 crc kubenswrapper[4811]: I0122 09:22:58.696048 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-s6xs2"
Jan 22 09:22:58 crc kubenswrapper[4811]: I0122 09:22:58.696078 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-s6xs2"
Jan 22 09:22:58 crc kubenswrapper[4811]: I0122 09:22:58.730338 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s6xs2"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.091949 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-fpztn"]
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.092923 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-fpztn"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.094636 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.101552 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.103286 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-fpztn"]
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.117968 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60d099f9-c44f-4d8c-9983-d478c424eff9-scripts\") pod \"nova-cell0-cell-mapping-fpztn\" (UID: \"60d099f9-c44f-4d8c-9983-d478c424eff9\") " pod="openstack/nova-cell0-cell-mapping-fpztn"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.118126 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60d099f9-c44f-4d8c-9983-d478c424eff9-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-fpztn\" (UID: \"60d099f9-c44f-4d8c-9983-d478c424eff9\") " pod="openstack/nova-cell0-cell-mapping-fpztn"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.118259 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60d099f9-c44f-4d8c-9983-d478c424eff9-config-data\") pod \"nova-cell0-cell-mapping-fpztn\" (UID: \"60d099f9-c44f-4d8c-9983-d478c424eff9\") " pod="openstack/nova-cell0-cell-mapping-fpztn"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.118283 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm6hb\" (UniqueName: \"kubernetes.io/projected/60d099f9-c44f-4d8c-9983-d478c424eff9-kube-api-access-fm6hb\") pod \"nova-cell0-cell-mapping-fpztn\" (UID: \"60d099f9-c44f-4d8c-9983-d478c424eff9\") " pod="openstack/nova-cell0-cell-mapping-fpztn"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.219467 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60d099f9-c44f-4d8c-9983-d478c424eff9-config-data\") pod \"nova-cell0-cell-mapping-fpztn\" (UID: \"60d099f9-c44f-4d8c-9983-d478c424eff9\") " pod="openstack/nova-cell0-cell-mapping-fpztn"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.219770 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm6hb\" (UniqueName: \"kubernetes.io/projected/60d099f9-c44f-4d8c-9983-d478c424eff9-kube-api-access-fm6hb\") pod \"nova-cell0-cell-mapping-fpztn\" (UID: \"60d099f9-c44f-4d8c-9983-d478c424eff9\") " pod="openstack/nova-cell0-cell-mapping-fpztn"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.220007 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60d099f9-c44f-4d8c-9983-d478c424eff9-scripts\") pod \"nova-cell0-cell-mapping-fpztn\" (UID: \"60d099f9-c44f-4d8c-9983-d478c424eff9\") " pod="openstack/nova-cell0-cell-mapping-fpztn"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.220516 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60d099f9-c44f-4d8c-9983-d478c424eff9-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-fpztn\" (UID: \"60d099f9-c44f-4d8c-9983-d478c424eff9\") " pod="openstack/nova-cell0-cell-mapping-fpztn"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.220801 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.222001 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.224942 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60d099f9-c44f-4d8c-9983-d478c424eff9-config-data\") pod \"nova-cell0-cell-mapping-fpztn\" (UID: \"60d099f9-c44f-4d8c-9983-d478c424eff9\") " pod="openstack/nova-cell0-cell-mapping-fpztn"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.225244 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60d099f9-c44f-4d8c-9983-d478c424eff9-scripts\") pod \"nova-cell0-cell-mapping-fpztn\" (UID: \"60d099f9-c44f-4d8c-9983-d478c424eff9\") " pod="openstack/nova-cell0-cell-mapping-fpztn"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.235327 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.235417 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60d099f9-c44f-4d8c-9983-d478c424eff9-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-fpztn\" (UID: \"60d099f9-c44f-4d8c-9983-d478c424eff9\") " pod="openstack/nova-cell0-cell-mapping-fpztn"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.241133 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm6hb\" (UniqueName: \"kubernetes.io/projected/60d099f9-c44f-4d8c-9983-d478c424eff9-kube-api-access-fm6hb\") pod \"nova-cell0-cell-mapping-fpztn\" (UID: \"60d099f9-c44f-4d8c-9983-d478c424eff9\") " pod="openstack/nova-cell0-cell-mapping-fpztn"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.264430 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.265071 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.265948 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.268802 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.288488 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.321729 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87478694-0c10-49d8-9d91-1ce054e05dee-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"87478694-0c10-49d8-9d91-1ce054e05dee\") " pod="openstack/nova-api-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.321780 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfebf11f-55b3-459e-b0c3-36c20de9bd1d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"dfebf11f-55b3-459e-b0c3-36c20de9bd1d\") " pod="openstack/nova-scheduler-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.321800 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dblv\" (UniqueName: \"kubernetes.io/projected/87478694-0c10-49d8-9d91-1ce054e05dee-kube-api-access-7dblv\") pod \"nova-api-0\" (UID: \"87478694-0c10-49d8-9d91-1ce054e05dee\") " pod="openstack/nova-api-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.321864 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87478694-0c10-49d8-9d91-1ce054e05dee-config-data\") pod \"nova-api-0\" (UID: \"87478694-0c10-49d8-9d91-1ce054e05dee\") " pod="openstack/nova-api-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.321896 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87478694-0c10-49d8-9d91-1ce054e05dee-logs\") pod \"nova-api-0\" (UID: \"87478694-0c10-49d8-9d91-1ce054e05dee\") " pod="openstack/nova-api-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.322034 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vwhq\" (UniqueName: \"kubernetes.io/projected/dfebf11f-55b3-459e-b0c3-36c20de9bd1d-kube-api-access-7vwhq\") pod \"nova-scheduler-0\" (UID: \"dfebf11f-55b3-459e-b0c3-36c20de9bd1d\") " pod="openstack/nova-scheduler-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.322147 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfebf11f-55b3-459e-b0c3-36c20de9bd1d-config-data\") pod \"nova-scheduler-0\" (UID: \"dfebf11f-55b3-459e-b0c3-36c20de9bd1d\") " pod="openstack/nova-scheduler-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.380823 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.381971 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.387315 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.407362 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-fpztn"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.415058 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.423285 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87478694-0c10-49d8-9d91-1ce054e05dee-config-data\") pod \"nova-api-0\" (UID: \"87478694-0c10-49d8-9d91-1ce054e05dee\") " pod="openstack/nova-api-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.423322 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3da71c17-c624-4ba1-b5b2-dd5410951026-logs\") pod \"nova-metadata-0\" (UID: \"3da71c17-c624-4ba1-b5b2-dd5410951026\") " pod="openstack/nova-metadata-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.423345 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87478694-0c10-49d8-9d91-1ce054e05dee-logs\") pod \"nova-api-0\" (UID: \"87478694-0c10-49d8-9d91-1ce054e05dee\") " pod="openstack/nova-api-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.423392 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vwhq\" (UniqueName: \"kubernetes.io/projected/dfebf11f-55b3-459e-b0c3-36c20de9bd1d-kube-api-access-7vwhq\") pod \"nova-scheduler-0\" (UID: \"dfebf11f-55b3-459e-b0c3-36c20de9bd1d\") " pod="openstack/nova-scheduler-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.423434 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwmpj\" (UniqueName: \"kubernetes.io/projected/3da71c17-c624-4ba1-b5b2-dd5410951026-kube-api-access-nwmpj\") pod \"nova-metadata-0\" (UID: \"3da71c17-c624-4ba1-b5b2-dd5410951026\") " pod="openstack/nova-metadata-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.423452 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfebf11f-55b3-459e-b0c3-36c20de9bd1d-config-data\") pod \"nova-scheduler-0\" (UID: \"dfebf11f-55b3-459e-b0c3-36c20de9bd1d\") " pod="openstack/nova-scheduler-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.423476 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da71c17-c624-4ba1-b5b2-dd5410951026-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3da71c17-c624-4ba1-b5b2-dd5410951026\") " pod="openstack/nova-metadata-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.423507 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87478694-0c10-49d8-9d91-1ce054e05dee-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"87478694-0c10-49d8-9d91-1ce054e05dee\") " pod="openstack/nova-api-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.423524 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da71c17-c624-4ba1-b5b2-dd5410951026-config-data\") pod \"nova-metadata-0\" (UID: \"3da71c17-c624-4ba1-b5b2-dd5410951026\") " pod="openstack/nova-metadata-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.423556 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfebf11f-55b3-459e-b0c3-36c20de9bd1d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"dfebf11f-55b3-459e-b0c3-36c20de9bd1d\") " pod="openstack/nova-scheduler-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.423571 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dblv\" (UniqueName: \"kubernetes.io/projected/87478694-0c10-49d8-9d91-1ce054e05dee-kube-api-access-7dblv\") pod \"nova-api-0\" (UID: \"87478694-0c10-49d8-9d91-1ce054e05dee\") " pod="openstack/nova-api-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.424398 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87478694-0c10-49d8-9d91-1ce054e05dee-logs\") pod \"nova-api-0\" (UID: \"87478694-0c10-49d8-9d91-1ce054e05dee\") " pod="openstack/nova-api-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.424537 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s6xs2"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.447333 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87478694-0c10-49d8-9d91-1ce054e05dee-config-data\") pod \"nova-api-0\" (UID: \"87478694-0c10-49d8-9d91-1ce054e05dee\") " pod="openstack/nova-api-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.447471 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfebf11f-55b3-459e-b0c3-36c20de9bd1d-config-data\") pod \"nova-scheduler-0\" (UID: \"dfebf11f-55b3-459e-b0c3-36c20de9bd1d\") " pod="openstack/nova-scheduler-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.448062 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87478694-0c10-49d8-9d91-1ce054e05dee-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"87478694-0c10-49d8-9d91-1ce054e05dee\") " pod="openstack/nova-api-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.454508 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vwhq\" (UniqueName: \"kubernetes.io/projected/dfebf11f-55b3-459e-b0c3-36c20de9bd1d-kube-api-access-7vwhq\") pod \"nova-scheduler-0\" (UID: \"dfebf11f-55b3-459e-b0c3-36c20de9bd1d\") " pod="openstack/nova-scheduler-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.460060 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dblv\" (UniqueName: \"kubernetes.io/projected/87478694-0c10-49d8-9d91-1ce054e05dee-kube-api-access-7dblv\") pod \"nova-api-0\" (UID: \"87478694-0c10-49d8-9d91-1ce054e05dee\") " pod="openstack/nova-api-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.476198 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.476656 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfebf11f-55b3-459e-b0c3-36c20de9bd1d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"dfebf11f-55b3-459e-b0c3-36c20de9bd1d\") " pod="openstack/nova-scheduler-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.484723 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.494151 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.508712 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.524766 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3da71c17-c624-4ba1-b5b2-dd5410951026-logs\") pod \"nova-metadata-0\" (UID: \"3da71c17-c624-4ba1-b5b2-dd5410951026\") " pod="openstack/nova-metadata-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.524953 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwmpj\" (UniqueName: \"kubernetes.io/projected/3da71c17-c624-4ba1-b5b2-dd5410951026-kube-api-access-nwmpj\") pod \"nova-metadata-0\" (UID: \"3da71c17-c624-4ba1-b5b2-dd5410951026\") " pod="openstack/nova-metadata-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.524996 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da71c17-c624-4ba1-b5b2-dd5410951026-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3da71c17-c624-4ba1-b5b2-dd5410951026\") " pod="openstack/nova-metadata-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.525047 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da71c17-c624-4ba1-b5b2-dd5410951026-config-data\") pod \"nova-metadata-0\" (UID: \"3da71c17-c624-4ba1-b5b2-dd5410951026\") " pod="openstack/nova-metadata-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.525891 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3da71c17-c624-4ba1-b5b2-dd5410951026-logs\") pod \"nova-metadata-0\" (UID: \"3da71c17-c624-4ba1-b5b2-dd5410951026\") " pod="openstack/nova-metadata-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.528467 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da71c17-c624-4ba1-b5b2-dd5410951026-config-data\") pod \"nova-metadata-0\" (UID: \"3da71c17-c624-4ba1-b5b2-dd5410951026\") " pod="openstack/nova-metadata-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.558299 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwmpj\" (UniqueName: \"kubernetes.io/projected/3da71c17-c624-4ba1-b5b2-dd5410951026-kube-api-access-nwmpj\") pod \"nova-metadata-0\" (UID: \"3da71c17-c624-4ba1-b5b2-dd5410951026\") " pod="openstack/nova-metadata-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.571968 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da71c17-c624-4ba1-b5b2-dd5410951026-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3da71c17-c624-4ba1-b5b2-dd5410951026\") " pod="openstack/nova-metadata-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.575618 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59fd54bbff-sw5lk"]
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.585934 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59fd54bbff-sw5lk"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.604734 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.619404 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.622884 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59fd54bbff-sw5lk"]
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.632917 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2nzz\" (UniqueName: \"kubernetes.io/projected/a4121565-e932-4f2e-a6f1-62ecfeaab63b-kube-api-access-g2nzz\") pod \"nova-cell1-novncproxy-0\" (UID: \"a4121565-e932-4f2e-a6f1-62ecfeaab63b\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.633033 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4121565-e932-4f2e-a6f1-62ecfeaab63b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a4121565-e932-4f2e-a6f1-62ecfeaab63b\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.633131 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4121565-e932-4f2e-a6f1-62ecfeaab63b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a4121565-e932-4f2e-a6f1-62ecfeaab63b\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.690330 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s6xs2"]
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.718942 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.735634 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2nzz\" (UniqueName: \"kubernetes.io/projected/a4121565-e932-4f2e-a6f1-62ecfeaab63b-kube-api-access-g2nzz\") pod \"nova-cell1-novncproxy-0\" (UID: \"a4121565-e932-4f2e-a6f1-62ecfeaab63b\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.735697 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebb2968f-70cb-4b6b-8fab-9647ddaa5e93-ovsdbserver-nb\") pod \"dnsmasq-dns-59fd54bbff-sw5lk\" (UID: \"ebb2968f-70cb-4b6b-8fab-9647ddaa5e93\") " pod="openstack/dnsmasq-dns-59fd54bbff-sw5lk"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.735721 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebb2968f-70cb-4b6b-8fab-9647ddaa5e93-ovsdbserver-sb\") pod \"dnsmasq-dns-59fd54bbff-sw5lk\" (UID: \"ebb2968f-70cb-4b6b-8fab-9647ddaa5e93\") " pod="openstack/dnsmasq-dns-59fd54bbff-sw5lk"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.735818 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l27gj\" (UniqueName: \"kubernetes.io/projected/ebb2968f-70cb-4b6b-8fab-9647ddaa5e93-kube-api-access-l27gj\") pod \"dnsmasq-dns-59fd54bbff-sw5lk\" (UID: \"ebb2968f-70cb-4b6b-8fab-9647ddaa5e93\") " pod="openstack/dnsmasq-dns-59fd54bbff-sw5lk"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.735842 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4121565-e932-4f2e-a6f1-62ecfeaab63b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a4121565-e932-4f2e-a6f1-62ecfeaab63b\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.735864 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebb2968f-70cb-4b6b-8fab-9647ddaa5e93-dns-svc\") pod \"dnsmasq-dns-59fd54bbff-sw5lk\" (UID: \"ebb2968f-70cb-4b6b-8fab-9647ddaa5e93\") " pod="openstack/dnsmasq-dns-59fd54bbff-sw5lk"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.735957 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebb2968f-70cb-4b6b-8fab-9647ddaa5e93-config\") pod \"dnsmasq-dns-59fd54bbff-sw5lk\" (UID: \"ebb2968f-70cb-4b6b-8fab-9647ddaa5e93\") " pod="openstack/dnsmasq-dns-59fd54bbff-sw5lk"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.735978 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4121565-e932-4f2e-a6f1-62ecfeaab63b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a4121565-e932-4f2e-a6f1-62ecfeaab63b\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.741708 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4121565-e932-4f2e-a6f1-62ecfeaab63b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a4121565-e932-4f2e-a6f1-62ecfeaab63b\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.741894 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4121565-e932-4f2e-a6f1-62ecfeaab63b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a4121565-e932-4f2e-a6f1-62ecfeaab63b\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.754995 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2nzz\" (UniqueName: \"kubernetes.io/projected/a4121565-e932-4f2e-a6f1-62ecfeaab63b-kube-api-access-g2nzz\") pod \"nova-cell1-novncproxy-0\" (UID: \"a4121565-e932-4f2e-a6f1-62ecfeaab63b\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.838639 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebb2968f-70cb-4b6b-8fab-9647ddaa5e93-ovsdbserver-nb\") pod \"dnsmasq-dns-59fd54bbff-sw5lk\" (UID: \"ebb2968f-70cb-4b6b-8fab-9647ddaa5e93\") " pod="openstack/dnsmasq-dns-59fd54bbff-sw5lk"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.838823 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebb2968f-70cb-4b6b-8fab-9647ddaa5e93-ovsdbserver-sb\") pod \"dnsmasq-dns-59fd54bbff-sw5lk\" (UID: \"ebb2968f-70cb-4b6b-8fab-9647ddaa5e93\") " pod="openstack/dnsmasq-dns-59fd54bbff-sw5lk"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.838932 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l27gj\" (UniqueName: \"kubernetes.io/projected/ebb2968f-70cb-4b6b-8fab-9647ddaa5e93-kube-api-access-l27gj\") pod \"dnsmasq-dns-59fd54bbff-sw5lk\" (UID: \"ebb2968f-70cb-4b6b-8fab-9647ddaa5e93\") " pod="openstack/dnsmasq-dns-59fd54bbff-sw5lk"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.838960 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebb2968f-70cb-4b6b-8fab-9647ddaa5e93-dns-svc\") pod \"dnsmasq-dns-59fd54bbff-sw5lk\" (UID: \"ebb2968f-70cb-4b6b-8fab-9647ddaa5e93\") " pod="openstack/dnsmasq-dns-59fd54bbff-sw5lk"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.839045 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebb2968f-70cb-4b6b-8fab-9647ddaa5e93-config\") pod \"dnsmasq-dns-59fd54bbff-sw5lk\" (UID: \"ebb2968f-70cb-4b6b-8fab-9647ddaa5e93\") " pod="openstack/dnsmasq-dns-59fd54bbff-sw5lk"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.839772 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebb2968f-70cb-4b6b-8fab-9647ddaa5e93-config\") pod \"dnsmasq-dns-59fd54bbff-sw5lk\" (UID: \"ebb2968f-70cb-4b6b-8fab-9647ddaa5e93\") " pod="openstack/dnsmasq-dns-59fd54bbff-sw5lk"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.842402 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebb2968f-70cb-4b6b-8fab-9647ddaa5e93-ovsdbserver-nb\") pod \"dnsmasq-dns-59fd54bbff-sw5lk\" (UID: \"ebb2968f-70cb-4b6b-8fab-9647ddaa5e93\") " pod="openstack/dnsmasq-dns-59fd54bbff-sw5lk"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.842939 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebb2968f-70cb-4b6b-8fab-9647ddaa5e93-ovsdbserver-sb\") pod \"dnsmasq-dns-59fd54bbff-sw5lk\" (UID: \"ebb2968f-70cb-4b6b-8fab-9647ddaa5e93\") " pod="openstack/dnsmasq-dns-59fd54bbff-sw5lk"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.843899 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebb2968f-70cb-4b6b-8fab-9647ddaa5e93-dns-svc\") pod \"dnsmasq-dns-59fd54bbff-sw5lk\" (UID: \"ebb2968f-70cb-4b6b-8fab-9647ddaa5e93\") " pod="openstack/dnsmasq-dns-59fd54bbff-sw5lk"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.863385 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l27gj\" (UniqueName: \"kubernetes.io/projected/ebb2968f-70cb-4b6b-8fab-9647ddaa5e93-kube-api-access-l27gj\") pod \"dnsmasq-dns-59fd54bbff-sw5lk\" (UID: \"ebb2968f-70cb-4b6b-8fab-9647ddaa5e93\") " pod="openstack/dnsmasq-dns-59fd54bbff-sw5lk"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.875336 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.933053 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59fd54bbff-sw5lk"
Jan 22 09:22:59 crc kubenswrapper[4811]: I0122 09:22:59.953022 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-fpztn"]
Jan 22 09:23:00 crc kubenswrapper[4811]: I0122 09:23:00.108056 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 22 09:23:00 crc kubenswrapper[4811]: I0122 09:23:00.189366 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 22 09:23:00 crc kubenswrapper[4811]: I0122 09:23:00.257250 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 22 09:23:00 crc kubenswrapper[4811]: I0122 09:23:00.379871 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3da71c17-c624-4ba1-b5b2-dd5410951026","Type":"ContainerStarted","Data":"51e32d627e4252d82e93b1a2b42ef4a2c58e0db3cc75c3a2c595e185d116fb2e"}
Jan 22 09:23:00 crc kubenswrapper[4811]: I0122 09:23:00.381068 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"87478694-0c10-49d8-9d91-1ce054e05dee","Type":"ContainerStarted","Data":"765e9bff13f0743c90e3eae425152e767990987dfe85358d1e375b6c9de16084"}
Jan 22 09:23:00 crc kubenswrapper[4811]: I0122 09:23:00.389259 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"dfebf11f-55b3-459e-b0c3-36c20de9bd1d","Type":"ContainerStarted","Data":"c2f632bec9dc761f8db5c7a671588935e6b22b55e9770627951d04e2f396d209"}
Jan 22 09:23:00 crc kubenswrapper[4811]: I0122 09:23:00.390790 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-fpztn" event={"ID":"60d099f9-c44f-4d8c-9983-d478c424eff9","Type":"ContainerStarted","Data":"c6900d820993227d70c3c1a43bc81f14c62580d39e2e95ec8433d4296f21dc79"}
Jan 22 09:23:00 crc kubenswrapper[4811]: I0122 09:23:00.390822 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-fpztn" event={"ID":"60d099f9-c44f-4d8c-9983-d478c424eff9","Type":"ContainerStarted","Data":"f46cd10d4d0b7ae8040deb8ee8623d79278374cfb91aba93b4940be23c6b2ab7"}
Jan 22 09:23:00 crc kubenswrapper[4811]: I0122 09:23:00.413557 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-fpztn" podStartSLOduration=1.413542542 podStartE2EDuration="1.413542542s" podCreationTimestamp="2026-01-22 09:22:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:23:00.410045697 +0000 UTC m=+1024.732232821" watchObservedRunningTime="2026-01-22 09:23:00.413542542 +0000 UTC m=+1024.735729665"
Jan 22 09:23:00 crc kubenswrapper[4811]: I0122 09:23:00.425494 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zb8nm"]
Jan 22 09:23:00 crc kubenswrapper[4811]: I0122 09:23:00.426415 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zb8nm"
Jan 22 09:23:00 crc kubenswrapper[4811]: I0122 09:23:00.428239 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Jan 22 09:23:00 crc kubenswrapper[4811]: I0122 09:23:00.428239 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 22 09:23:00 crc kubenswrapper[4811]: I0122 09:23:00.469757 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 22 09:23:00 crc kubenswrapper[4811]: I0122 09:23:00.476806 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zb8nm"]
Jan 22 09:23:00 crc kubenswrapper[4811]: I0122 09:23:00.521575 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59fd54bbff-sw5lk"]
Jan 22 09:23:00 crc kubenswrapper[4811]: I0122 09:23:00.552339 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad-scripts\") pod \"nova-cell1-conductor-db-sync-zb8nm\" (UID: \"3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad\") " pod="openstack/nova-cell1-conductor-db-sync-zb8nm"
Jan 22 09:23:00 crc kubenswrapper[4811]: I0122 09:23:00.553588 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-zb8nm\" (UID: \"3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad\") " pod="openstack/nova-cell1-conductor-db-sync-zb8nm"
Jan 22 09:23:00 crc kubenswrapper[4811]: I0122 09:23:00.553731 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kmjd\" (UniqueName: \"kubernetes.io/projected/3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad-kube-api-access-7kmjd\") pod \"nova-cell1-conductor-db-sync-zb8nm\" (UID: \"3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad\") " pod="openstack/nova-cell1-conductor-db-sync-zb8nm"
Jan 22 09:23:00 crc kubenswrapper[4811]: I0122 09:23:00.553777 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad-config-data\") pod \"nova-cell1-conductor-db-sync-zb8nm\" (UID: \"3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad\") " pod="openstack/nova-cell1-conductor-db-sync-zb8nm"
Jan 22 09:23:00 crc kubenswrapper[4811]: I0122 09:23:00.654946 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-zb8nm\" (UID: \"3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad\") " pod="openstack/nova-cell1-conductor-db-sync-zb8nm"
Jan 22 09:23:00 crc kubenswrapper[4811]: I0122 09:23:00.655070 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kmjd\" (UniqueName: \"kubernetes.io/projected/3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad-kube-api-access-7kmjd\") pod \"nova-cell1-conductor-db-sync-zb8nm\" (UID: \"3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad\") " pod="openstack/nova-cell1-conductor-db-sync-zb8nm"
Jan 22 09:23:00 crc kubenswrapper[4811]: I0122 09:23:00.655121 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad-config-data\") pod \"nova-cell1-conductor-db-sync-zb8nm\" (UID: \"3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad\") " pod="openstack/nova-cell1-conductor-db-sync-zb8nm"
Jan 22 09:23:00 crc kubenswrapper[4811]: I0122 09:23:00.655232 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad-scripts\") pod \"nova-cell1-conductor-db-sync-zb8nm\" (UID: \"3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad\") " pod="openstack/nova-cell1-conductor-db-sync-zb8nm"
Jan 22 09:23:00 crc kubenswrapper[4811]: I0122 09:23:00.659018 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad-scripts\") pod \"nova-cell1-conductor-db-sync-zb8nm\" (UID: \"3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad\") " pod="openstack/nova-cell1-conductor-db-sync-zb8nm"
Jan 22 09:23:00 crc kubenswrapper[4811]: I0122 09:23:00.659419 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-zb8nm\" (UID: \"3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad\") " pod="openstack/nova-cell1-conductor-db-sync-zb8nm"
Jan 22 09:23:00 crc kubenswrapper[4811]: I0122 09:23:00.663830 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad-config-data\") pod \"nova-cell1-conductor-db-sync-zb8nm\" (UID: \"3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad\") " pod="openstack/nova-cell1-conductor-db-sync-zb8nm"
Jan 22 09:23:00 crc kubenswrapper[4811]: I0122 09:23:00.675649 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kmjd\" (UniqueName: \"kubernetes.io/projected/3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad-kube-api-access-7kmjd\") pod \"nova-cell1-conductor-db-sync-zb8nm\" (UID: \"3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad\") " pod="openstack/nova-cell1-conductor-db-sync-zb8nm"
Jan 22 09:23:00 crc kubenswrapper[4811]: I0122 09:23:00.787778 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zb8nm"
Jan 22 09:23:01 crc kubenswrapper[4811]: I0122 09:23:01.225591 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zb8nm"]
Jan 22 09:23:01 crc kubenswrapper[4811]: W0122 09:23:01.233149 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ed0ce90_7d83_46a0_9ed5_25f37c5e3dad.slice/crio-2755967d41479aae81b514efd6ff2b2787dc2440a26c75d45c114772232f6f30 WatchSource:0}: Error finding container 2755967d41479aae81b514efd6ff2b2787dc2440a26c75d45c114772232f6f30: Status 404 returned error can't find the container with id 2755967d41479aae81b514efd6ff2b2787dc2440a26c75d45c114772232f6f30
Jan 22 09:23:01 crc kubenswrapper[4811]: I0122 09:23:01.402358 4811 generic.go:334] "Generic (PLEG): container finished" podID="ebb2968f-70cb-4b6b-8fab-9647ddaa5e93" containerID="66073c6bb12fed90c567e6cbabda3721f5e0393d0089870514a396c28741676f" exitCode=0
Jan 22 09:23:01 crc kubenswrapper[4811]: I0122 09:23:01.402573 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59fd54bbff-sw5lk" event={"ID":"ebb2968f-70cb-4b6b-8fab-9647ddaa5e93","Type":"ContainerDied","Data":"66073c6bb12fed90c567e6cbabda3721f5e0393d0089870514a396c28741676f"}
Jan 22 09:23:01 crc kubenswrapper[4811]: I0122 09:23:01.402813 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59fd54bbff-sw5lk" event={"ID":"ebb2968f-70cb-4b6b-8fab-9647ddaa5e93","Type":"ContainerStarted","Data":"d184f635d21c075129fc76fcf0060e89d5955345ab51fda239ed7329a7648c3b"}
Jan 22 09:23:01 crc kubenswrapper[4811]: I0122 09:23:01.409115 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a4121565-e932-4f2e-a6f1-62ecfeaab63b","Type":"ContainerStarted","Data":"def4d4322ba7e97f9c878fb7b20863be30755dddad81995c09d2c4daa4352e9e"}
Jan 22 09:23:01 crc kubenswrapper[4811]: I0122 09:23:01.413440 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-s6xs2" podUID="4d3a96c1-245e-40ed-85a2-d46fe505f247" containerName="registry-server" containerID="cri-o://f7cf5a285917f7adaba2fa743b43cafe16bbc9b14904886c728ef1bd9d3b2f16" gracePeriod=2
Jan 22 09:23:01 crc kubenswrapper[4811]: I0122 09:23:01.413650 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zb8nm" event={"ID":"3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad","Type":"ContainerStarted","Data":"2755967d41479aae81b514efd6ff2b2787dc2440a26c75d45c114772232f6f30"}
Jan 22 09:23:01 crc kubenswrapper[4811]: I0122 09:23:01.821710 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s6xs2"
Jan 22 09:23:01 crc kubenswrapper[4811]: I0122 09:23:01.987559 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djzrg\" (UniqueName: \"kubernetes.io/projected/4d3a96c1-245e-40ed-85a2-d46fe505f247-kube-api-access-djzrg\") pod \"4d3a96c1-245e-40ed-85a2-d46fe505f247\" (UID: \"4d3a96c1-245e-40ed-85a2-d46fe505f247\") "
Jan 22 09:23:01 crc kubenswrapper[4811]: I0122 09:23:01.987663 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d3a96c1-245e-40ed-85a2-d46fe505f247-catalog-content\") pod \"4d3a96c1-245e-40ed-85a2-d46fe505f247\" (UID: \"4d3a96c1-245e-40ed-85a2-d46fe505f247\") "
Jan 22 09:23:01 crc kubenswrapper[4811]: I0122 09:23:01.987758 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d3a96c1-245e-40ed-85a2-d46fe505f247-utilities\") pod \"4d3a96c1-245e-40ed-85a2-d46fe505f247\" (UID: \"4d3a96c1-245e-40ed-85a2-d46fe505f247\") "
Jan 22 09:23:01 crc kubenswrapper[4811]: I0122 09:23:01.988611 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d3a96c1-245e-40ed-85a2-d46fe505f247-utilities" (OuterVolumeSpecName: "utilities") pod "4d3a96c1-245e-40ed-85a2-d46fe505f247" (UID: "4d3a96c1-245e-40ed-85a2-d46fe505f247"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 09:23:01 crc kubenswrapper[4811]: I0122 09:23:01.989421 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d3a96c1-245e-40ed-85a2-d46fe505f247-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 09:23:01 crc kubenswrapper[4811]: I0122 09:23:01.994215 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d3a96c1-245e-40ed-85a2-d46fe505f247-kube-api-access-djzrg" (OuterVolumeSpecName: "kube-api-access-djzrg") pod "4d3a96c1-245e-40ed-85a2-d46fe505f247" (UID: "4d3a96c1-245e-40ed-85a2-d46fe505f247"). InnerVolumeSpecName "kube-api-access-djzrg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:23:02 crc kubenswrapper[4811]: I0122 09:23:02.065286 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d3a96c1-245e-40ed-85a2-d46fe505f247-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d3a96c1-245e-40ed-85a2-d46fe505f247" (UID: "4d3a96c1-245e-40ed-85a2-d46fe505f247"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:23:02 crc kubenswrapper[4811]: I0122 09:23:02.091550 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d3a96c1-245e-40ed-85a2-d46fe505f247-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:02 crc kubenswrapper[4811]: I0122 09:23:02.091648 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djzrg\" (UniqueName: \"kubernetes.io/projected/4d3a96c1-245e-40ed-85a2-d46fe505f247-kube-api-access-djzrg\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:02 crc kubenswrapper[4811]: I0122 09:23:02.424451 4811 generic.go:334] "Generic (PLEG): container finished" podID="4d3a96c1-245e-40ed-85a2-d46fe505f247" containerID="f7cf5a285917f7adaba2fa743b43cafe16bbc9b14904886c728ef1bd9d3b2f16" exitCode=0 Jan 22 09:23:02 crc kubenswrapper[4811]: I0122 09:23:02.424552 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s6xs2" Jan 22 09:23:02 crc kubenswrapper[4811]: I0122 09:23:02.424569 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s6xs2" event={"ID":"4d3a96c1-245e-40ed-85a2-d46fe505f247","Type":"ContainerDied","Data":"f7cf5a285917f7adaba2fa743b43cafe16bbc9b14904886c728ef1bd9d3b2f16"} Jan 22 09:23:02 crc kubenswrapper[4811]: I0122 09:23:02.424681 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s6xs2" event={"ID":"4d3a96c1-245e-40ed-85a2-d46fe505f247","Type":"ContainerDied","Data":"eb85ce1ccb9cc9ce6acc4077548d9808d0fa2ec6cf15becf80761d1300de3561"} Jan 22 09:23:02 crc kubenswrapper[4811]: I0122 09:23:02.424713 4811 scope.go:117] "RemoveContainer" containerID="f7cf5a285917f7adaba2fa743b43cafe16bbc9b14904886c728ef1bd9d3b2f16" Jan 22 09:23:02 crc kubenswrapper[4811]: I0122 09:23:02.435379 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zb8nm" event={"ID":"3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad","Type":"ContainerStarted","Data":"5625955d1ac26393323331bb2f36e93a748a895ca69e7672bf21f4397522774f"} Jan 22 09:23:02 crc kubenswrapper[4811]: I0122 09:23:02.453666 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59fd54bbff-sw5lk" event={"ID":"ebb2968f-70cb-4b6b-8fab-9647ddaa5e93","Type":"ContainerStarted","Data":"aa8a3d2863c61ba38f90c8d05e036afa5a37411875d3eb114e6b0a3565d3b68a"} Jan 22 09:23:02 crc kubenswrapper[4811]: I0122 09:23:02.453898 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59fd54bbff-sw5lk" Jan 22 09:23:02 crc kubenswrapper[4811]: I0122 09:23:02.476046 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s6xs2"] Jan 22 09:23:02 crc kubenswrapper[4811]: I0122 09:23:02.490006 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-s6xs2"] Jan 22 09:23:02 crc kubenswrapper[4811]: I0122 09:23:02.545256 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-zb8nm" podStartSLOduration=2.545239688 podStartE2EDuration="2.545239688s" podCreationTimestamp="2026-01-22 09:23:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:23:02.517864331 +0000 UTC m=+1026.840051454" 
watchObservedRunningTime="2026-01-22 09:23:02.545239688 +0000 UTC m=+1026.867426801" Jan 22 09:23:02 crc kubenswrapper[4811]: I0122 09:23:02.546511 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59fd54bbff-sw5lk" podStartSLOduration=3.546505785 podStartE2EDuration="3.546505785s" podCreationTimestamp="2026-01-22 09:22:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:23:02.538005395 +0000 UTC m=+1026.860192518" watchObservedRunningTime="2026-01-22 09:23:02.546505785 +0000 UTC m=+1026.868692909" Jan 22 09:23:02 crc kubenswrapper[4811]: I0122 09:23:02.794776 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 09:23:02 crc kubenswrapper[4811]: I0122 09:23:02.823111 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 09:23:03 crc kubenswrapper[4811]: I0122 09:23:03.836590 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p7zqj"] Jan 22 09:23:03 crc kubenswrapper[4811]: E0122 09:23:03.836951 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d3a96c1-245e-40ed-85a2-d46fe505f247" containerName="extract-content" Jan 22 09:23:03 crc kubenswrapper[4811]: I0122 09:23:03.836966 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d3a96c1-245e-40ed-85a2-d46fe505f247" containerName="extract-content" Jan 22 09:23:03 crc kubenswrapper[4811]: E0122 09:23:03.836977 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d3a96c1-245e-40ed-85a2-d46fe505f247" containerName="registry-server" Jan 22 09:23:03 crc kubenswrapper[4811]: I0122 09:23:03.836983 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d3a96c1-245e-40ed-85a2-d46fe505f247" containerName="registry-server" Jan 22 09:23:03 crc kubenswrapper[4811]: E0122 09:23:03.837004 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d3a96c1-245e-40ed-85a2-d46fe505f247" containerName="extract-utilities" Jan 22 09:23:03 crc kubenswrapper[4811]: I0122 09:23:03.837011 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d3a96c1-245e-40ed-85a2-d46fe505f247" containerName="extract-utilities" Jan 22 09:23:03 crc kubenswrapper[4811]: I0122 09:23:03.837196 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d3a96c1-245e-40ed-85a2-d46fe505f247" containerName="registry-server" Jan 22 09:23:03 crc kubenswrapper[4811]: I0122 09:23:03.838310 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p7zqj" Jan 22 09:23:03 crc kubenswrapper[4811]: I0122 09:23:03.854781 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p7zqj"] Jan 22 09:23:03 crc kubenswrapper[4811]: I0122 09:23:03.871966 4811 scope.go:117] "RemoveContainer" containerID="5d63fc719688d3d78105fc732e3cde7de333f5d561b5f0b30c3617615731d906" Jan 22 09:23:03 crc kubenswrapper[4811]: I0122 09:23:03.907710 4811 scope.go:117] "RemoveContainer" containerID="3ba9cfa95ef95a52f7aa91175c21c929c0250a0088eef268a40d84f4bce1cb00" Jan 22 09:23:03 crc kubenswrapper[4811]: I0122 09:23:03.937687 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89733fdb-bf6a-42d3-8a8e-da3e2ff86438-catalog-content\") pod \"community-operators-p7zqj\" (UID: \"89733fdb-bf6a-42d3-8a8e-da3e2ff86438\") " pod="openshift-marketplace/community-operators-p7zqj" Jan 22 09:23:03 crc kubenswrapper[4811]: I0122 09:23:03.937730 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89733fdb-bf6a-42d3-8a8e-da3e2ff86438-utilities\") pod \"community-operators-p7zqj\" (UID: \"89733fdb-bf6a-42d3-8a8e-da3e2ff86438\") " pod="openshift-marketplace/community-operators-p7zqj" Jan 22 09:23:03 crc kubenswrapper[4811]: I0122 09:23:03.937830 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf6st\" (UniqueName: \"kubernetes.io/projected/89733fdb-bf6a-42d3-8a8e-da3e2ff86438-kube-api-access-sf6st\") pod \"community-operators-p7zqj\" (UID: \"89733fdb-bf6a-42d3-8a8e-da3e2ff86438\") " pod="openshift-marketplace/community-operators-p7zqj" Jan 22 09:23:04 crc kubenswrapper[4811]: I0122 09:23:04.013421 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d3a96c1-245e-40ed-85a2-d46fe505f247" path="/var/lib/kubelet/pods/4d3a96c1-245e-40ed-85a2-d46fe505f247/volumes" Jan 22 09:23:04 crc kubenswrapper[4811]: I0122 09:23:04.030155 4811 scope.go:117] "RemoveContainer" containerID="f7cf5a285917f7adaba2fa743b43cafe16bbc9b14904886c728ef1bd9d3b2f16" Jan 22 09:23:04 crc kubenswrapper[4811]: E0122 09:23:04.031114 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7cf5a285917f7adaba2fa743b43cafe16bbc9b14904886c728ef1bd9d3b2f16\": container with ID starting with f7cf5a285917f7adaba2fa743b43cafe16bbc9b14904886c728ef1bd9d3b2f16 not found: ID does not exist" containerID="f7cf5a285917f7adaba2fa743b43cafe16bbc9b14904886c728ef1bd9d3b2f16" Jan 22 09:23:04 crc kubenswrapper[4811]: I0122 09:23:04.031177 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7cf5a285917f7adaba2fa743b43cafe16bbc9b14904886c728ef1bd9d3b2f16"} err="failed to get container status \"f7cf5a285917f7adaba2fa743b43cafe16bbc9b14904886c728ef1bd9d3b2f16\": rpc error: code = NotFound desc = could not find container \"f7cf5a285917f7adaba2fa743b43cafe16bbc9b14904886c728ef1bd9d3b2f16\": container with ID starting with f7cf5a285917f7adaba2fa743b43cafe16bbc9b14904886c728ef1bd9d3b2f16 not found: ID does not exist" Jan 22 09:23:04 crc kubenswrapper[4811]: I0122 09:23:04.031222 4811 scope.go:117] "RemoveContainer" containerID="5d63fc719688d3d78105fc732e3cde7de333f5d561b5f0b30c3617615731d906" Jan 22 09:23:04 crc 
kubenswrapper[4811]: E0122 09:23:04.035875 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d63fc719688d3d78105fc732e3cde7de333f5d561b5f0b30c3617615731d906\": container with ID starting with 5d63fc719688d3d78105fc732e3cde7de333f5d561b5f0b30c3617615731d906 not found: ID does not exist" containerID="5d63fc719688d3d78105fc732e3cde7de333f5d561b5f0b30c3617615731d906" Jan 22 09:23:04 crc kubenswrapper[4811]: I0122 09:23:04.035901 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d63fc719688d3d78105fc732e3cde7de333f5d561b5f0b30c3617615731d906"} err="failed to get container status \"5d63fc719688d3d78105fc732e3cde7de333f5d561b5f0b30c3617615731d906\": rpc error: code = NotFound desc = could not find container \"5d63fc719688d3d78105fc732e3cde7de333f5d561b5f0b30c3617615731d906\": container with ID starting with 5d63fc719688d3d78105fc732e3cde7de333f5d561b5f0b30c3617615731d906 not found: ID does not exist" Jan 22 09:23:04 crc kubenswrapper[4811]: I0122 09:23:04.035918 4811 scope.go:117] "RemoveContainer" containerID="3ba9cfa95ef95a52f7aa91175c21c929c0250a0088eef268a40d84f4bce1cb00" Jan 22 09:23:04 crc kubenswrapper[4811]: E0122 09:23:04.036931 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ba9cfa95ef95a52f7aa91175c21c929c0250a0088eef268a40d84f4bce1cb00\": container with ID starting with 3ba9cfa95ef95a52f7aa91175c21c929c0250a0088eef268a40d84f4bce1cb00 not found: ID does not exist" containerID="3ba9cfa95ef95a52f7aa91175c21c929c0250a0088eef268a40d84f4bce1cb00" Jan 22 09:23:04 crc kubenswrapper[4811]: I0122 09:23:04.036955 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ba9cfa95ef95a52f7aa91175c21c929c0250a0088eef268a40d84f4bce1cb00"} err="failed to get container status \"3ba9cfa95ef95a52f7aa91175c21c929c0250a0088eef268a40d84f4bce1cb00\": rpc error: code = NotFound desc = could not find container \"3ba9cfa95ef95a52f7aa91175c21c929c0250a0088eef268a40d84f4bce1cb00\": container with ID starting with 3ba9cfa95ef95a52f7aa91175c21c929c0250a0088eef268a40d84f4bce1cb00 not found: ID does not exist" Jan 22 09:23:04 crc kubenswrapper[4811]: I0122 09:23:04.040616 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sf6st\" (UniqueName: \"kubernetes.io/projected/89733fdb-bf6a-42d3-8a8e-da3e2ff86438-kube-api-access-sf6st\") pod \"community-operators-p7zqj\" (UID: \"89733fdb-bf6a-42d3-8a8e-da3e2ff86438\") " pod="openshift-marketplace/community-operators-p7zqj" Jan 22 09:23:04 crc kubenswrapper[4811]: I0122 09:23:04.042113 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89733fdb-bf6a-42d3-8a8e-da3e2ff86438-catalog-content\") pod \"community-operators-p7zqj\" (UID: \"89733fdb-bf6a-42d3-8a8e-da3e2ff86438\") " pod="openshift-marketplace/community-operators-p7zqj" Jan 22 09:23:04 crc kubenswrapper[4811]: I0122 09:23:04.042283 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89733fdb-bf6a-42d3-8a8e-da3e2ff86438-utilities\") pod \"community-operators-p7zqj\" (UID: \"89733fdb-bf6a-42d3-8a8e-da3e2ff86438\") " pod="openshift-marketplace/community-operators-p7zqj" Jan 22 09:23:04 crc kubenswrapper[4811]: I0122 09:23:04.042677 4811 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89733fdb-bf6a-42d3-8a8e-da3e2ff86438-catalog-content\") pod \"community-operators-p7zqj\" (UID: \"89733fdb-bf6a-42d3-8a8e-da3e2ff86438\") " pod="openshift-marketplace/community-operators-p7zqj" Jan 22 09:23:04 crc kubenswrapper[4811]: I0122 09:23:04.042699 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89733fdb-bf6a-42d3-8a8e-da3e2ff86438-utilities\") pod \"community-operators-p7zqj\" (UID: \"89733fdb-bf6a-42d3-8a8e-da3e2ff86438\") " pod="openshift-marketplace/community-operators-p7zqj" Jan 22 09:23:04 crc kubenswrapper[4811]: I0122 09:23:04.067806 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf6st\" (UniqueName: \"kubernetes.io/projected/89733fdb-bf6a-42d3-8a8e-da3e2ff86438-kube-api-access-sf6st\") pod \"community-operators-p7zqj\" (UID: \"89733fdb-bf6a-42d3-8a8e-da3e2ff86438\") " pod="openshift-marketplace/community-operators-p7zqj" Jan 22 09:23:04 crc kubenswrapper[4811]: I0122 09:23:04.153459 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p7zqj" Jan 22 09:23:04 crc kubenswrapper[4811]: I0122 09:23:04.470292 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"dfebf11f-55b3-459e-b0c3-36c20de9bd1d","Type":"ContainerStarted","Data":"5ce04cee941d90f6265e592f8e7f1a276f181236771e8d62d505152c02b5bf28"} Jan 22 09:23:04 crc kubenswrapper[4811]: I0122 09:23:04.473486 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a4121565-e932-4f2e-a6f1-62ecfeaab63b","Type":"ContainerStarted","Data":"5c691d1eade36eec09832866a57d5f1d3f5e557ade0b300b11184e4079c1c5dd"} Jan 22 09:23:04 crc kubenswrapper[4811]: I0122 09:23:04.473568 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="a4121565-e932-4f2e-a6f1-62ecfeaab63b" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://5c691d1eade36eec09832866a57d5f1d3f5e557ade0b300b11184e4079c1c5dd" gracePeriod=30 Jan 22 09:23:04 crc kubenswrapper[4811]: I0122 09:23:04.475441 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3da71c17-c624-4ba1-b5b2-dd5410951026","Type":"ContainerStarted","Data":"015ef246e1c476dabe092245e80aa87200397d00623f9c1d839a56a05bc72c84"} Jan 22 09:23:04 crc kubenswrapper[4811]: I0122 09:23:04.475473 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3da71c17-c624-4ba1-b5b2-dd5410951026","Type":"ContainerStarted","Data":"b69f3c8adef8f8bf79cb7c49a144ba2138a2c7f63dc3021fdcc0ce67ff372e7b"} Jan 22 09:23:04 crc kubenswrapper[4811]: I0122 09:23:04.475583 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3da71c17-c624-4ba1-b5b2-dd5410951026" containerName="nova-metadata-log" containerID="cri-o://b69f3c8adef8f8bf79cb7c49a144ba2138a2c7f63dc3021fdcc0ce67ff372e7b" gracePeriod=30 Jan 22 09:23:04 crc kubenswrapper[4811]: I0122 09:23:04.475714 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3da71c17-c624-4ba1-b5b2-dd5410951026" containerName="nova-metadata-metadata" 
containerID="cri-o://015ef246e1c476dabe092245e80aa87200397d00623f9c1d839a56a05bc72c84" gracePeriod=30 Jan 22 09:23:04 crc kubenswrapper[4811]: I0122 09:23:04.489541 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"87478694-0c10-49d8-9d91-1ce054e05dee","Type":"ContainerStarted","Data":"e8a533ac5c851541b1154b577802f110cf957e4e04d0f8783b10990f745a6acf"} Jan 22 09:23:04 crc kubenswrapper[4811]: I0122 09:23:04.489570 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"87478694-0c10-49d8-9d91-1ce054e05dee","Type":"ContainerStarted","Data":"c6f46f723dfc1c4db2b076b15b15bc4a6c65711e111b533da910cc2f0adcd49e"} Jan 22 09:23:04 crc kubenswrapper[4811]: I0122 09:23:04.491195 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.783670913 podStartE2EDuration="5.491180066s" podCreationTimestamp="2026-01-22 09:22:59 +0000 UTC" firstStartedPulling="2026-01-22 09:23:00.190063714 +0000 UTC m=+1024.512250837" lastFinishedPulling="2026-01-22 09:23:03.897572866 +0000 UTC m=+1028.219759990" observedRunningTime="2026-01-22 09:23:04.486832658 +0000 UTC m=+1028.809019780" watchObservedRunningTime="2026-01-22 09:23:04.491180066 +0000 UTC m=+1028.813367189" Jan 22 09:23:04 crc kubenswrapper[4811]: I0122 09:23:04.510714 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.050149025 podStartE2EDuration="5.510699899s" podCreationTimestamp="2026-01-22 09:22:59 +0000 UTC" firstStartedPulling="2026-01-22 09:23:00.478106874 +0000 UTC m=+1024.800293997" lastFinishedPulling="2026-01-22 09:23:03.938657748 +0000 UTC m=+1028.260844871" observedRunningTime="2026-01-22 09:23:04.509848453 +0000 UTC m=+1028.832035576" watchObservedRunningTime="2026-01-22 09:23:04.510699899 +0000 UTC m=+1028.832887022" Jan 22 09:23:04 crc kubenswrapper[4811]: I0122 09:23:04.532264 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.863133043 podStartE2EDuration="5.532252796s" podCreationTimestamp="2026-01-22 09:22:59 +0000 UTC" firstStartedPulling="2026-01-22 09:23:00.268803689 +0000 UTC m=+1024.590990813" lastFinishedPulling="2026-01-22 09:23:03.937923442 +0000 UTC m=+1028.260110566" observedRunningTime="2026-01-22 09:23:04.52854379 +0000 UTC m=+1028.850730913" watchObservedRunningTime="2026-01-22 09:23:04.532252796 +0000 UTC m=+1028.854439918" Jan 22 09:23:04 crc kubenswrapper[4811]: I0122 09:23:04.548643 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.72911649 podStartE2EDuration="5.548619321s" podCreationTimestamp="2026-01-22 09:22:59 +0000 UTC" firstStartedPulling="2026-01-22 09:23:00.125613298 +0000 UTC m=+1024.447800420" lastFinishedPulling="2026-01-22 09:23:03.945116127 +0000 UTC m=+1028.267303251" observedRunningTime="2026-01-22 09:23:04.545683064 +0000 UTC m=+1028.867870187" watchObservedRunningTime="2026-01-22 09:23:04.548619321 +0000 UTC m=+1028.870806444" Jan 22 09:23:04 crc kubenswrapper[4811]: I0122 09:23:04.621742 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 22 09:23:04 crc kubenswrapper[4811]: I0122 09:23:04.668952 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p7zqj"] Jan 22 09:23:04 crc kubenswrapper[4811]: I0122 09:23:04.720198 
4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 22 09:23:04 crc kubenswrapper[4811]: I0122 09:23:04.720257 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 22 09:23:04 crc kubenswrapper[4811]: I0122 09:23:04.876299 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 22 09:23:05 crc kubenswrapper[4811]: I0122 09:23:05.501229 4811 generic.go:334] "Generic (PLEG): container finished" podID="3da71c17-c624-4ba1-b5b2-dd5410951026" containerID="b69f3c8adef8f8bf79cb7c49a144ba2138a2c7f63dc3021fdcc0ce67ff372e7b" exitCode=143 Jan 22 09:23:05 crc kubenswrapper[4811]: I0122 09:23:05.501424 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3da71c17-c624-4ba1-b5b2-dd5410951026","Type":"ContainerDied","Data":"b69f3c8adef8f8bf79cb7c49a144ba2138a2c7f63dc3021fdcc0ce67ff372e7b"} Jan 22 09:23:05 crc kubenswrapper[4811]: I0122 09:23:05.503438 4811 generic.go:334] "Generic (PLEG): container finished" podID="89733fdb-bf6a-42d3-8a8e-da3e2ff86438" containerID="32879ddefedbe01b91f139a318bbd9415c93eb5c894e8cf7b55b3c60f0d205a7" exitCode=0 Jan 22 09:23:05 crc kubenswrapper[4811]: I0122 09:23:05.503491 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7zqj" event={"ID":"89733fdb-bf6a-42d3-8a8e-da3e2ff86438","Type":"ContainerDied","Data":"32879ddefedbe01b91f139a318bbd9415c93eb5c894e8cf7b55b3c60f0d205a7"} Jan 22 09:23:05 crc kubenswrapper[4811]: I0122 09:23:05.503526 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7zqj" event={"ID":"89733fdb-bf6a-42d3-8a8e-da3e2ff86438","Type":"ContainerStarted","Data":"d8fc0f3c5d7546a74e62635d5d7206923d1b5ea62f8793f02dfa99e156b04ac7"} Jan 22 09:23:06 crc kubenswrapper[4811]: I0122 09:23:06.517264 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7zqj" event={"ID":"89733fdb-bf6a-42d3-8a8e-da3e2ff86438","Type":"ContainerStarted","Data":"f086c391f8021c78ea00be742c3f4d7fc4bf312f4da3f5146df36d571e0e020d"} Jan 22 09:23:06 crc kubenswrapper[4811]: I0122 09:23:06.521364 4811 generic.go:334] "Generic (PLEG): container finished" podID="3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad" containerID="5625955d1ac26393323331bb2f36e93a748a895ca69e7672bf21f4397522774f" exitCode=0 Jan 22 09:23:06 crc kubenswrapper[4811]: I0122 09:23:06.521437 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zb8nm" event={"ID":"3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad","Type":"ContainerDied","Data":"5625955d1ac26393323331bb2f36e93a748a895ca69e7672bf21f4397522774f"} Jan 22 09:23:07 crc kubenswrapper[4811]: I0122 09:23:07.529673 4811 generic.go:334] "Generic (PLEG): container finished" podID="60d099f9-c44f-4d8c-9983-d478c424eff9" containerID="c6900d820993227d70c3c1a43bc81f14c62580d39e2e95ec8433d4296f21dc79" exitCode=0 Jan 22 09:23:07 crc kubenswrapper[4811]: I0122 09:23:07.529757 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-fpztn" event={"ID":"60d099f9-c44f-4d8c-9983-d478c424eff9","Type":"ContainerDied","Data":"c6900d820993227d70c3c1a43bc81f14c62580d39e2e95ec8433d4296f21dc79"} Jan 22 09:23:07 crc kubenswrapper[4811]: I0122 09:23:07.532326 4811 generic.go:334] "Generic (PLEG): container finished" podID="89733fdb-bf6a-42d3-8a8e-da3e2ff86438" 
containerID="f086c391f8021c78ea00be742c3f4d7fc4bf312f4da3f5146df36d571e0e020d" exitCode=0 Jan 22 09:23:07 crc kubenswrapper[4811]: I0122 09:23:07.532411 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7zqj" event={"ID":"89733fdb-bf6a-42d3-8a8e-da3e2ff86438","Type":"ContainerDied","Data":"f086c391f8021c78ea00be742c3f4d7fc4bf312f4da3f5146df36d571e0e020d"} Jan 22 09:23:07 crc kubenswrapper[4811]: I0122 09:23:07.918466 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zb8nm" Jan 22 09:23:07 crc kubenswrapper[4811]: I0122 09:23:07.930943 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kmjd\" (UniqueName: \"kubernetes.io/projected/3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad-kube-api-access-7kmjd\") pod \"3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad\" (UID: \"3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad\") " Jan 22 09:23:07 crc kubenswrapper[4811]: I0122 09:23:07.930989 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad-scripts\") pod \"3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad\" (UID: \"3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad\") " Jan 22 09:23:07 crc kubenswrapper[4811]: I0122 09:23:07.931028 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad-config-data\") pod \"3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad\" (UID: \"3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad\") " Jan 22 09:23:07 crc kubenswrapper[4811]: I0122 09:23:07.931053 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad-combined-ca-bundle\") pod \"3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad\" (UID: \"3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad\") " Jan 22 09:23:07 crc kubenswrapper[4811]: I0122 09:23:07.937792 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad-kube-api-access-7kmjd" (OuterVolumeSpecName: "kube-api-access-7kmjd") pod "3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad" (UID: "3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad"). InnerVolumeSpecName "kube-api-access-7kmjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:23:07 crc kubenswrapper[4811]: I0122 09:23:07.939965 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad-scripts" (OuterVolumeSpecName: "scripts") pod "3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad" (UID: "3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:23:07 crc kubenswrapper[4811]: I0122 09:23:07.961256 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad-config-data" (OuterVolumeSpecName: "config-data") pod "3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad" (UID: "3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:23:07 crc kubenswrapper[4811]: I0122 09:23:07.966779 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad" (UID: "3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:23:08 crc kubenswrapper[4811]: I0122 09:23:08.033419 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kmjd\" (UniqueName: \"kubernetes.io/projected/3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad-kube-api-access-7kmjd\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:08 crc kubenswrapper[4811]: I0122 09:23:08.033450 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:08 crc kubenswrapper[4811]: I0122 09:23:08.033461 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:08 crc kubenswrapper[4811]: I0122 09:23:08.033496 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:08 crc kubenswrapper[4811]: I0122 09:23:08.546380 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zb8nm" event={"ID":"3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad","Type":"ContainerDied","Data":"2755967d41479aae81b514efd6ff2b2787dc2440a26c75d45c114772232f6f30"} Jan 22 09:23:08 crc kubenswrapper[4811]: I0122 09:23:08.547685 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2755967d41479aae81b514efd6ff2b2787dc2440a26c75d45c114772232f6f30" Jan 22 09:23:08 crc kubenswrapper[4811]: I0122 09:23:08.546570 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zb8nm" Jan 22 09:23:08 crc kubenswrapper[4811]: I0122 09:23:08.553335 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7zqj" event={"ID":"89733fdb-bf6a-42d3-8a8e-da3e2ff86438","Type":"ContainerStarted","Data":"096c08bb0503b4fb8e624b1b7c5abd60a9198e30e963666366aef2d291f24d4b"} Jan 22 09:23:08 crc kubenswrapper[4811]: I0122 09:23:08.618736 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p7zqj" podStartSLOduration=3.080712938 podStartE2EDuration="5.618717066s" podCreationTimestamp="2026-01-22 09:23:03 +0000 UTC" firstStartedPulling="2026-01-22 09:23:05.505566613 +0000 UTC m=+1029.827753736" lastFinishedPulling="2026-01-22 09:23:08.043570741 +0000 UTC m=+1032.365757864" observedRunningTime="2026-01-22 09:23:08.587974829 +0000 UTC m=+1032.910161953" watchObservedRunningTime="2026-01-22 09:23:08.618717066 +0000 UTC m=+1032.940904189" Jan 22 09:23:08 crc kubenswrapper[4811]: I0122 09:23:08.662405 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 22 09:23:08 crc kubenswrapper[4811]: E0122 09:23:08.663465 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad" containerName="nova-cell1-conductor-db-sync" Jan 22 09:23:08 crc kubenswrapper[4811]: I0122 09:23:08.663562 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad" containerName="nova-cell1-conductor-db-sync" Jan 22 09:23:08 crc kubenswrapper[4811]: I0122 09:23:08.664564 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad" containerName="nova-cell1-conductor-db-sync" Jan 22 09:23:08 crc kubenswrapper[4811]: I0122 09:23:08.666228 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 22 09:23:08 crc kubenswrapper[4811]: I0122 09:23:08.670547 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 22 09:23:08 crc kubenswrapper[4811]: I0122 09:23:08.673201 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 22 09:23:08 crc kubenswrapper[4811]: I0122 09:23:08.753028 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f96a2c3b-9e7d-4e81-8744-bcaeca5db4bd-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f96a2c3b-9e7d-4e81-8744-bcaeca5db4bd\") " pod="openstack/nova-cell1-conductor-0" Jan 22 09:23:08 crc kubenswrapper[4811]: I0122 09:23:08.753687 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f96a2c3b-9e7d-4e81-8744-bcaeca5db4bd-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f96a2c3b-9e7d-4e81-8744-bcaeca5db4bd\") " pod="openstack/nova-cell1-conductor-0" Jan 22 09:23:08 crc kubenswrapper[4811]: I0122 09:23:08.753941 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxljs\" (UniqueName: \"kubernetes.io/projected/f96a2c3b-9e7d-4e81-8744-bcaeca5db4bd-kube-api-access-hxljs\") pod \"nova-cell1-conductor-0\" (UID: \"f96a2c3b-9e7d-4e81-8744-bcaeca5db4bd\") " pod="openstack/nova-cell1-conductor-0" Jan 22 09:23:08 crc kubenswrapper[4811]: I0122 09:23:08.855721 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxljs\" (UniqueName: \"kubernetes.io/projected/f96a2c3b-9e7d-4e81-8744-bcaeca5db4bd-kube-api-access-hxljs\") pod \"nova-cell1-conductor-0\" (UID: \"f96a2c3b-9e7d-4e81-8744-bcaeca5db4bd\") " pod="openstack/nova-cell1-conductor-0" Jan 22 09:23:08 crc kubenswrapper[4811]: I0122 09:23:08.855797 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f96a2c3b-9e7d-4e81-8744-bcaeca5db4bd-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f96a2c3b-9e7d-4e81-8744-bcaeca5db4bd\") " pod="openstack/nova-cell1-conductor-0" Jan 22 09:23:08 crc kubenswrapper[4811]: I0122 09:23:08.856043 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f96a2c3b-9e7d-4e81-8744-bcaeca5db4bd-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f96a2c3b-9e7d-4e81-8744-bcaeca5db4bd\") " pod="openstack/nova-cell1-conductor-0" Jan 22 09:23:08 crc kubenswrapper[4811]: I0122 09:23:08.864404 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f96a2c3b-9e7d-4e81-8744-bcaeca5db4bd-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f96a2c3b-9e7d-4e81-8744-bcaeca5db4bd\") " pod="openstack/nova-cell1-conductor-0" Jan 22 09:23:08 crc kubenswrapper[4811]: I0122 09:23:08.864931 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f96a2c3b-9e7d-4e81-8744-bcaeca5db4bd-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f96a2c3b-9e7d-4e81-8744-bcaeca5db4bd\") " pod="openstack/nova-cell1-conductor-0" Jan 22 09:23:08 crc kubenswrapper[4811]: I0122 09:23:08.875954 4811 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxljs\" (UniqueName: \"kubernetes.io/projected/f96a2c3b-9e7d-4e81-8744-bcaeca5db4bd-kube-api-access-hxljs\") pod \"nova-cell1-conductor-0\" (UID: \"f96a2c3b-9e7d-4e81-8744-bcaeca5db4bd\") " pod="openstack/nova-cell1-conductor-0" Jan 22 09:23:08 crc kubenswrapper[4811]: I0122 09:23:08.914423 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-fpztn" Jan 22 09:23:08 crc kubenswrapper[4811]: I0122 09:23:08.957753 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fm6hb\" (UniqueName: \"kubernetes.io/projected/60d099f9-c44f-4d8c-9983-d478c424eff9-kube-api-access-fm6hb\") pod \"60d099f9-c44f-4d8c-9983-d478c424eff9\" (UID: \"60d099f9-c44f-4d8c-9983-d478c424eff9\") " Jan 22 09:23:08 crc kubenswrapper[4811]: I0122 09:23:08.957861 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60d099f9-c44f-4d8c-9983-d478c424eff9-scripts\") pod \"60d099f9-c44f-4d8c-9983-d478c424eff9\" (UID: \"60d099f9-c44f-4d8c-9983-d478c424eff9\") " Jan 22 09:23:08 crc kubenswrapper[4811]: I0122 09:23:08.958016 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60d099f9-c44f-4d8c-9983-d478c424eff9-config-data\") pod \"60d099f9-c44f-4d8c-9983-d478c424eff9\" (UID: \"60d099f9-c44f-4d8c-9983-d478c424eff9\") " Jan 22 09:23:08 crc kubenswrapper[4811]: I0122 09:23:08.958090 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60d099f9-c44f-4d8c-9983-d478c424eff9-combined-ca-bundle\") pod \"60d099f9-c44f-4d8c-9983-d478c424eff9\" (UID: \"60d099f9-c44f-4d8c-9983-d478c424eff9\") " Jan 22 09:23:08 crc kubenswrapper[4811]: I0122 09:23:08.968212 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60d099f9-c44f-4d8c-9983-d478c424eff9-scripts" (OuterVolumeSpecName: "scripts") pod "60d099f9-c44f-4d8c-9983-d478c424eff9" (UID: "60d099f9-c44f-4d8c-9983-d478c424eff9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:23:08 crc kubenswrapper[4811]: I0122 09:23:08.971718 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60d099f9-c44f-4d8c-9983-d478c424eff9-kube-api-access-fm6hb" (OuterVolumeSpecName: "kube-api-access-fm6hb") pod "60d099f9-c44f-4d8c-9983-d478c424eff9" (UID: "60d099f9-c44f-4d8c-9983-d478c424eff9"). InnerVolumeSpecName "kube-api-access-fm6hb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:23:08 crc kubenswrapper[4811]: I0122 09:23:08.987267 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60d099f9-c44f-4d8c-9983-d478c424eff9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60d099f9-c44f-4d8c-9983-d478c424eff9" (UID: "60d099f9-c44f-4d8c-9983-d478c424eff9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:23:08 crc kubenswrapper[4811]: I0122 09:23:08.990231 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60d099f9-c44f-4d8c-9983-d478c424eff9-config-data" (OuterVolumeSpecName: "config-data") pod "60d099f9-c44f-4d8c-9983-d478c424eff9" (UID: "60d099f9-c44f-4d8c-9983-d478c424eff9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:23:09 crc kubenswrapper[4811]: I0122 09:23:09.006366 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 22 09:23:09 crc kubenswrapper[4811]: I0122 09:23:09.060820 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fm6hb\" (UniqueName: \"kubernetes.io/projected/60d099f9-c44f-4d8c-9983-d478c424eff9-kube-api-access-fm6hb\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:09 crc kubenswrapper[4811]: I0122 09:23:09.060850 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60d099f9-c44f-4d8c-9983-d478c424eff9-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:09 crc kubenswrapper[4811]: I0122 09:23:09.060861 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60d099f9-c44f-4d8c-9983-d478c424eff9-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:09 crc kubenswrapper[4811]: I0122 09:23:09.060871 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60d099f9-c44f-4d8c-9983-d478c424eff9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:09 crc kubenswrapper[4811]: I0122 09:23:09.390666 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 22 09:23:09 crc kubenswrapper[4811]: W0122 09:23:09.399919 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf96a2c3b_9e7d_4e81_8744_bcaeca5db4bd.slice/crio-f4a26b69c6f83ada38724e2147ed9f35da05a6f5a806ae848db3fa9c51d579e6 WatchSource:0}: Error finding container f4a26b69c6f83ada38724e2147ed9f35da05a6f5a806ae848db3fa9c51d579e6: Status 404 returned error can't find the container with id f4a26b69c6f83ada38724e2147ed9f35da05a6f5a806ae848db3fa9c51d579e6 Jan 22 09:23:09 crc kubenswrapper[4811]: I0122 09:23:09.563199 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-fpztn" Jan 22 09:23:09 crc kubenswrapper[4811]: I0122 09:23:09.563195 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-fpztn" event={"ID":"60d099f9-c44f-4d8c-9983-d478c424eff9","Type":"ContainerDied","Data":"f46cd10d4d0b7ae8040deb8ee8623d79278374cfb91aba93b4940be23c6b2ab7"} Jan 22 09:23:09 crc kubenswrapper[4811]: I0122 09:23:09.563523 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f46cd10d4d0b7ae8040deb8ee8623d79278374cfb91aba93b4940be23c6b2ab7" Jan 22 09:23:09 crc kubenswrapper[4811]: I0122 09:23:09.564979 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f96a2c3b-9e7d-4e81-8744-bcaeca5db4bd","Type":"ContainerStarted","Data":"caabe2884d4e1f3658a12ae0c73061a67b478ebb268ab09307ef1cc5d1e0560e"} Jan 22 09:23:09 crc kubenswrapper[4811]: I0122 09:23:09.565022 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f96a2c3b-9e7d-4e81-8744-bcaeca5db4bd","Type":"ContainerStarted","Data":"f4a26b69c6f83ada38724e2147ed9f35da05a6f5a806ae848db3fa9c51d579e6"} Jan 22 09:23:09 crc kubenswrapper[4811]: I0122 09:23:09.592338 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=1.592300522 podStartE2EDuration="1.592300522s" podCreationTimestamp="2026-01-22 09:23:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:23:09.585847201 +0000 UTC m=+1033.908034325" watchObservedRunningTime="2026-01-22 09:23:09.592300522 +0000 UTC m=+1033.914487645" Jan 22 09:23:09 crc kubenswrapper[4811]: I0122 09:23:09.607620 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 22 09:23:09 crc kubenswrapper[4811]: I0122 09:23:09.607679 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 22 09:23:09 crc kubenswrapper[4811]: I0122 09:23:09.620907 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 22 09:23:09 crc kubenswrapper[4811]: I0122 09:23:09.649113 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 22 09:23:09 crc kubenswrapper[4811]: I0122 09:23:09.735607 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 22 09:23:09 crc kubenswrapper[4811]: I0122 09:23:09.761213 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 09:23:09 crc kubenswrapper[4811]: I0122 09:23:09.934761 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59fd54bbff-sw5lk" Jan 22 09:23:09 crc kubenswrapper[4811]: I0122 09:23:09.989447 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7474d577dc-bfmkd"] Jan 22 09:23:09 crc kubenswrapper[4811]: I0122 09:23:09.989702 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7474d577dc-bfmkd" podUID="1831d8da-ec75-458b-b289-5119052f2216" containerName="dnsmasq-dns" containerID="cri-o://f0ee131d5ea82416c65eff726e2aa48d9cf15c6bf1e82292111b464aedb62aa2" gracePeriod=10 Jan 22 09:23:10 crc kubenswrapper[4811]: I0122 09:23:10.443731 4811 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7474d577dc-bfmkd" Jan 22 09:23:10 crc kubenswrapper[4811]: I0122 09:23:10.485635 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1831d8da-ec75-458b-b289-5119052f2216-dns-svc\") pod \"1831d8da-ec75-458b-b289-5119052f2216\" (UID: \"1831d8da-ec75-458b-b289-5119052f2216\") " Jan 22 09:23:10 crc kubenswrapper[4811]: I0122 09:23:10.485699 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1831d8da-ec75-458b-b289-5119052f2216-config\") pod \"1831d8da-ec75-458b-b289-5119052f2216\" (UID: \"1831d8da-ec75-458b-b289-5119052f2216\") " Jan 22 09:23:10 crc kubenswrapper[4811]: I0122 09:23:10.485792 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wrz6\" (UniqueName: \"kubernetes.io/projected/1831d8da-ec75-458b-b289-5119052f2216-kube-api-access-9wrz6\") pod \"1831d8da-ec75-458b-b289-5119052f2216\" (UID: \"1831d8da-ec75-458b-b289-5119052f2216\") " Jan 22 09:23:10 crc kubenswrapper[4811]: I0122 09:23:10.485813 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1831d8da-ec75-458b-b289-5119052f2216-ovsdbserver-nb\") pod \"1831d8da-ec75-458b-b289-5119052f2216\" (UID: \"1831d8da-ec75-458b-b289-5119052f2216\") " Jan 22 09:23:10 crc kubenswrapper[4811]: I0122 09:23:10.495730 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1831d8da-ec75-458b-b289-5119052f2216-kube-api-access-9wrz6" (OuterVolumeSpecName: "kube-api-access-9wrz6") pod "1831d8da-ec75-458b-b289-5119052f2216" (UID: "1831d8da-ec75-458b-b289-5119052f2216"). InnerVolumeSpecName "kube-api-access-9wrz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:23:10 crc kubenswrapper[4811]: I0122 09:23:10.555619 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1831d8da-ec75-458b-b289-5119052f2216-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1831d8da-ec75-458b-b289-5119052f2216" (UID: "1831d8da-ec75-458b-b289-5119052f2216"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:23:10 crc kubenswrapper[4811]: I0122 09:23:10.556385 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1831d8da-ec75-458b-b289-5119052f2216-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1831d8da-ec75-458b-b289-5119052f2216" (UID: "1831d8da-ec75-458b-b289-5119052f2216"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:23:10 crc kubenswrapper[4811]: I0122 09:23:10.576204 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1831d8da-ec75-458b-b289-5119052f2216-config" (OuterVolumeSpecName: "config") pod "1831d8da-ec75-458b-b289-5119052f2216" (UID: "1831d8da-ec75-458b-b289-5119052f2216"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:23:10 crc kubenswrapper[4811]: I0122 09:23:10.577931 4811 generic.go:334] "Generic (PLEG): container finished" podID="1831d8da-ec75-458b-b289-5119052f2216" containerID="f0ee131d5ea82416c65eff726e2aa48d9cf15c6bf1e82292111b464aedb62aa2" exitCode=0 Jan 22 09:23:10 crc kubenswrapper[4811]: I0122 09:23:10.578886 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7474d577dc-bfmkd" Jan 22 09:23:10 crc kubenswrapper[4811]: I0122 09:23:10.578958 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7474d577dc-bfmkd" event={"ID":"1831d8da-ec75-458b-b289-5119052f2216","Type":"ContainerDied","Data":"f0ee131d5ea82416c65eff726e2aa48d9cf15c6bf1e82292111b464aedb62aa2"} Jan 22 09:23:10 crc kubenswrapper[4811]: I0122 09:23:10.578990 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7474d577dc-bfmkd" event={"ID":"1831d8da-ec75-458b-b289-5119052f2216","Type":"ContainerDied","Data":"076d5c248a8628587cbe1a9dff8a8e8d8262a394ba5d30aa689dc714619c46e4"} Jan 22 09:23:10 crc kubenswrapper[4811]: I0122 09:23:10.579007 4811 scope.go:117] "RemoveContainer" containerID="f0ee131d5ea82416c65eff726e2aa48d9cf15c6bf1e82292111b464aedb62aa2" Jan 22 09:23:10 crc kubenswrapper[4811]: I0122 09:23:10.579587 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 22 09:23:10 crc kubenswrapper[4811]: I0122 09:23:10.579783 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="87478694-0c10-49d8-9d91-1ce054e05dee" containerName="nova-api-log" containerID="cri-o://c6f46f723dfc1c4db2b076b15b15bc4a6c65711e111b533da910cc2f0adcd49e" gracePeriod=30 Jan 22 09:23:10 crc kubenswrapper[4811]: I0122 09:23:10.579903 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="87478694-0c10-49d8-9d91-1ce054e05dee" containerName="nova-api-api" containerID="cri-o://e8a533ac5c851541b1154b577802f110cf957e4e04d0f8783b10990f745a6acf" gracePeriod=30 Jan 22 09:23:10 crc kubenswrapper[4811]: I0122 09:23:10.583846 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="87478694-0c10-49d8-9d91-1ce054e05dee" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.172:8774/\": EOF" Jan 22 09:23:10 crc kubenswrapper[4811]: I0122 09:23:10.586889 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="87478694-0c10-49d8-9d91-1ce054e05dee" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.172:8774/\": EOF" Jan 22 09:23:10 crc kubenswrapper[4811]: I0122 09:23:10.587942 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1831d8da-ec75-458b-b289-5119052f2216-ovsdbserver-sb\") pod \"1831d8da-ec75-458b-b289-5119052f2216\" (UID: \"1831d8da-ec75-458b-b289-5119052f2216\") " Jan 22 09:23:10 crc kubenswrapper[4811]: I0122 09:23:10.588718 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wrz6\" (UniqueName: \"kubernetes.io/projected/1831d8da-ec75-458b-b289-5119052f2216-kube-api-access-9wrz6\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:10 crc kubenswrapper[4811]: I0122 09:23:10.588823 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/1831d8da-ec75-458b-b289-5119052f2216-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:10 crc kubenswrapper[4811]: I0122 09:23:10.588903 4811 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1831d8da-ec75-458b-b289-5119052f2216-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:10 crc kubenswrapper[4811]: I0122 09:23:10.588979 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1831d8da-ec75-458b-b289-5119052f2216-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:10 crc kubenswrapper[4811]: I0122 09:23:10.609395 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 22 09:23:10 crc kubenswrapper[4811]: I0122 09:23:10.611416 4811 scope.go:117] "RemoveContainer" containerID="b3ac66eef8f8810a19ec0a0c5e9a3e7d603561b5b0feefdbe0d94e51bc8ae371" Jan 22 09:23:10 crc kubenswrapper[4811]: I0122 09:23:10.630610 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 22 09:23:10 crc kubenswrapper[4811]: I0122 09:23:10.658700 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1831d8da-ec75-458b-b289-5119052f2216-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1831d8da-ec75-458b-b289-5119052f2216" (UID: "1831d8da-ec75-458b-b289-5119052f2216"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:23:10 crc kubenswrapper[4811]: I0122 09:23:10.673833 4811 scope.go:117] "RemoveContainer" containerID="f0ee131d5ea82416c65eff726e2aa48d9cf15c6bf1e82292111b464aedb62aa2" Jan 22 09:23:10 crc kubenswrapper[4811]: E0122 09:23:10.674610 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0ee131d5ea82416c65eff726e2aa48d9cf15c6bf1e82292111b464aedb62aa2\": container with ID starting with f0ee131d5ea82416c65eff726e2aa48d9cf15c6bf1e82292111b464aedb62aa2 not found: ID does not exist" containerID="f0ee131d5ea82416c65eff726e2aa48d9cf15c6bf1e82292111b464aedb62aa2" Jan 22 09:23:10 crc kubenswrapper[4811]: I0122 09:23:10.674675 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0ee131d5ea82416c65eff726e2aa48d9cf15c6bf1e82292111b464aedb62aa2"} err="failed to get container status \"f0ee131d5ea82416c65eff726e2aa48d9cf15c6bf1e82292111b464aedb62aa2\": rpc error: code = NotFound desc = could not find container \"f0ee131d5ea82416c65eff726e2aa48d9cf15c6bf1e82292111b464aedb62aa2\": container with ID starting with f0ee131d5ea82416c65eff726e2aa48d9cf15c6bf1e82292111b464aedb62aa2 not found: ID does not exist" Jan 22 09:23:10 crc kubenswrapper[4811]: I0122 09:23:10.674697 4811 scope.go:117] "RemoveContainer" containerID="b3ac66eef8f8810a19ec0a0c5e9a3e7d603561b5b0feefdbe0d94e51bc8ae371" Jan 22 09:23:10 crc kubenswrapper[4811]: E0122 09:23:10.674943 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3ac66eef8f8810a19ec0a0c5e9a3e7d603561b5b0feefdbe0d94e51bc8ae371\": container with ID starting with b3ac66eef8f8810a19ec0a0c5e9a3e7d603561b5b0feefdbe0d94e51bc8ae371 not found: ID does not exist" containerID="b3ac66eef8f8810a19ec0a0c5e9a3e7d603561b5b0feefdbe0d94e51bc8ae371" Jan 22 09:23:10 crc kubenswrapper[4811]: I0122 09:23:10.674968 4811 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3ac66eef8f8810a19ec0a0c5e9a3e7d603561b5b0feefdbe0d94e51bc8ae371"} err="failed to get container status \"b3ac66eef8f8810a19ec0a0c5e9a3e7d603561b5b0feefdbe0d94e51bc8ae371\": rpc error: code = NotFound desc = could not find container \"b3ac66eef8f8810a19ec0a0c5e9a3e7d603561b5b0feefdbe0d94e51bc8ae371\": container with ID starting with b3ac66eef8f8810a19ec0a0c5e9a3e7d603561b5b0feefdbe0d94e51bc8ae371 not found: ID does not exist" Jan 22 09:23:10 crc kubenswrapper[4811]: I0122 09:23:10.692778 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1831d8da-ec75-458b-b289-5119052f2216-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:10 crc kubenswrapper[4811]: I0122 09:23:10.909922 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7474d577dc-bfmkd"] Jan 22 09:23:10 crc kubenswrapper[4811]: I0122 09:23:10.922267 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7474d577dc-bfmkd"] Jan 22 09:23:11 crc kubenswrapper[4811]: I0122 09:23:11.599858 4811 generic.go:334] "Generic (PLEG): container finished" podID="87478694-0c10-49d8-9d91-1ce054e05dee" containerID="c6f46f723dfc1c4db2b076b15b15bc4a6c65711e111b533da910cc2f0adcd49e" exitCode=143 Jan 22 09:23:11 crc kubenswrapper[4811]: I0122 09:23:11.600524 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"87478694-0c10-49d8-9d91-1ce054e05dee","Type":"ContainerDied","Data":"c6f46f723dfc1c4db2b076b15b15bc4a6c65711e111b533da910cc2f0adcd49e"} Jan 22 09:23:11 crc kubenswrapper[4811]: I0122 09:23:11.602022 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="dfebf11f-55b3-459e-b0c3-36c20de9bd1d" containerName="nova-scheduler-scheduler" containerID="cri-o://5ce04cee941d90f6265e592f8e7f1a276f181236771e8d62d505152c02b5bf28" gracePeriod=30 Jan 22 09:23:12 crc kubenswrapper[4811]: I0122 09:23:12.000075 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1831d8da-ec75-458b-b289-5119052f2216" path="/var/lib/kubelet/pods/1831d8da-ec75-458b-b289-5119052f2216/volumes" Jan 22 09:23:13 crc kubenswrapper[4811]: I0122 09:23:13.570658 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 09:23:13 crc kubenswrapper[4811]: I0122 09:23:13.570907 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="e8d5030f-13fb-403c-9d6d-e9d87f27800f" containerName="kube-state-metrics" containerID="cri-o://224cfb4359b41b0f0bd2110a732e39b7aed6ee95383fdf01c01a1c6ef1567804" gracePeriod=30 Jan 22 09:23:13 crc kubenswrapper[4811]: I0122 09:23:13.974157 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.053257 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.152248 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xz9pf\" (UniqueName: \"kubernetes.io/projected/e8d5030f-13fb-403c-9d6d-e9d87f27800f-kube-api-access-xz9pf\") pod \"e8d5030f-13fb-403c-9d6d-e9d87f27800f\" (UID: \"e8d5030f-13fb-403c-9d6d-e9d87f27800f\") " Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.153666 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p7zqj" Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.153908 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p7zqj" Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.157275 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8d5030f-13fb-403c-9d6d-e9d87f27800f-kube-api-access-xz9pf" (OuterVolumeSpecName: "kube-api-access-xz9pf") pod "e8d5030f-13fb-403c-9d6d-e9d87f27800f" (UID: "e8d5030f-13fb-403c-9d6d-e9d87f27800f"). InnerVolumeSpecName "kube-api-access-xz9pf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.189381 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p7zqj" Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.255458 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xz9pf\" (UniqueName: \"kubernetes.io/projected/e8d5030f-13fb-403c-9d6d-e9d87f27800f-kube-api-access-xz9pf\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.496643 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.496877 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e9c1e6c8-785c-4978-8053-23f809789e23" containerName="ceilometer-central-agent" containerID="cri-o://c9ac90b166e61dfffcbbe492fb91e0b8db5ccfc2e6da6dd2002fb66f630a8852" gracePeriod=30 Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.496993 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e9c1e6c8-785c-4978-8053-23f809789e23" containerName="proxy-httpd" containerID="cri-o://064c4e3a661f91b4a6e7a9c49315caef2ae118d01e452d7ce04335343afdcf1a" gracePeriod=30 Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.497039 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e9c1e6c8-785c-4978-8053-23f809789e23" containerName="sg-core" containerID="cri-o://c2b2c19cf1f5b522633e4d88febe0d8198acff87ec9a4995871dc26e41e9b6e2" gracePeriod=30 Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.497072 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e9c1e6c8-785c-4978-8053-23f809789e23" containerName="ceilometer-notification-agent" containerID="cri-o://89381f32d9ce0812b9b272069e646036b69b498ed584a8a30ce2da796e494565" gracePeriod=30 Jan 22 09:23:14 crc kubenswrapper[4811]: E0122 09:23:14.621964 4811 log.go:32] 
"ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5ce04cee941d90f6265e592f8e7f1a276f181236771e8d62d505152c02b5bf28" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 22 09:23:14 crc kubenswrapper[4811]: E0122 09:23:14.623155 4811 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5ce04cee941d90f6265e592f8e7f1a276f181236771e8d62d505152c02b5bf28" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 22 09:23:14 crc kubenswrapper[4811]: E0122 09:23:14.624272 4811 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5ce04cee941d90f6265e592f8e7f1a276f181236771e8d62d505152c02b5bf28" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 22 09:23:14 crc kubenswrapper[4811]: E0122 09:23:14.624308 4811 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="dfebf11f-55b3-459e-b0c3-36c20de9bd1d" containerName="nova-scheduler-scheduler" Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.633399 4811 generic.go:334] "Generic (PLEG): container finished" podID="e9c1e6c8-785c-4978-8053-23f809789e23" containerID="064c4e3a661f91b4a6e7a9c49315caef2ae118d01e452d7ce04335343afdcf1a" exitCode=0 Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.633425 4811 generic.go:334] "Generic (PLEG): container finished" podID="e9c1e6c8-785c-4978-8053-23f809789e23" containerID="c2b2c19cf1f5b522633e4d88febe0d8198acff87ec9a4995871dc26e41e9b6e2" exitCode=2 Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.633469 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e9c1e6c8-785c-4978-8053-23f809789e23","Type":"ContainerDied","Data":"064c4e3a661f91b4a6e7a9c49315caef2ae118d01e452d7ce04335343afdcf1a"} Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.633501 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e9c1e6c8-785c-4978-8053-23f809789e23","Type":"ContainerDied","Data":"c2b2c19cf1f5b522633e4d88febe0d8198acff87ec9a4995871dc26e41e9b6e2"} Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.635474 4811 generic.go:334] "Generic (PLEG): container finished" podID="e8d5030f-13fb-403c-9d6d-e9d87f27800f" containerID="224cfb4359b41b0f0bd2110a732e39b7aed6ee95383fdf01c01a1c6ef1567804" exitCode=2 Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.635523 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.635556 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e8d5030f-13fb-403c-9d6d-e9d87f27800f","Type":"ContainerDied","Data":"224cfb4359b41b0f0bd2110a732e39b7aed6ee95383fdf01c01a1c6ef1567804"} Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.635605 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e8d5030f-13fb-403c-9d6d-e9d87f27800f","Type":"ContainerDied","Data":"5dbde7a33fa95447df89da812098341e0df99f6008d808f3350ad11ac16369ce"} Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.635665 4811 scope.go:117] "RemoveContainer" containerID="224cfb4359b41b0f0bd2110a732e39b7aed6ee95383fdf01c01a1c6ef1567804" Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.662050 4811 scope.go:117] "RemoveContainer" containerID="224cfb4359b41b0f0bd2110a732e39b7aed6ee95383fdf01c01a1c6ef1567804" Jan 22 09:23:14 crc kubenswrapper[4811]: E0122 09:23:14.665391 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"224cfb4359b41b0f0bd2110a732e39b7aed6ee95383fdf01c01a1c6ef1567804\": container with ID starting with 224cfb4359b41b0f0bd2110a732e39b7aed6ee95383fdf01c01a1c6ef1567804 not found: ID does not exist" containerID="224cfb4359b41b0f0bd2110a732e39b7aed6ee95383fdf01c01a1c6ef1567804" Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.665420 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"224cfb4359b41b0f0bd2110a732e39b7aed6ee95383fdf01c01a1c6ef1567804"} err="failed to get container status \"224cfb4359b41b0f0bd2110a732e39b7aed6ee95383fdf01c01a1c6ef1567804\": rpc error: code = NotFound desc = could not find container \"224cfb4359b41b0f0bd2110a732e39b7aed6ee95383fdf01c01a1c6ef1567804\": container with ID starting with 224cfb4359b41b0f0bd2110a732e39b7aed6ee95383fdf01c01a1c6ef1567804 not found: ID does not exist" Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.673894 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.683684 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.690985 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 09:23:14 crc kubenswrapper[4811]: E0122 09:23:14.691706 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1831d8da-ec75-458b-b289-5119052f2216" containerName="init" Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.691735 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="1831d8da-ec75-458b-b289-5119052f2216" containerName="init" Jan 22 09:23:14 crc kubenswrapper[4811]: E0122 09:23:14.691769 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60d099f9-c44f-4d8c-9983-d478c424eff9" containerName="nova-manage" Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.691778 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="60d099f9-c44f-4d8c-9983-d478c424eff9" containerName="nova-manage" Jan 22 09:23:14 crc kubenswrapper[4811]: E0122 09:23:14.691797 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1831d8da-ec75-458b-b289-5119052f2216" containerName="dnsmasq-dns" Jan 22 09:23:14 crc 
kubenswrapper[4811]: I0122 09:23:14.691818 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="1831d8da-ec75-458b-b289-5119052f2216" containerName="dnsmasq-dns" Jan 22 09:23:14 crc kubenswrapper[4811]: E0122 09:23:14.691844 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8d5030f-13fb-403c-9d6d-e9d87f27800f" containerName="kube-state-metrics" Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.691851 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8d5030f-13fb-403c-9d6d-e9d87f27800f" containerName="kube-state-metrics" Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.692139 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8d5030f-13fb-403c-9d6d-e9d87f27800f" containerName="kube-state-metrics" Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.692161 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="1831d8da-ec75-458b-b289-5119052f2216" containerName="dnsmasq-dns" Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.692197 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="60d099f9-c44f-4d8c-9983-d478c424eff9" containerName="nova-manage" Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.693315 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.696995 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.698892 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.699080 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.733806 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p7zqj" Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.782233 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p7zqj"] Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.863187 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/8b61b8e9-fda4-46d3-a494-f3e804e7f4d4-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"8b61b8e9-fda4-46d3-a494-f3e804e7f4d4\") " pod="openstack/kube-state-metrics-0" Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.863250 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b61b8e9-fda4-46d3-a494-f3e804e7f4d4-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"8b61b8e9-fda4-46d3-a494-f3e804e7f4d4\") " pod="openstack/kube-state-metrics-0" Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.863273 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6phfh\" (UniqueName: \"kubernetes.io/projected/8b61b8e9-fda4-46d3-a494-f3e804e7f4d4-kube-api-access-6phfh\") pod \"kube-state-metrics-0\" (UID: \"8b61b8e9-fda4-46d3-a494-f3e804e7f4d4\") " pod="openstack/kube-state-metrics-0" Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.863576 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b61b8e9-fda4-46d3-a494-f3e804e7f4d4-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"8b61b8e9-fda4-46d3-a494-f3e804e7f4d4\") " pod="openstack/kube-state-metrics-0" Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.965381 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/8b61b8e9-fda4-46d3-a494-f3e804e7f4d4-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"8b61b8e9-fda4-46d3-a494-f3e804e7f4d4\") " pod="openstack/kube-state-metrics-0" Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.965432 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b61b8e9-fda4-46d3-a494-f3e804e7f4d4-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"8b61b8e9-fda4-46d3-a494-f3e804e7f4d4\") " pod="openstack/kube-state-metrics-0" Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.965454 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6phfh\" (UniqueName: \"kubernetes.io/projected/8b61b8e9-fda4-46d3-a494-f3e804e7f4d4-kube-api-access-6phfh\") pod \"kube-state-metrics-0\" (UID: \"8b61b8e9-fda4-46d3-a494-f3e804e7f4d4\") " pod="openstack/kube-state-metrics-0" Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.965500 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b61b8e9-fda4-46d3-a494-f3e804e7f4d4-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"8b61b8e9-fda4-46d3-a494-f3e804e7f4d4\") " pod="openstack/kube-state-metrics-0" Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.970867 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b61b8e9-fda4-46d3-a494-f3e804e7f4d4-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"8b61b8e9-fda4-46d3-a494-f3e804e7f4d4\") " pod="openstack/kube-state-metrics-0" Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.971335 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/8b61b8e9-fda4-46d3-a494-f3e804e7f4d4-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"8b61b8e9-fda4-46d3-a494-f3e804e7f4d4\") " pod="openstack/kube-state-metrics-0" Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.978391 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b61b8e9-fda4-46d3-a494-f3e804e7f4d4-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"8b61b8e9-fda4-46d3-a494-f3e804e7f4d4\") " pod="openstack/kube-state-metrics-0" Jan 22 09:23:14 crc kubenswrapper[4811]: I0122 09:23:14.988288 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6phfh\" (UniqueName: \"kubernetes.io/projected/8b61b8e9-fda4-46d3-a494-f3e804e7f4d4-kube-api-access-6phfh\") pod \"kube-state-metrics-0\" (UID: \"8b61b8e9-fda4-46d3-a494-f3e804e7f4d4\") " pod="openstack/kube-state-metrics-0" Jan 22 09:23:15 crc kubenswrapper[4811]: I0122 09:23:15.039070 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 22 09:23:15 crc kubenswrapper[4811]: I0122 09:23:15.437617 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 09:23:15 crc kubenswrapper[4811]: I0122 09:23:15.451687 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 09:23:15 crc kubenswrapper[4811]: W0122 09:23:15.459328 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b61b8e9_fda4_46d3_a494_f3e804e7f4d4.slice/crio-4f6175f5ffc1d00e9239c370425f4a442ac002ca6e918795a9bf1a801b11c70f WatchSource:0}: Error finding container 4f6175f5ffc1d00e9239c370425f4a442ac002ca6e918795a9bf1a801b11c70f: Status 404 returned error can't find the container with id 4f6175f5ffc1d00e9239c370425f4a442ac002ca6e918795a9bf1a801b11c70f Jan 22 09:23:15 crc kubenswrapper[4811]: I0122 09:23:15.474703 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vwhq\" (UniqueName: \"kubernetes.io/projected/dfebf11f-55b3-459e-b0c3-36c20de9bd1d-kube-api-access-7vwhq\") pod \"dfebf11f-55b3-459e-b0c3-36c20de9bd1d\" (UID: \"dfebf11f-55b3-459e-b0c3-36c20de9bd1d\") " Jan 22 09:23:15 crc kubenswrapper[4811]: I0122 09:23:15.475015 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfebf11f-55b3-459e-b0c3-36c20de9bd1d-config-data\") pod \"dfebf11f-55b3-459e-b0c3-36c20de9bd1d\" (UID: \"dfebf11f-55b3-459e-b0c3-36c20de9bd1d\") " Jan 22 09:23:15 crc kubenswrapper[4811]: I0122 09:23:15.475084 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfebf11f-55b3-459e-b0c3-36c20de9bd1d-combined-ca-bundle\") pod \"dfebf11f-55b3-459e-b0c3-36c20de9bd1d\" (UID: \"dfebf11f-55b3-459e-b0c3-36c20de9bd1d\") " Jan 22 09:23:15 crc kubenswrapper[4811]: I0122 09:23:15.480736 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfebf11f-55b3-459e-b0c3-36c20de9bd1d-kube-api-access-7vwhq" (OuterVolumeSpecName: "kube-api-access-7vwhq") pod "dfebf11f-55b3-459e-b0c3-36c20de9bd1d" (UID: "dfebf11f-55b3-459e-b0c3-36c20de9bd1d"). InnerVolumeSpecName "kube-api-access-7vwhq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:23:15 crc kubenswrapper[4811]: I0122 09:23:15.497322 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfebf11f-55b3-459e-b0c3-36c20de9bd1d-config-data" (OuterVolumeSpecName: "config-data") pod "dfebf11f-55b3-459e-b0c3-36c20de9bd1d" (UID: "dfebf11f-55b3-459e-b0c3-36c20de9bd1d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:23:15 crc kubenswrapper[4811]: I0122 09:23:15.497470 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfebf11f-55b3-459e-b0c3-36c20de9bd1d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dfebf11f-55b3-459e-b0c3-36c20de9bd1d" (UID: "dfebf11f-55b3-459e-b0c3-36c20de9bd1d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:23:15 crc kubenswrapper[4811]: I0122 09:23:15.576445 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfebf11f-55b3-459e-b0c3-36c20de9bd1d-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:15 crc kubenswrapper[4811]: I0122 09:23:15.576479 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfebf11f-55b3-459e-b0c3-36c20de9bd1d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:15 crc kubenswrapper[4811]: I0122 09:23:15.576490 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vwhq\" (UniqueName: \"kubernetes.io/projected/dfebf11f-55b3-459e-b0c3-36c20de9bd1d-kube-api-access-7vwhq\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:15 crc kubenswrapper[4811]: I0122 09:23:15.646591 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"8b61b8e9-fda4-46d3-a494-f3e804e7f4d4","Type":"ContainerStarted","Data":"4f6175f5ffc1d00e9239c370425f4a442ac002ca6e918795a9bf1a801b11c70f"} Jan 22 09:23:15 crc kubenswrapper[4811]: I0122 09:23:15.652431 4811 generic.go:334] "Generic (PLEG): container finished" podID="e9c1e6c8-785c-4978-8053-23f809789e23" containerID="c9ac90b166e61dfffcbbe492fb91e0b8db5ccfc2e6da6dd2002fb66f630a8852" exitCode=0 Jan 22 09:23:15 crc kubenswrapper[4811]: I0122 09:23:15.652519 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e9c1e6c8-785c-4978-8053-23f809789e23","Type":"ContainerDied","Data":"c9ac90b166e61dfffcbbe492fb91e0b8db5ccfc2e6da6dd2002fb66f630a8852"} Jan 22 09:23:15 crc kubenswrapper[4811]: I0122 09:23:15.671194 4811 generic.go:334] "Generic (PLEG): container finished" podID="dfebf11f-55b3-459e-b0c3-36c20de9bd1d" containerID="5ce04cee941d90f6265e592f8e7f1a276f181236771e8d62d505152c02b5bf28" exitCode=0 Jan 22 09:23:15 crc kubenswrapper[4811]: I0122 09:23:15.672361 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 09:23:15 crc kubenswrapper[4811]: I0122 09:23:15.675354 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"dfebf11f-55b3-459e-b0c3-36c20de9bd1d","Type":"ContainerDied","Data":"5ce04cee941d90f6265e592f8e7f1a276f181236771e8d62d505152c02b5bf28"} Jan 22 09:23:15 crc kubenswrapper[4811]: I0122 09:23:15.675683 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"dfebf11f-55b3-459e-b0c3-36c20de9bd1d","Type":"ContainerDied","Data":"c2f632bec9dc761f8db5c7a671588935e6b22b55e9770627951d04e2f396d209"} Jan 22 09:23:15 crc kubenswrapper[4811]: I0122 09:23:15.675702 4811 scope.go:117] "RemoveContainer" containerID="5ce04cee941d90f6265e592f8e7f1a276f181236771e8d62d505152c02b5bf28" Jan 22 09:23:15 crc kubenswrapper[4811]: I0122 09:23:15.782696 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 09:23:15 crc kubenswrapper[4811]: I0122 09:23:15.826851 4811 scope.go:117] "RemoveContainer" containerID="5ce04cee941d90f6265e592f8e7f1a276f181236771e8d62d505152c02b5bf28" Jan 22 09:23:15 crc kubenswrapper[4811]: I0122 09:23:15.837897 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 09:23:15 crc kubenswrapper[4811]: E0122 09:23:15.840415 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ce04cee941d90f6265e592f8e7f1a276f181236771e8d62d505152c02b5bf28\": container with ID starting with 5ce04cee941d90f6265e592f8e7f1a276f181236771e8d62d505152c02b5bf28 not found: ID does not exist" containerID="5ce04cee941d90f6265e592f8e7f1a276f181236771e8d62d505152c02b5bf28" Jan 22 09:23:15 crc kubenswrapper[4811]: I0122 09:23:15.840463 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ce04cee941d90f6265e592f8e7f1a276f181236771e8d62d505152c02b5bf28"} err="failed to get container status \"5ce04cee941d90f6265e592f8e7f1a276f181236771e8d62d505152c02b5bf28\": rpc error: code = NotFound desc = could not find container \"5ce04cee941d90f6265e592f8e7f1a276f181236771e8d62d505152c02b5bf28\": container with ID starting with 5ce04cee941d90f6265e592f8e7f1a276f181236771e8d62d505152c02b5bf28 not found: ID does not exist" Jan 22 09:23:15 crc kubenswrapper[4811]: I0122 09:23:15.860405 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 09:23:15 crc kubenswrapper[4811]: E0122 09:23:15.860867 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfebf11f-55b3-459e-b0c3-36c20de9bd1d" containerName="nova-scheduler-scheduler" Jan 22 09:23:15 crc kubenswrapper[4811]: I0122 09:23:15.860891 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfebf11f-55b3-459e-b0c3-36c20de9bd1d" containerName="nova-scheduler-scheduler" Jan 22 09:23:15 crc kubenswrapper[4811]: I0122 09:23:15.861110 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfebf11f-55b3-459e-b0c3-36c20de9bd1d" containerName="nova-scheduler-scheduler" Jan 22 09:23:15 crc kubenswrapper[4811]: I0122 09:23:15.861722 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 09:23:15 crc kubenswrapper[4811]: I0122 09:23:15.881709 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 22 09:23:15 crc kubenswrapper[4811]: I0122 09:23:15.885265 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/738ef216-ebaa-4000-8021-1c3b675abb3a-config-data\") pod \"nova-scheduler-0\" (UID: \"738ef216-ebaa-4000-8021-1c3b675abb3a\") " pod="openstack/nova-scheduler-0" Jan 22 09:23:15 crc kubenswrapper[4811]: I0122 09:23:15.885356 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/738ef216-ebaa-4000-8021-1c3b675abb3a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"738ef216-ebaa-4000-8021-1c3b675abb3a\") " pod="openstack/nova-scheduler-0" Jan 22 09:23:15 crc kubenswrapper[4811]: I0122 09:23:15.885410 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrg9v\" (UniqueName: \"kubernetes.io/projected/738ef216-ebaa-4000-8021-1c3b675abb3a-kube-api-access-vrg9v\") pod \"nova-scheduler-0\" (UID: \"738ef216-ebaa-4000-8021-1c3b675abb3a\") " pod="openstack/nova-scheduler-0" Jan 22 09:23:15 crc kubenswrapper[4811]: I0122 09:23:15.909977 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 09:23:15 crc kubenswrapper[4811]: I0122 09:23:15.988946 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/738ef216-ebaa-4000-8021-1c3b675abb3a-config-data\") pod \"nova-scheduler-0\" (UID: \"738ef216-ebaa-4000-8021-1c3b675abb3a\") " pod="openstack/nova-scheduler-0" Jan 22 09:23:15 crc kubenswrapper[4811]: I0122 09:23:15.989021 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/738ef216-ebaa-4000-8021-1c3b675abb3a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"738ef216-ebaa-4000-8021-1c3b675abb3a\") " pod="openstack/nova-scheduler-0" Jan 22 09:23:15 crc kubenswrapper[4811]: I0122 09:23:15.989069 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrg9v\" (UniqueName: \"kubernetes.io/projected/738ef216-ebaa-4000-8021-1c3b675abb3a-kube-api-access-vrg9v\") pod \"nova-scheduler-0\" (UID: \"738ef216-ebaa-4000-8021-1c3b675abb3a\") " pod="openstack/nova-scheduler-0" Jan 22 09:23:15 crc kubenswrapper[4811]: I0122 09:23:15.995581 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/738ef216-ebaa-4000-8021-1c3b675abb3a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"738ef216-ebaa-4000-8021-1c3b675abb3a\") " pod="openstack/nova-scheduler-0" Jan 22 09:23:16 crc kubenswrapper[4811]: I0122 09:23:16.004096 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/738ef216-ebaa-4000-8021-1c3b675abb3a-config-data\") pod \"nova-scheduler-0\" (UID: \"738ef216-ebaa-4000-8021-1c3b675abb3a\") " pod="openstack/nova-scheduler-0" Jan 22 09:23:16 crc kubenswrapper[4811]: I0122 09:23:16.016387 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrg9v\" (UniqueName: 
\"kubernetes.io/projected/738ef216-ebaa-4000-8021-1c3b675abb3a-kube-api-access-vrg9v\") pod \"nova-scheduler-0\" (UID: \"738ef216-ebaa-4000-8021-1c3b675abb3a\") " pod="openstack/nova-scheduler-0" Jan 22 09:23:16 crc kubenswrapper[4811]: I0122 09:23:16.027588 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfebf11f-55b3-459e-b0c3-36c20de9bd1d" path="/var/lib/kubelet/pods/dfebf11f-55b3-459e-b0c3-36c20de9bd1d/volumes" Jan 22 09:23:16 crc kubenswrapper[4811]: I0122 09:23:16.028201 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8d5030f-13fb-403c-9d6d-e9d87f27800f" path="/var/lib/kubelet/pods/e8d5030f-13fb-403c-9d6d-e9d87f27800f/volumes" Jan 22 09:23:16 crc kubenswrapper[4811]: I0122 09:23:16.212228 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 09:23:16 crc kubenswrapper[4811]: I0122 09:23:16.373401 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 09:23:16 crc kubenswrapper[4811]: I0122 09:23:16.515562 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87478694-0c10-49d8-9d91-1ce054e05dee-logs\") pod \"87478694-0c10-49d8-9d91-1ce054e05dee\" (UID: \"87478694-0c10-49d8-9d91-1ce054e05dee\") " Jan 22 09:23:16 crc kubenswrapper[4811]: I0122 09:23:16.515881 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dblv\" (UniqueName: \"kubernetes.io/projected/87478694-0c10-49d8-9d91-1ce054e05dee-kube-api-access-7dblv\") pod \"87478694-0c10-49d8-9d91-1ce054e05dee\" (UID: \"87478694-0c10-49d8-9d91-1ce054e05dee\") " Jan 22 09:23:16 crc kubenswrapper[4811]: I0122 09:23:16.515905 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87478694-0c10-49d8-9d91-1ce054e05dee-config-data\") pod \"87478694-0c10-49d8-9d91-1ce054e05dee\" (UID: \"87478694-0c10-49d8-9d91-1ce054e05dee\") " Jan 22 09:23:16 crc kubenswrapper[4811]: I0122 09:23:16.515970 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87478694-0c10-49d8-9d91-1ce054e05dee-logs" (OuterVolumeSpecName: "logs") pod "87478694-0c10-49d8-9d91-1ce054e05dee" (UID: "87478694-0c10-49d8-9d91-1ce054e05dee"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:23:16 crc kubenswrapper[4811]: I0122 09:23:16.516025 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87478694-0c10-49d8-9d91-1ce054e05dee-combined-ca-bundle\") pod \"87478694-0c10-49d8-9d91-1ce054e05dee\" (UID: \"87478694-0c10-49d8-9d91-1ce054e05dee\") " Jan 22 09:23:16 crc kubenswrapper[4811]: I0122 09:23:16.518783 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87478694-0c10-49d8-9d91-1ce054e05dee-logs\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:16 crc kubenswrapper[4811]: I0122 09:23:16.521396 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87478694-0c10-49d8-9d91-1ce054e05dee-kube-api-access-7dblv" (OuterVolumeSpecName: "kube-api-access-7dblv") pod "87478694-0c10-49d8-9d91-1ce054e05dee" (UID: "87478694-0c10-49d8-9d91-1ce054e05dee"). InnerVolumeSpecName "kube-api-access-7dblv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:23:16 crc kubenswrapper[4811]: I0122 09:23:16.538559 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87478694-0c10-49d8-9d91-1ce054e05dee-config-data" (OuterVolumeSpecName: "config-data") pod "87478694-0c10-49d8-9d91-1ce054e05dee" (UID: "87478694-0c10-49d8-9d91-1ce054e05dee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:23:16 crc kubenswrapper[4811]: I0122 09:23:16.542817 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87478694-0c10-49d8-9d91-1ce054e05dee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87478694-0c10-49d8-9d91-1ce054e05dee" (UID: "87478694-0c10-49d8-9d91-1ce054e05dee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:23:16 crc kubenswrapper[4811]: I0122 09:23:16.621016 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87478694-0c10-49d8-9d91-1ce054e05dee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:16 crc kubenswrapper[4811]: I0122 09:23:16.621349 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dblv\" (UniqueName: \"kubernetes.io/projected/87478694-0c10-49d8-9d91-1ce054e05dee-kube-api-access-7dblv\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:16 crc kubenswrapper[4811]: I0122 09:23:16.621432 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87478694-0c10-49d8-9d91-1ce054e05dee-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:16 crc kubenswrapper[4811]: I0122 09:23:16.661033 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 09:23:16 crc kubenswrapper[4811]: W0122 09:23:16.663737 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod738ef216_ebaa_4000_8021_1c3b675abb3a.slice/crio-d6777979f00bfb1aba0b48e9a0a87023bfa9fef2d133bb3d6c74b8eacec33ab1 WatchSource:0}: Error finding container d6777979f00bfb1aba0b48e9a0a87023bfa9fef2d133bb3d6c74b8eacec33ab1: Status 404 returned error can't find the container with id d6777979f00bfb1aba0b48e9a0a87023bfa9fef2d133bb3d6c74b8eacec33ab1 Jan 22 09:23:16 crc kubenswrapper[4811]: I0122 09:23:16.680096 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"8b61b8e9-fda4-46d3-a494-f3e804e7f4d4","Type":"ContainerStarted","Data":"b8489f8888d763a8a9bf2a92abc02b921f6344903ff061dc6ae94289e558da73"} Jan 22 09:23:16 crc kubenswrapper[4811]: I0122 09:23:16.688927 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"738ef216-ebaa-4000-8021-1c3b675abb3a","Type":"ContainerStarted","Data":"d6777979f00bfb1aba0b48e9a0a87023bfa9fef2d133bb3d6c74b8eacec33ab1"} Jan 22 09:23:16 crc kubenswrapper[4811]: I0122 09:23:16.690602 4811 generic.go:334] "Generic (PLEG): container finished" podID="87478694-0c10-49d8-9d91-1ce054e05dee" containerID="e8a533ac5c851541b1154b577802f110cf957e4e04d0f8783b10990f745a6acf" exitCode=0 Jan 22 09:23:16 crc kubenswrapper[4811]: I0122 09:23:16.690685 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"87478694-0c10-49d8-9d91-1ce054e05dee","Type":"ContainerDied","Data":"e8a533ac5c851541b1154b577802f110cf957e4e04d0f8783b10990f745a6acf"} Jan 22 09:23:16 crc kubenswrapper[4811]: I0122 09:23:16.690713 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"87478694-0c10-49d8-9d91-1ce054e05dee","Type":"ContainerDied","Data":"765e9bff13f0743c90e3eae425152e767990987dfe85358d1e375b6c9de16084"} Jan 22 09:23:16 crc kubenswrapper[4811]: I0122 09:23:16.690730 4811 scope.go:117] "RemoveContainer" containerID="e8a533ac5c851541b1154b577802f110cf957e4e04d0f8783b10990f745a6acf" Jan 22 09:23:16 crc kubenswrapper[4811]: I0122 09:23:16.690922 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 09:23:16 crc kubenswrapper[4811]: I0122 09:23:16.705021 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p7zqj" podUID="89733fdb-bf6a-42d3-8a8e-da3e2ff86438" containerName="registry-server" containerID="cri-o://096c08bb0503b4fb8e624b1b7c5abd60a9198e30e963666366aef2d291f24d4b" gracePeriod=2 Jan 22 09:23:16 crc kubenswrapper[4811]: I0122 09:23:16.706995 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.42226198 podStartE2EDuration="2.704203703s" podCreationTimestamp="2026-01-22 09:23:14 +0000 UTC" firstStartedPulling="2026-01-22 09:23:15.461975702 +0000 UTC m=+1039.784162824" lastFinishedPulling="2026-01-22 09:23:15.743917424 +0000 UTC m=+1040.066104547" observedRunningTime="2026-01-22 09:23:16.703659055 +0000 UTC m=+1041.025846178" watchObservedRunningTime="2026-01-22 09:23:16.704203703 +0000 UTC m=+1041.026390826" Jan 22 09:23:16 crc kubenswrapper[4811]: I0122 09:23:16.944972 4811 scope.go:117] "RemoveContainer" containerID="c6f46f723dfc1c4db2b076b15b15bc4a6c65711e111b533da910cc2f0adcd49e" Jan 22 09:23:16 crc kubenswrapper[4811]: I0122 09:23:16.945391 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 22 09:23:16 crc kubenswrapper[4811]: I0122 09:23:16.965100 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 22 09:23:16 crc kubenswrapper[4811]: I0122 09:23:16.968815 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 22 09:23:16 crc kubenswrapper[4811]: E0122 09:23:16.969302 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87478694-0c10-49d8-9d91-1ce054e05dee" containerName="nova-api-api" Jan 22 09:23:16 crc kubenswrapper[4811]: I0122 09:23:16.969317 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="87478694-0c10-49d8-9d91-1ce054e05dee" containerName="nova-api-api" Jan 22 09:23:16 crc kubenswrapper[4811]: E0122 09:23:16.969334 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87478694-0c10-49d8-9d91-1ce054e05dee" containerName="nova-api-log" Jan 22 09:23:16 crc kubenswrapper[4811]: I0122 09:23:16.969339 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="87478694-0c10-49d8-9d91-1ce054e05dee" containerName="nova-api-log" Jan 22 09:23:16 crc kubenswrapper[4811]: I0122 09:23:16.969504 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="87478694-0c10-49d8-9d91-1ce054e05dee" containerName="nova-api-api" Jan 22 09:23:16 crc kubenswrapper[4811]: I0122 09:23:16.969520 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="87478694-0c10-49d8-9d91-1ce054e05dee" 
containerName="nova-api-log" Jan 22 09:23:16 crc kubenswrapper[4811]: I0122 09:23:16.970402 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 09:23:16 crc kubenswrapper[4811]: I0122 09:23:16.973987 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 22 09:23:16 crc kubenswrapper[4811]: I0122 09:23:16.996844 4811 scope.go:117] "RemoveContainer" containerID="e8a533ac5c851541b1154b577802f110cf957e4e04d0f8783b10990f745a6acf" Jan 22 09:23:17 crc kubenswrapper[4811]: E0122 09:23:17.000746 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8a533ac5c851541b1154b577802f110cf957e4e04d0f8783b10990f745a6acf\": container with ID starting with e8a533ac5c851541b1154b577802f110cf957e4e04d0f8783b10990f745a6acf not found: ID does not exist" containerID="e8a533ac5c851541b1154b577802f110cf957e4e04d0f8783b10990f745a6acf" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.000788 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8a533ac5c851541b1154b577802f110cf957e4e04d0f8783b10990f745a6acf"} err="failed to get container status \"e8a533ac5c851541b1154b577802f110cf957e4e04d0f8783b10990f745a6acf\": rpc error: code = NotFound desc = could not find container \"e8a533ac5c851541b1154b577802f110cf957e4e04d0f8783b10990f745a6acf\": container with ID starting with e8a533ac5c851541b1154b577802f110cf957e4e04d0f8783b10990f745a6acf not found: ID does not exist" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.000815 4811 scope.go:117] "RemoveContainer" containerID="c6f46f723dfc1c4db2b076b15b15bc4a6c65711e111b533da910cc2f0adcd49e" Jan 22 09:23:17 crc kubenswrapper[4811]: E0122 09:23:17.002254 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6f46f723dfc1c4db2b076b15b15bc4a6c65711e111b533da910cc2f0adcd49e\": container with ID starting with c6f46f723dfc1c4db2b076b15b15bc4a6c65711e111b533da910cc2f0adcd49e not found: ID does not exist" containerID="c6f46f723dfc1c4db2b076b15b15bc4a6c65711e111b533da910cc2f0adcd49e" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.002297 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6f46f723dfc1c4db2b076b15b15bc4a6c65711e111b533da910cc2f0adcd49e"} err="failed to get container status \"c6f46f723dfc1c4db2b076b15b15bc4a6c65711e111b533da910cc2f0adcd49e\": rpc error: code = NotFound desc = could not find container \"c6f46f723dfc1c4db2b076b15b15bc4a6c65711e111b533da910cc2f0adcd49e\": container with ID starting with c6f46f723dfc1c4db2b076b15b15bc4a6c65711e111b533da910cc2f0adcd49e not found: ID does not exist" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.030665 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.031941 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e229ce79-29a8-487c-9468-3a0ae1449022-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e229ce79-29a8-487c-9468-3a0ae1449022\") " pod="openstack/nova-api-0" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.032016 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/e229ce79-29a8-487c-9468-3a0ae1449022-logs\") pod \"nova-api-0\" (UID: \"e229ce79-29a8-487c-9468-3a0ae1449022\") " pod="openstack/nova-api-0" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.032097 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79nc6\" (UniqueName: \"kubernetes.io/projected/e229ce79-29a8-487c-9468-3a0ae1449022-kube-api-access-79nc6\") pod \"nova-api-0\" (UID: \"e229ce79-29a8-487c-9468-3a0ae1449022\") " pod="openstack/nova-api-0" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.032150 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e229ce79-29a8-487c-9468-3a0ae1449022-config-data\") pod \"nova-api-0\" (UID: \"e229ce79-29a8-487c-9468-3a0ae1449022\") " pod="openstack/nova-api-0" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.134720 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79nc6\" (UniqueName: \"kubernetes.io/projected/e229ce79-29a8-487c-9468-3a0ae1449022-kube-api-access-79nc6\") pod \"nova-api-0\" (UID: \"e229ce79-29a8-487c-9468-3a0ae1449022\") " pod="openstack/nova-api-0" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.134797 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e229ce79-29a8-487c-9468-3a0ae1449022-config-data\") pod \"nova-api-0\" (UID: \"e229ce79-29a8-487c-9468-3a0ae1449022\") " pod="openstack/nova-api-0" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.134948 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e229ce79-29a8-487c-9468-3a0ae1449022-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e229ce79-29a8-487c-9468-3a0ae1449022\") " pod="openstack/nova-api-0" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.134994 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e229ce79-29a8-487c-9468-3a0ae1449022-logs\") pod \"nova-api-0\" (UID: \"e229ce79-29a8-487c-9468-3a0ae1449022\") " pod="openstack/nova-api-0" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.135335 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e229ce79-29a8-487c-9468-3a0ae1449022-logs\") pod \"nova-api-0\" (UID: \"e229ce79-29a8-487c-9468-3a0ae1449022\") " pod="openstack/nova-api-0" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.141045 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e229ce79-29a8-487c-9468-3a0ae1449022-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e229ce79-29a8-487c-9468-3a0ae1449022\") " pod="openstack/nova-api-0" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.141514 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e229ce79-29a8-487c-9468-3a0ae1449022-config-data\") pod \"nova-api-0\" (UID: \"e229ce79-29a8-487c-9468-3a0ae1449022\") " pod="openstack/nova-api-0" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.153322 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p7zqj" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.159943 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79nc6\" (UniqueName: \"kubernetes.io/projected/e229ce79-29a8-487c-9468-3a0ae1449022-kube-api-access-79nc6\") pod \"nova-api-0\" (UID: \"e229ce79-29a8-487c-9468-3a0ae1449022\") " pod="openstack/nova-api-0" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.162545 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.236295 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9c1e6c8-785c-4978-8053-23f809789e23-log-httpd\") pod \"e9c1e6c8-785c-4978-8053-23f809789e23\" (UID: \"e9c1e6c8-785c-4978-8053-23f809789e23\") " Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.236365 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e9c1e6c8-785c-4978-8053-23f809789e23-sg-core-conf-yaml\") pod \"e9c1e6c8-785c-4978-8053-23f809789e23\" (UID: \"e9c1e6c8-785c-4978-8053-23f809789e23\") " Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.236396 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89733fdb-bf6a-42d3-8a8e-da3e2ff86438-catalog-content\") pod \"89733fdb-bf6a-42d3-8a8e-da3e2ff86438\" (UID: \"89733fdb-bf6a-42d3-8a8e-da3e2ff86438\") " Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.236434 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9c1e6c8-785c-4978-8053-23f809789e23-scripts\") pod \"e9c1e6c8-785c-4978-8053-23f809789e23\" (UID: \"e9c1e6c8-785c-4978-8053-23f809789e23\") " Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.236499 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9c1e6c8-785c-4978-8053-23f809789e23-config-data\") pod \"e9c1e6c8-785c-4978-8053-23f809789e23\" (UID: \"e9c1e6c8-785c-4978-8053-23f809789e23\") " Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.236575 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgqbn\" (UniqueName: \"kubernetes.io/projected/e9c1e6c8-785c-4978-8053-23f809789e23-kube-api-access-zgqbn\") pod \"e9c1e6c8-785c-4978-8053-23f809789e23\" (UID: \"e9c1e6c8-785c-4978-8053-23f809789e23\") " Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.236611 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sf6st\" (UniqueName: \"kubernetes.io/projected/89733fdb-bf6a-42d3-8a8e-da3e2ff86438-kube-api-access-sf6st\") pod \"89733fdb-bf6a-42d3-8a8e-da3e2ff86438\" (UID: \"89733fdb-bf6a-42d3-8a8e-da3e2ff86438\") " Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.236673 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9c1e6c8-785c-4978-8053-23f809789e23-run-httpd\") pod \"e9c1e6c8-785c-4978-8053-23f809789e23\" (UID: \"e9c1e6c8-785c-4978-8053-23f809789e23\") " Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.236715 4811 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89733fdb-bf6a-42d3-8a8e-da3e2ff86438-utilities\") pod \"89733fdb-bf6a-42d3-8a8e-da3e2ff86438\" (UID: \"89733fdb-bf6a-42d3-8a8e-da3e2ff86438\") " Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.236747 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9c1e6c8-785c-4978-8053-23f809789e23-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e9c1e6c8-785c-4978-8053-23f809789e23" (UID: "e9c1e6c8-785c-4978-8053-23f809789e23"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.236779 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9c1e6c8-785c-4978-8053-23f809789e23-combined-ca-bundle\") pod \"e9c1e6c8-785c-4978-8053-23f809789e23\" (UID: \"e9c1e6c8-785c-4978-8053-23f809789e23\") " Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.237250 4811 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9c1e6c8-785c-4978-8053-23f809789e23-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.237303 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9c1e6c8-785c-4978-8053-23f809789e23-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e9c1e6c8-785c-4978-8053-23f809789e23" (UID: "e9c1e6c8-785c-4978-8053-23f809789e23"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.237944 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89733fdb-bf6a-42d3-8a8e-da3e2ff86438-utilities" (OuterVolumeSpecName: "utilities") pod "89733fdb-bf6a-42d3-8a8e-da3e2ff86438" (UID: "89733fdb-bf6a-42d3-8a8e-da3e2ff86438"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.240442 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9c1e6c8-785c-4978-8053-23f809789e23-scripts" (OuterVolumeSpecName: "scripts") pod "e9c1e6c8-785c-4978-8053-23f809789e23" (UID: "e9c1e6c8-785c-4978-8053-23f809789e23"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.241871 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89733fdb-bf6a-42d3-8a8e-da3e2ff86438-kube-api-access-sf6st" (OuterVolumeSpecName: "kube-api-access-sf6st") pod "89733fdb-bf6a-42d3-8a8e-da3e2ff86438" (UID: "89733fdb-bf6a-42d3-8a8e-da3e2ff86438"). InnerVolumeSpecName "kube-api-access-sf6st". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.250157 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9c1e6c8-785c-4978-8053-23f809789e23-kube-api-access-zgqbn" (OuterVolumeSpecName: "kube-api-access-zgqbn") pod "e9c1e6c8-785c-4978-8053-23f809789e23" (UID: "e9c1e6c8-785c-4978-8053-23f809789e23"). InnerVolumeSpecName "kube-api-access-zgqbn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.275337 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9c1e6c8-785c-4978-8053-23f809789e23-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e9c1e6c8-785c-4978-8053-23f809789e23" (UID: "e9c1e6c8-785c-4978-8053-23f809789e23"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.292741 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89733fdb-bf6a-42d3-8a8e-da3e2ff86438-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "89733fdb-bf6a-42d3-8a8e-da3e2ff86438" (UID: "89733fdb-bf6a-42d3-8a8e-da3e2ff86438"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.301965 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9c1e6c8-785c-4978-8053-23f809789e23-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e9c1e6c8-785c-4978-8053-23f809789e23" (UID: "e9c1e6c8-785c-4978-8053-23f809789e23"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.310887 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.323654 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9c1e6c8-785c-4978-8053-23f809789e23-config-data" (OuterVolumeSpecName: "config-data") pod "e9c1e6c8-785c-4978-8053-23f809789e23" (UID: "e9c1e6c8-785c-4978-8053-23f809789e23"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.339276 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sf6st\" (UniqueName: \"kubernetes.io/projected/89733fdb-bf6a-42d3-8a8e-da3e2ff86438-kube-api-access-sf6st\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.339323 4811 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9c1e6c8-785c-4978-8053-23f809789e23-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.339347 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89733fdb-bf6a-42d3-8a8e-da3e2ff86438-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.339355 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9c1e6c8-785c-4978-8053-23f809789e23-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.339365 4811 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e9c1e6c8-785c-4978-8053-23f809789e23-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.339374 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89733fdb-bf6a-42d3-8a8e-da3e2ff86438-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.339383 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9c1e6c8-785c-4978-8053-23f809789e23-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.339390 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9c1e6c8-785c-4978-8053-23f809789e23-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.339398 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgqbn\" (UniqueName: \"kubernetes.io/projected/e9c1e6c8-785c-4978-8053-23f809789e23-kube-api-access-zgqbn\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.715800 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"738ef216-ebaa-4000-8021-1c3b675abb3a","Type":"ContainerStarted","Data":"b41df3f94807c81186b37a6668bb739f700725818de37a3652af3b6080e12c13"} Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.721573 4811 generic.go:334] "Generic (PLEG): container finished" podID="e9c1e6c8-785c-4978-8053-23f809789e23" containerID="89381f32d9ce0812b9b272069e646036b69b498ed584a8a30ce2da796e494565" exitCode=0 Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.721656 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.721672 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e9c1e6c8-785c-4978-8053-23f809789e23","Type":"ContainerDied","Data":"89381f32d9ce0812b9b272069e646036b69b498ed584a8a30ce2da796e494565"} Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.721706 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e9c1e6c8-785c-4978-8053-23f809789e23","Type":"ContainerDied","Data":"77ff4e0ded122ed2a18494b9744a20963f25cf5b9185162976c31f1de19f48d1"} Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.721748 4811 scope.go:117] "RemoveContainer" containerID="064c4e3a661f91b4a6e7a9c49315caef2ae118d01e452d7ce04335343afdcf1a" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.725243 4811 generic.go:334] "Generic (PLEG): container finished" podID="89733fdb-bf6a-42d3-8a8e-da3e2ff86438" containerID="096c08bb0503b4fb8e624b1b7c5abd60a9198e30e963666366aef2d291f24d4b" exitCode=0 Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.726071 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p7zqj" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.742472 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7zqj" event={"ID":"89733fdb-bf6a-42d3-8a8e-da3e2ff86438","Type":"ContainerDied","Data":"096c08bb0503b4fb8e624b1b7c5abd60a9198e30e963666366aef2d291f24d4b"} Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.742543 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.742562 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7zqj" event={"ID":"89733fdb-bf6a-42d3-8a8e-da3e2ff86438","Type":"ContainerDied","Data":"d8fc0f3c5d7546a74e62635d5d7206923d1b5ea62f8793f02dfa99e156b04ac7"} Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.753977 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.753953459 podStartE2EDuration="2.753953459s" podCreationTimestamp="2026-01-22 09:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:23:17.744045645 +0000 UTC m=+1042.066232767" watchObservedRunningTime="2026-01-22 09:23:17.753953459 +0000 UTC m=+1042.076140583" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.758746 4811 scope.go:117] "RemoveContainer" containerID="c2b2c19cf1f5b522633e4d88febe0d8198acff87ec9a4995871dc26e41e9b6e2" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.798012 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.823762 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.836011 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.836936 4811 scope.go:117] "RemoveContainer" containerID="89381f32d9ce0812b9b272069e646036b69b498ed584a8a30ce2da796e494565" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.845353 4811 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/community-operators-p7zqj"] Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.853105 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p7zqj"] Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.860878 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:23:17 crc kubenswrapper[4811]: E0122 09:23:17.861399 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9c1e6c8-785c-4978-8053-23f809789e23" containerName="ceilometer-central-agent" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.861415 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9c1e6c8-785c-4978-8053-23f809789e23" containerName="ceilometer-central-agent" Jan 22 09:23:17 crc kubenswrapper[4811]: E0122 09:23:17.861439 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89733fdb-bf6a-42d3-8a8e-da3e2ff86438" containerName="extract-content" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.861445 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="89733fdb-bf6a-42d3-8a8e-da3e2ff86438" containerName="extract-content" Jan 22 09:23:17 crc kubenswrapper[4811]: E0122 09:23:17.861460 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9c1e6c8-785c-4978-8053-23f809789e23" containerName="sg-core" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.861465 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9c1e6c8-785c-4978-8053-23f809789e23" containerName="sg-core" Jan 22 09:23:17 crc kubenswrapper[4811]: E0122 09:23:17.861476 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9c1e6c8-785c-4978-8053-23f809789e23" containerName="ceilometer-notification-agent" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.861484 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9c1e6c8-785c-4978-8053-23f809789e23" containerName="ceilometer-notification-agent" Jan 22 09:23:17 crc kubenswrapper[4811]: E0122 09:23:17.861505 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89733fdb-bf6a-42d3-8a8e-da3e2ff86438" containerName="extract-utilities" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.861511 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="89733fdb-bf6a-42d3-8a8e-da3e2ff86438" containerName="extract-utilities" Jan 22 09:23:17 crc kubenswrapper[4811]: E0122 09:23:17.861525 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9c1e6c8-785c-4978-8053-23f809789e23" containerName="proxy-httpd" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.861531 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9c1e6c8-785c-4978-8053-23f809789e23" containerName="proxy-httpd" Jan 22 09:23:17 crc kubenswrapper[4811]: E0122 09:23:17.861541 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89733fdb-bf6a-42d3-8a8e-da3e2ff86438" containerName="registry-server" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.861548 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="89733fdb-bf6a-42d3-8a8e-da3e2ff86438" containerName="registry-server" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.861789 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9c1e6c8-785c-4978-8053-23f809789e23" containerName="proxy-httpd" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.861814 4811 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e9c1e6c8-785c-4978-8053-23f809789e23" containerName="sg-core" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.861830 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9c1e6c8-785c-4978-8053-23f809789e23" containerName="ceilometer-notification-agent" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.861843 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="89733fdb-bf6a-42d3-8a8e-da3e2ff86438" containerName="registry-server" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.861852 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9c1e6c8-785c-4978-8053-23f809789e23" containerName="ceilometer-central-agent" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.863767 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.863803 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.867789 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.867956 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.868591 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.889259 4811 scope.go:117] "RemoveContainer" containerID="c9ac90b166e61dfffcbbe492fb91e0b8db5ccfc2e6da6dd2002fb66f630a8852" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.916971 4811 scope.go:117] "RemoveContainer" containerID="064c4e3a661f91b4a6e7a9c49315caef2ae118d01e452d7ce04335343afdcf1a" Jan 22 09:23:17 crc kubenswrapper[4811]: E0122 09:23:17.922536 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"064c4e3a661f91b4a6e7a9c49315caef2ae118d01e452d7ce04335343afdcf1a\": container with ID starting with 064c4e3a661f91b4a6e7a9c49315caef2ae118d01e452d7ce04335343afdcf1a not found: ID does not exist" containerID="064c4e3a661f91b4a6e7a9c49315caef2ae118d01e452d7ce04335343afdcf1a" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.922575 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"064c4e3a661f91b4a6e7a9c49315caef2ae118d01e452d7ce04335343afdcf1a"} err="failed to get container status \"064c4e3a661f91b4a6e7a9c49315caef2ae118d01e452d7ce04335343afdcf1a\": rpc error: code = NotFound desc = could not find container \"064c4e3a661f91b4a6e7a9c49315caef2ae118d01e452d7ce04335343afdcf1a\": container with ID starting with 064c4e3a661f91b4a6e7a9c49315caef2ae118d01e452d7ce04335343afdcf1a not found: ID does not exist" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.922603 4811 scope.go:117] "RemoveContainer" containerID="c2b2c19cf1f5b522633e4d88febe0d8198acff87ec9a4995871dc26e41e9b6e2" Jan 22 09:23:17 crc kubenswrapper[4811]: E0122 09:23:17.924745 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2b2c19cf1f5b522633e4d88febe0d8198acff87ec9a4995871dc26e41e9b6e2\": container with ID starting with c2b2c19cf1f5b522633e4d88febe0d8198acff87ec9a4995871dc26e41e9b6e2 not found: ID does not exist" 
containerID="c2b2c19cf1f5b522633e4d88febe0d8198acff87ec9a4995871dc26e41e9b6e2" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.924856 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2b2c19cf1f5b522633e4d88febe0d8198acff87ec9a4995871dc26e41e9b6e2"} err="failed to get container status \"c2b2c19cf1f5b522633e4d88febe0d8198acff87ec9a4995871dc26e41e9b6e2\": rpc error: code = NotFound desc = could not find container \"c2b2c19cf1f5b522633e4d88febe0d8198acff87ec9a4995871dc26e41e9b6e2\": container with ID starting with c2b2c19cf1f5b522633e4d88febe0d8198acff87ec9a4995871dc26e41e9b6e2 not found: ID does not exist" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.924938 4811 scope.go:117] "RemoveContainer" containerID="89381f32d9ce0812b9b272069e646036b69b498ed584a8a30ce2da796e494565" Jan 22 09:23:17 crc kubenswrapper[4811]: E0122 09:23:17.930553 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89381f32d9ce0812b9b272069e646036b69b498ed584a8a30ce2da796e494565\": container with ID starting with 89381f32d9ce0812b9b272069e646036b69b498ed584a8a30ce2da796e494565 not found: ID does not exist" containerID="89381f32d9ce0812b9b272069e646036b69b498ed584a8a30ce2da796e494565" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.930606 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89381f32d9ce0812b9b272069e646036b69b498ed584a8a30ce2da796e494565"} err="failed to get container status \"89381f32d9ce0812b9b272069e646036b69b498ed584a8a30ce2da796e494565\": rpc error: code = NotFound desc = could not find container \"89381f32d9ce0812b9b272069e646036b69b498ed584a8a30ce2da796e494565\": container with ID starting with 89381f32d9ce0812b9b272069e646036b69b498ed584a8a30ce2da796e494565 not found: ID does not exist" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.930662 4811 scope.go:117] "RemoveContainer" containerID="c9ac90b166e61dfffcbbe492fb91e0b8db5ccfc2e6da6dd2002fb66f630a8852" Jan 22 09:23:17 crc kubenswrapper[4811]: E0122 09:23:17.931076 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9ac90b166e61dfffcbbe492fb91e0b8db5ccfc2e6da6dd2002fb66f630a8852\": container with ID starting with c9ac90b166e61dfffcbbe492fb91e0b8db5ccfc2e6da6dd2002fb66f630a8852 not found: ID does not exist" containerID="c9ac90b166e61dfffcbbe492fb91e0b8db5ccfc2e6da6dd2002fb66f630a8852" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.931102 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9ac90b166e61dfffcbbe492fb91e0b8db5ccfc2e6da6dd2002fb66f630a8852"} err="failed to get container status \"c9ac90b166e61dfffcbbe492fb91e0b8db5ccfc2e6da6dd2002fb66f630a8852\": rpc error: code = NotFound desc = could not find container \"c9ac90b166e61dfffcbbe492fb91e0b8db5ccfc2e6da6dd2002fb66f630a8852\": container with ID starting with c9ac90b166e61dfffcbbe492fb91e0b8db5ccfc2e6da6dd2002fb66f630a8852 not found: ID does not exist" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.931128 4811 scope.go:117] "RemoveContainer" containerID="096c08bb0503b4fb8e624b1b7c5abd60a9198e30e963666366aef2d291f24d4b" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.951529 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/bf971fef-0ef1-47b6-ba75-ca76dde2e658-scripts\") pod \"ceilometer-0\" (UID: \"bf971fef-0ef1-47b6-ba75-ca76dde2e658\") " pod="openstack/ceilometer-0" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.951586 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bf971fef-0ef1-47b6-ba75-ca76dde2e658-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bf971fef-0ef1-47b6-ba75-ca76dde2e658\") " pod="openstack/ceilometer-0" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.951616 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf971fef-0ef1-47b6-ba75-ca76dde2e658-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bf971fef-0ef1-47b6-ba75-ca76dde2e658\") " pod="openstack/ceilometer-0" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.951665 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf971fef-0ef1-47b6-ba75-ca76dde2e658-log-httpd\") pod \"ceilometer-0\" (UID: \"bf971fef-0ef1-47b6-ba75-ca76dde2e658\") " pod="openstack/ceilometer-0" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.951688 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf971fef-0ef1-47b6-ba75-ca76dde2e658-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bf971fef-0ef1-47b6-ba75-ca76dde2e658\") " pod="openstack/ceilometer-0" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.951724 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dc4f\" (UniqueName: \"kubernetes.io/projected/bf971fef-0ef1-47b6-ba75-ca76dde2e658-kube-api-access-2dc4f\") pod \"ceilometer-0\" (UID: \"bf971fef-0ef1-47b6-ba75-ca76dde2e658\") " pod="openstack/ceilometer-0" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.951792 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf971fef-0ef1-47b6-ba75-ca76dde2e658-config-data\") pod \"ceilometer-0\" (UID: \"bf971fef-0ef1-47b6-ba75-ca76dde2e658\") " pod="openstack/ceilometer-0" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.951837 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf971fef-0ef1-47b6-ba75-ca76dde2e658-run-httpd\") pod \"ceilometer-0\" (UID: \"bf971fef-0ef1-47b6-ba75-ca76dde2e658\") " pod="openstack/ceilometer-0" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.963690 4811 scope.go:117] "RemoveContainer" containerID="f086c391f8021c78ea00be742c3f4d7fc4bf312f4da3f5146df36d571e0e020d" Jan 22 09:23:17 crc kubenswrapper[4811]: I0122 09:23:17.986963 4811 scope.go:117] "RemoveContainer" containerID="32879ddefedbe01b91f139a318bbd9415c93eb5c894e8cf7b55b3c60f0d205a7" Jan 22 09:23:18 crc kubenswrapper[4811]: I0122 09:23:18.042047 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87478694-0c10-49d8-9d91-1ce054e05dee" path="/var/lib/kubelet/pods/87478694-0c10-49d8-9d91-1ce054e05dee/volumes" Jan 22 09:23:18 crc kubenswrapper[4811]: I0122 09:23:18.042577 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="89733fdb-bf6a-42d3-8a8e-da3e2ff86438" path="/var/lib/kubelet/pods/89733fdb-bf6a-42d3-8a8e-da3e2ff86438/volumes" Jan 22 09:23:18 crc kubenswrapper[4811]: I0122 09:23:18.043345 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9c1e6c8-785c-4978-8053-23f809789e23" path="/var/lib/kubelet/pods/e9c1e6c8-785c-4978-8053-23f809789e23/volumes" Jan 22 09:23:18 crc kubenswrapper[4811]: I0122 09:23:18.049895 4811 scope.go:117] "RemoveContainer" containerID="096c08bb0503b4fb8e624b1b7c5abd60a9198e30e963666366aef2d291f24d4b" Jan 22 09:23:18 crc kubenswrapper[4811]: E0122 09:23:18.050870 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"096c08bb0503b4fb8e624b1b7c5abd60a9198e30e963666366aef2d291f24d4b\": container with ID starting with 096c08bb0503b4fb8e624b1b7c5abd60a9198e30e963666366aef2d291f24d4b not found: ID does not exist" containerID="096c08bb0503b4fb8e624b1b7c5abd60a9198e30e963666366aef2d291f24d4b" Jan 22 09:23:18 crc kubenswrapper[4811]: I0122 09:23:18.050908 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"096c08bb0503b4fb8e624b1b7c5abd60a9198e30e963666366aef2d291f24d4b"} err="failed to get container status \"096c08bb0503b4fb8e624b1b7c5abd60a9198e30e963666366aef2d291f24d4b\": rpc error: code = NotFound desc = could not find container \"096c08bb0503b4fb8e624b1b7c5abd60a9198e30e963666366aef2d291f24d4b\": container with ID starting with 096c08bb0503b4fb8e624b1b7c5abd60a9198e30e963666366aef2d291f24d4b not found: ID does not exist" Jan 22 09:23:18 crc kubenswrapper[4811]: I0122 09:23:18.050929 4811 scope.go:117] "RemoveContainer" containerID="f086c391f8021c78ea00be742c3f4d7fc4bf312f4da3f5146df36d571e0e020d" Jan 22 09:23:18 crc kubenswrapper[4811]: E0122 09:23:18.051850 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f086c391f8021c78ea00be742c3f4d7fc4bf312f4da3f5146df36d571e0e020d\": container with ID starting with f086c391f8021c78ea00be742c3f4d7fc4bf312f4da3f5146df36d571e0e020d not found: ID does not exist" containerID="f086c391f8021c78ea00be742c3f4d7fc4bf312f4da3f5146df36d571e0e020d" Jan 22 09:23:18 crc kubenswrapper[4811]: I0122 09:23:18.051871 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f086c391f8021c78ea00be742c3f4d7fc4bf312f4da3f5146df36d571e0e020d"} err="failed to get container status \"f086c391f8021c78ea00be742c3f4d7fc4bf312f4da3f5146df36d571e0e020d\": rpc error: code = NotFound desc = could not find container \"f086c391f8021c78ea00be742c3f4d7fc4bf312f4da3f5146df36d571e0e020d\": container with ID starting with f086c391f8021c78ea00be742c3f4d7fc4bf312f4da3f5146df36d571e0e020d not found: ID does not exist" Jan 22 09:23:18 crc kubenswrapper[4811]: I0122 09:23:18.051884 4811 scope.go:117] "RemoveContainer" containerID="32879ddefedbe01b91f139a318bbd9415c93eb5c894e8cf7b55b3c60f0d205a7" Jan 22 09:23:18 crc kubenswrapper[4811]: E0122 09:23:18.053478 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32879ddefedbe01b91f139a318bbd9415c93eb5c894e8cf7b55b3c60f0d205a7\": container with ID starting with 32879ddefedbe01b91f139a318bbd9415c93eb5c894e8cf7b55b3c60f0d205a7 not found: ID does not exist" containerID="32879ddefedbe01b91f139a318bbd9415c93eb5c894e8cf7b55b3c60f0d205a7" Jan 22 09:23:18 crc kubenswrapper[4811]: I0122 09:23:18.053509 
Jan 22 09:23:18 crc kubenswrapper[4811]: I0122 09:23:18.053509 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32879ddefedbe01b91f139a318bbd9415c93eb5c894e8cf7b55b3c60f0d205a7"} err="failed to get container status \"32879ddefedbe01b91f139a318bbd9415c93eb5c894e8cf7b55b3c60f0d205a7\": rpc error: code = NotFound desc = could not find container \"32879ddefedbe01b91f139a318bbd9415c93eb5c894e8cf7b55b3c60f0d205a7\": container with ID starting with 32879ddefedbe01b91f139a318bbd9415c93eb5c894e8cf7b55b3c60f0d205a7 not found: ID does not exist"
Jan 22 09:23:18 crc kubenswrapper[4811]: I0122 09:23:18.053862 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf971fef-0ef1-47b6-ba75-ca76dde2e658-log-httpd\") pod \"ceilometer-0\" (UID: \"bf971fef-0ef1-47b6-ba75-ca76dde2e658\") " pod="openstack/ceilometer-0"
Jan 22 09:23:18 crc kubenswrapper[4811]: I0122 09:23:18.053895 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf971fef-0ef1-47b6-ba75-ca76dde2e658-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bf971fef-0ef1-47b6-ba75-ca76dde2e658\") " pod="openstack/ceilometer-0"
Jan 22 09:23:18 crc kubenswrapper[4811]: I0122 09:23:18.053934 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dc4f\" (UniqueName: \"kubernetes.io/projected/bf971fef-0ef1-47b6-ba75-ca76dde2e658-kube-api-access-2dc4f\") pod \"ceilometer-0\" (UID: \"bf971fef-0ef1-47b6-ba75-ca76dde2e658\") " pod="openstack/ceilometer-0"
Jan 22 09:23:18 crc kubenswrapper[4811]: I0122 09:23:18.054007 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf971fef-0ef1-47b6-ba75-ca76dde2e658-config-data\") pod \"ceilometer-0\" (UID: \"bf971fef-0ef1-47b6-ba75-ca76dde2e658\") " pod="openstack/ceilometer-0"
Jan 22 09:23:18 crc kubenswrapper[4811]: I0122 09:23:18.054050 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf971fef-0ef1-47b6-ba75-ca76dde2e658-run-httpd\") pod \"ceilometer-0\" (UID: \"bf971fef-0ef1-47b6-ba75-ca76dde2e658\") " pod="openstack/ceilometer-0"
Jan 22 09:23:18 crc kubenswrapper[4811]: I0122 09:23:18.054090 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf971fef-0ef1-47b6-ba75-ca76dde2e658-scripts\") pod \"ceilometer-0\" (UID: \"bf971fef-0ef1-47b6-ba75-ca76dde2e658\") " pod="openstack/ceilometer-0"
Jan 22 09:23:18 crc kubenswrapper[4811]: I0122 09:23:18.054142 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bf971fef-0ef1-47b6-ba75-ca76dde2e658-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bf971fef-0ef1-47b6-ba75-ca76dde2e658\") " pod="openstack/ceilometer-0"
Jan 22 09:23:18 crc kubenswrapper[4811]: I0122 09:23:18.054162 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf971fef-0ef1-47b6-ba75-ca76dde2e658-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bf971fef-0ef1-47b6-ba75-ca76dde2e658\") " pod="openstack/ceilometer-0"
Jan 22 09:23:18 crc kubenswrapper[4811]: I0122 09:23:18.054808 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf971fef-0ef1-47b6-ba75-ca76dde2e658-log-httpd\") pod \"ceilometer-0\" (UID: \"bf971fef-0ef1-47b6-ba75-ca76dde2e658\") " pod="openstack/ceilometer-0"
\"kubernetes.io/empty-dir/bf971fef-0ef1-47b6-ba75-ca76dde2e658-log-httpd\") pod \"ceilometer-0\" (UID: \"bf971fef-0ef1-47b6-ba75-ca76dde2e658\") " pod="openstack/ceilometer-0" Jan 22 09:23:18 crc kubenswrapper[4811]: I0122 09:23:18.057304 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf971fef-0ef1-47b6-ba75-ca76dde2e658-run-httpd\") pod \"ceilometer-0\" (UID: \"bf971fef-0ef1-47b6-ba75-ca76dde2e658\") " pod="openstack/ceilometer-0" Jan 22 09:23:18 crc kubenswrapper[4811]: I0122 09:23:18.059164 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf971fef-0ef1-47b6-ba75-ca76dde2e658-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bf971fef-0ef1-47b6-ba75-ca76dde2e658\") " pod="openstack/ceilometer-0" Jan 22 09:23:18 crc kubenswrapper[4811]: I0122 09:23:18.069557 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf971fef-0ef1-47b6-ba75-ca76dde2e658-scripts\") pod \"ceilometer-0\" (UID: \"bf971fef-0ef1-47b6-ba75-ca76dde2e658\") " pod="openstack/ceilometer-0" Jan 22 09:23:18 crc kubenswrapper[4811]: I0122 09:23:18.070537 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf971fef-0ef1-47b6-ba75-ca76dde2e658-config-data\") pod \"ceilometer-0\" (UID: \"bf971fef-0ef1-47b6-ba75-ca76dde2e658\") " pod="openstack/ceilometer-0" Jan 22 09:23:18 crc kubenswrapper[4811]: I0122 09:23:18.070597 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bf971fef-0ef1-47b6-ba75-ca76dde2e658-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bf971fef-0ef1-47b6-ba75-ca76dde2e658\") " pod="openstack/ceilometer-0" Jan 22 09:23:18 crc kubenswrapper[4811]: I0122 09:23:18.071166 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf971fef-0ef1-47b6-ba75-ca76dde2e658-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bf971fef-0ef1-47b6-ba75-ca76dde2e658\") " pod="openstack/ceilometer-0" Jan 22 09:23:18 crc kubenswrapper[4811]: I0122 09:23:18.074400 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dc4f\" (UniqueName: \"kubernetes.io/projected/bf971fef-0ef1-47b6-ba75-ca76dde2e658-kube-api-access-2dc4f\") pod \"ceilometer-0\" (UID: \"bf971fef-0ef1-47b6-ba75-ca76dde2e658\") " pod="openstack/ceilometer-0" Jan 22 09:23:18 crc kubenswrapper[4811]: I0122 09:23:18.209869 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 09:23:18 crc kubenswrapper[4811]: I0122 09:23:18.706682 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:23:18 crc kubenswrapper[4811]: I0122 09:23:18.749770 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e229ce79-29a8-487c-9468-3a0ae1449022","Type":"ContainerStarted","Data":"90a38fdd2abe993092f56c8d59c81f75ca092a08b68a26f65909329e10830a8f"} Jan 22 09:23:18 crc kubenswrapper[4811]: I0122 09:23:18.749854 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e229ce79-29a8-487c-9468-3a0ae1449022","Type":"ContainerStarted","Data":"a8254d941cdca8d983b408bed5220b7b21e7a6da6e6b73f1d3de0ba18e50f08e"} Jan 22 09:23:18 crc kubenswrapper[4811]: I0122 09:23:18.749871 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e229ce79-29a8-487c-9468-3a0ae1449022","Type":"ContainerStarted","Data":"33e455ab694f97b17a9b165842df97e4478bac0f2418840867d10877ed4219d1"} Jan 22 09:23:18 crc kubenswrapper[4811]: I0122 09:23:18.753774 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf971fef-0ef1-47b6-ba75-ca76dde2e658","Type":"ContainerStarted","Data":"519ecc41eabdb2622f70369133a0de99bf6f8f3cdc596f2c39fa56e5be118a69"} Jan 22 09:23:18 crc kubenswrapper[4811]: I0122 09:23:18.770253 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.770229148 podStartE2EDuration="2.770229148s" podCreationTimestamp="2026-01-22 09:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:23:18.769920426 +0000 UTC m=+1043.092107539" watchObservedRunningTime="2026-01-22 09:23:18.770229148 +0000 UTC m=+1043.092416262" Jan 22 09:23:19 crc kubenswrapper[4811]: I0122 09:23:19.766119 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf971fef-0ef1-47b6-ba75-ca76dde2e658","Type":"ContainerStarted","Data":"c57326606911589e15ed05807cfbc3f21e6ce59aa1e0583b6f78b3c49698fc55"} Jan 22 09:23:20 crc kubenswrapper[4811]: I0122 09:23:20.776280 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf971fef-0ef1-47b6-ba75-ca76dde2e658","Type":"ContainerStarted","Data":"9a5a1873bd8ec0e3b26b4ceef007129955303827ea35881e2a75785cb4a683bc"} Jan 22 09:23:20 crc kubenswrapper[4811]: I0122 09:23:20.776873 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf971fef-0ef1-47b6-ba75-ca76dde2e658","Type":"ContainerStarted","Data":"37f6d0767104e3f03bfba57da231e1602954d2430b45c16e733ffc002f8c7ac6"} Jan 22 09:23:21 crc kubenswrapper[4811]: I0122 09:23:21.213416 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 22 09:23:22 crc kubenswrapper[4811]: I0122 09:23:22.796485 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf971fef-0ef1-47b6-ba75-ca76dde2e658","Type":"ContainerStarted","Data":"3fd61ef4d17d6f0a8c457d3e3c9b89bb2b1a8f3005b7d5709c98fb22b6b5b513"} Jan 22 09:23:22 crc kubenswrapper[4811]: I0122 09:23:22.796876 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 22 09:23:22 crc kubenswrapper[4811]: I0122 09:23:22.822324 4811 
Jan 22 09:23:25 crc kubenswrapper[4811]: I0122 09:23:25.047996 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 22 09:23:26 crc kubenswrapper[4811]: I0122 09:23:26.212853 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 22 09:23:26 crc kubenswrapper[4811]: I0122 09:23:26.236208 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 22 09:23:26 crc kubenswrapper[4811]: I0122 09:23:26.846961 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 22 09:23:27 crc kubenswrapper[4811]: I0122 09:23:27.311006 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 22 09:23:27 crc kubenswrapper[4811]: I0122 09:23:27.311440 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 22 09:23:28 crc kubenswrapper[4811]: I0122 09:23:28.393744 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e229ce79-29a8-487c-9468-3a0ae1449022" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.182:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 22 09:23:28 crc kubenswrapper[4811]: I0122 09:23:28.394027 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e229ce79-29a8-487c-9468-3a0ae1449022" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.182:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 22 09:23:34 crc kubenswrapper[4811]: I0122 09:23:34.879461 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 22 09:23:34 crc kubenswrapper[4811]: I0122 09:23:34.885023 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 22 09:23:34 crc kubenswrapper[4811]: I0122 09:23:34.900174 4811 generic.go:334] "Generic (PLEG): container finished" podID="a4121565-e932-4f2e-a6f1-62ecfeaab63b" containerID="5c691d1eade36eec09832866a57d5f1d3f5e557ade0b300b11184e4079c1c5dd" exitCode=137
Jan 22 09:23:34 crc kubenswrapper[4811]: I0122 09:23:34.900194 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 22 09:23:34 crc kubenswrapper[4811]: I0122 09:23:34.900246 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a4121565-e932-4f2e-a6f1-62ecfeaab63b","Type":"ContainerDied","Data":"5c691d1eade36eec09832866a57d5f1d3f5e557ade0b300b11184e4079c1c5dd"} Jan 22 09:23:34 crc kubenswrapper[4811]: I0122 09:23:34.900277 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a4121565-e932-4f2e-a6f1-62ecfeaab63b","Type":"ContainerDied","Data":"def4d4322ba7e97f9c878fb7b20863be30755dddad81995c09d2c4daa4352e9e"} Jan 22 09:23:34 crc kubenswrapper[4811]: I0122 09:23:34.900293 4811 scope.go:117] "RemoveContainer" containerID="5c691d1eade36eec09832866a57d5f1d3f5e557ade0b300b11184e4079c1c5dd" Jan 22 09:23:34 crc kubenswrapper[4811]: I0122 09:23:34.903666 4811 generic.go:334] "Generic (PLEG): container finished" podID="3da71c17-c624-4ba1-b5b2-dd5410951026" containerID="015ef246e1c476dabe092245e80aa87200397d00623f9c1d839a56a05bc72c84" exitCode=137 Jan 22 09:23:34 crc kubenswrapper[4811]: I0122 09:23:34.903693 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3da71c17-c624-4ba1-b5b2-dd5410951026","Type":"ContainerDied","Data":"015ef246e1c476dabe092245e80aa87200397d00623f9c1d839a56a05bc72c84"} Jan 22 09:23:34 crc kubenswrapper[4811]: I0122 09:23:34.903718 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3da71c17-c624-4ba1-b5b2-dd5410951026","Type":"ContainerDied","Data":"51e32d627e4252d82e93b1a2b42ef4a2c58e0db3cc75c3a2c595e185d116fb2e"} Jan 22 09:23:34 crc kubenswrapper[4811]: I0122 09:23:34.903753 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 09:23:34 crc kubenswrapper[4811]: I0122 09:23:34.937821 4811 scope.go:117] "RemoveContainer" containerID="5c691d1eade36eec09832866a57d5f1d3f5e557ade0b300b11184e4079c1c5dd" Jan 22 09:23:34 crc kubenswrapper[4811]: E0122 09:23:34.938222 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c691d1eade36eec09832866a57d5f1d3f5e557ade0b300b11184e4079c1c5dd\": container with ID starting with 5c691d1eade36eec09832866a57d5f1d3f5e557ade0b300b11184e4079c1c5dd not found: ID does not exist" containerID="5c691d1eade36eec09832866a57d5f1d3f5e557ade0b300b11184e4079c1c5dd" Jan 22 09:23:34 crc kubenswrapper[4811]: I0122 09:23:34.938253 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c691d1eade36eec09832866a57d5f1d3f5e557ade0b300b11184e4079c1c5dd"} err="failed to get container status \"5c691d1eade36eec09832866a57d5f1d3f5e557ade0b300b11184e4079c1c5dd\": rpc error: code = NotFound desc = could not find container \"5c691d1eade36eec09832866a57d5f1d3f5e557ade0b300b11184e4079c1c5dd\": container with ID starting with 5c691d1eade36eec09832866a57d5f1d3f5e557ade0b300b11184e4079c1c5dd not found: ID does not exist" Jan 22 09:23:34 crc kubenswrapper[4811]: I0122 09:23:34.938274 4811 scope.go:117] "RemoveContainer" containerID="015ef246e1c476dabe092245e80aa87200397d00623f9c1d839a56a05bc72c84" Jan 22 09:23:34 crc kubenswrapper[4811]: I0122 09:23:34.955533 4811 scope.go:117] "RemoveContainer" containerID="b69f3c8adef8f8bf79cb7c49a144ba2138a2c7f63dc3021fdcc0ce67ff372e7b" Jan 22 09:23:34 crc kubenswrapper[4811]: I0122 09:23:34.977291 4811 scope.go:117] "RemoveContainer" containerID="015ef246e1c476dabe092245e80aa87200397d00623f9c1d839a56a05bc72c84" Jan 22 09:23:34 crc kubenswrapper[4811]: E0122 09:23:34.977616 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"015ef246e1c476dabe092245e80aa87200397d00623f9c1d839a56a05bc72c84\": container with ID starting with 015ef246e1c476dabe092245e80aa87200397d00623f9c1d839a56a05bc72c84 not found: ID does not exist" containerID="015ef246e1c476dabe092245e80aa87200397d00623f9c1d839a56a05bc72c84" Jan 22 09:23:34 crc kubenswrapper[4811]: I0122 09:23:34.977699 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"015ef246e1c476dabe092245e80aa87200397d00623f9c1d839a56a05bc72c84"} err="failed to get container status \"015ef246e1c476dabe092245e80aa87200397d00623f9c1d839a56a05bc72c84\": rpc error: code = NotFound desc = could not find container \"015ef246e1c476dabe092245e80aa87200397d00623f9c1d839a56a05bc72c84\": container with ID starting with 015ef246e1c476dabe092245e80aa87200397d00623f9c1d839a56a05bc72c84 not found: ID does not exist" Jan 22 09:23:34 crc kubenswrapper[4811]: I0122 09:23:34.977734 4811 scope.go:117] "RemoveContainer" containerID="b69f3c8adef8f8bf79cb7c49a144ba2138a2c7f63dc3021fdcc0ce67ff372e7b" Jan 22 09:23:34 crc kubenswrapper[4811]: E0122 09:23:34.978071 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b69f3c8adef8f8bf79cb7c49a144ba2138a2c7f63dc3021fdcc0ce67ff372e7b\": container with ID starting with b69f3c8adef8f8bf79cb7c49a144ba2138a2c7f63dc3021fdcc0ce67ff372e7b not found: ID does not exist" containerID="b69f3c8adef8f8bf79cb7c49a144ba2138a2c7f63dc3021fdcc0ce67ff372e7b" Jan 22 09:23:34 crc 
Jan 22 09:23:34 crc kubenswrapper[4811]: I0122 09:23:34.978090 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b69f3c8adef8f8bf79cb7c49a144ba2138a2c7f63dc3021fdcc0ce67ff372e7b"} err="failed to get container status \"b69f3c8adef8f8bf79cb7c49a144ba2138a2c7f63dc3021fdcc0ce67ff372e7b\": rpc error: code = NotFound desc = could not find container \"b69f3c8adef8f8bf79cb7c49a144ba2138a2c7f63dc3021fdcc0ce67ff372e7b\": container with ID starting with b69f3c8adef8f8bf79cb7c49a144ba2138a2c7f63dc3021fdcc0ce67ff372e7b not found: ID does not exist"
Jan 22 09:23:34 crc kubenswrapper[4811]: I0122 09:23:34.987826 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da71c17-c624-4ba1-b5b2-dd5410951026-combined-ca-bundle\") pod \"3da71c17-c624-4ba1-b5b2-dd5410951026\" (UID: \"3da71c17-c624-4ba1-b5b2-dd5410951026\") "
Jan 22 09:23:34 crc kubenswrapper[4811]: I0122 09:23:34.988820 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2nzz\" (UniqueName: \"kubernetes.io/projected/a4121565-e932-4f2e-a6f1-62ecfeaab63b-kube-api-access-g2nzz\") pod \"a4121565-e932-4f2e-a6f1-62ecfeaab63b\" (UID: \"a4121565-e932-4f2e-a6f1-62ecfeaab63b\") "
Jan 22 09:23:34 crc kubenswrapper[4811]: I0122 09:23:34.988891 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4121565-e932-4f2e-a6f1-62ecfeaab63b-combined-ca-bundle\") pod \"a4121565-e932-4f2e-a6f1-62ecfeaab63b\" (UID: \"a4121565-e932-4f2e-a6f1-62ecfeaab63b\") "
Jan 22 09:23:34 crc kubenswrapper[4811]: I0122 09:23:34.988958 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4121565-e932-4f2e-a6f1-62ecfeaab63b-config-data\") pod \"a4121565-e932-4f2e-a6f1-62ecfeaab63b\" (UID: \"a4121565-e932-4f2e-a6f1-62ecfeaab63b\") "
Jan 22 09:23:34 crc kubenswrapper[4811]: I0122 09:23:34.989018 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3da71c17-c624-4ba1-b5b2-dd5410951026-logs\") pod \"3da71c17-c624-4ba1-b5b2-dd5410951026\" (UID: \"3da71c17-c624-4ba1-b5b2-dd5410951026\") "
Jan 22 09:23:34 crc kubenswrapper[4811]: I0122 09:23:34.989140 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwmpj\" (UniqueName: \"kubernetes.io/projected/3da71c17-c624-4ba1-b5b2-dd5410951026-kube-api-access-nwmpj\") pod \"3da71c17-c624-4ba1-b5b2-dd5410951026\" (UID: \"3da71c17-c624-4ba1-b5b2-dd5410951026\") "
Jan 22 09:23:34 crc kubenswrapper[4811]: I0122 09:23:34.989165 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da71c17-c624-4ba1-b5b2-dd5410951026-config-data\") pod \"3da71c17-c624-4ba1-b5b2-dd5410951026\" (UID: \"3da71c17-c624-4ba1-b5b2-dd5410951026\") "
Jan 22 09:23:34 crc kubenswrapper[4811]: I0122 09:23:34.989431 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3da71c17-c624-4ba1-b5b2-dd5410951026-logs" (OuterVolumeSpecName: "logs") pod "3da71c17-c624-4ba1-b5b2-dd5410951026" (UID: "3da71c17-c624-4ba1-b5b2-dd5410951026"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:23:34 crc kubenswrapper[4811]: I0122 09:23:34.990111 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3da71c17-c624-4ba1-b5b2-dd5410951026-logs\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:34 crc kubenswrapper[4811]: I0122 09:23:34.994978 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3da71c17-c624-4ba1-b5b2-dd5410951026-kube-api-access-nwmpj" (OuterVolumeSpecName: "kube-api-access-nwmpj") pod "3da71c17-c624-4ba1-b5b2-dd5410951026" (UID: "3da71c17-c624-4ba1-b5b2-dd5410951026"). InnerVolumeSpecName "kube-api-access-nwmpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:23:34 crc kubenswrapper[4811]: I0122 09:23:34.997090 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4121565-e932-4f2e-a6f1-62ecfeaab63b-kube-api-access-g2nzz" (OuterVolumeSpecName: "kube-api-access-g2nzz") pod "a4121565-e932-4f2e-a6f1-62ecfeaab63b" (UID: "a4121565-e932-4f2e-a6f1-62ecfeaab63b"). InnerVolumeSpecName "kube-api-access-g2nzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.011687 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3da71c17-c624-4ba1-b5b2-dd5410951026-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3da71c17-c624-4ba1-b5b2-dd5410951026" (UID: "3da71c17-c624-4ba1-b5b2-dd5410951026"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.013218 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3da71c17-c624-4ba1-b5b2-dd5410951026-config-data" (OuterVolumeSpecName: "config-data") pod "3da71c17-c624-4ba1-b5b2-dd5410951026" (UID: "3da71c17-c624-4ba1-b5b2-dd5410951026"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.016822 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4121565-e932-4f2e-a6f1-62ecfeaab63b-config-data" (OuterVolumeSpecName: "config-data") pod "a4121565-e932-4f2e-a6f1-62ecfeaab63b" (UID: "a4121565-e932-4f2e-a6f1-62ecfeaab63b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.016839 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4121565-e932-4f2e-a6f1-62ecfeaab63b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a4121565-e932-4f2e-a6f1-62ecfeaab63b" (UID: "a4121565-e932-4f2e-a6f1-62ecfeaab63b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.091576 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4121565-e932-4f2e-a6f1-62ecfeaab63b-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.091608 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwmpj\" (UniqueName: \"kubernetes.io/projected/3da71c17-c624-4ba1-b5b2-dd5410951026-kube-api-access-nwmpj\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.091671 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da71c17-c624-4ba1-b5b2-dd5410951026-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.091771 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da71c17-c624-4ba1-b5b2-dd5410951026-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.091810 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2nzz\" (UniqueName: \"kubernetes.io/projected/a4121565-e932-4f2e-a6f1-62ecfeaab63b-kube-api-access-g2nzz\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.091824 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4121565-e932-4f2e-a6f1-62ecfeaab63b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.233884 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.241141 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.250015 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.255408 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.273381 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 09:23:35 crc kubenswrapper[4811]: E0122 09:23:35.273966 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3da71c17-c624-4ba1-b5b2-dd5410951026" containerName="nova-metadata-metadata" Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.274074 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="3da71c17-c624-4ba1-b5b2-dd5410951026" containerName="nova-metadata-metadata" Jan 22 09:23:35 crc kubenswrapper[4811]: E0122 09:23:35.274149 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4121565-e932-4f2e-a6f1-62ecfeaab63b" containerName="nova-cell1-novncproxy-novncproxy" Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.274215 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4121565-e932-4f2e-a6f1-62ecfeaab63b" containerName="nova-cell1-novncproxy-novncproxy" Jan 22 09:23:35 crc kubenswrapper[4811]: E0122 09:23:35.274280 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3da71c17-c624-4ba1-b5b2-dd5410951026" containerName="nova-metadata-log" Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 
Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.274545 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4121565-e932-4f2e-a6f1-62ecfeaab63b" containerName="nova-cell1-novncproxy-novncproxy"
Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.274613 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="3da71c17-c624-4ba1-b5b2-dd5410951026" containerName="nova-metadata-metadata"
Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.274697 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="3da71c17-c624-4ba1-b5b2-dd5410951026" containerName="nova-metadata-log"
Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.275341 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.280973 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.288982 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.290736 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.296318 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee19b0fe-2250-40e7-9917-230c53ad0f13-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee19b0fe-2250-40e7-9917-230c53ad0f13\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.296439 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee19b0fe-2250-40e7-9917-230c53ad0f13-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee19b0fe-2250-40e7-9917-230c53ad0f13\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.296541 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.296677 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee19b0fe-2250-40e7-9917-230c53ad0f13-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee19b0fe-2250-40e7-9917-230c53ad0f13\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.296855 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9l49\" (UniqueName: \"kubernetes.io/projected/ee19b0fe-2250-40e7-9917-230c53ad0f13-kube-api-access-l9l49\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee19b0fe-2250-40e7-9917-230c53ad0f13\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.296986 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee19b0fe-2250-40e7-9917-230c53ad0f13-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee19b0fe-2250-40e7-9917-230c53ad0f13\") " pod="openstack/nova-cell1-novncproxy-0"
\"nova-cell1-novncproxy-0\" (UID: \"ee19b0fe-2250-40e7-9917-230c53ad0f13\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.298253 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.299839 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.302813 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.308094 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.325191 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.398934 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90\") " pod="openstack/nova-metadata-0" Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.398981 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90\") " pod="openstack/nova-metadata-0" Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.399022 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee19b0fe-2250-40e7-9917-230c53ad0f13-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee19b0fe-2250-40e7-9917-230c53ad0f13\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.399044 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh6nn\" (UniqueName: \"kubernetes.io/projected/8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90-kube-api-access-jh6nn\") pod \"nova-metadata-0\" (UID: \"8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90\") " pod="openstack/nova-metadata-0" Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.399098 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90-logs\") pod \"nova-metadata-0\" (UID: \"8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90\") " pod="openstack/nova-metadata-0" Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.399124 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9l49\" (UniqueName: \"kubernetes.io/projected/ee19b0fe-2250-40e7-9917-230c53ad0f13-kube-api-access-l9l49\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee19b0fe-2250-40e7-9917-230c53ad0f13\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.399186 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee19b0fe-2250-40e7-9917-230c53ad0f13-nova-novncproxy-tls-certs\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"ee19b0fe-2250-40e7-9917-230c53ad0f13\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.399236 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee19b0fe-2250-40e7-9917-230c53ad0f13-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee19b0fe-2250-40e7-9917-230c53ad0f13\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.399270 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90-config-data\") pod \"nova-metadata-0\" (UID: \"8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90\") " pod="openstack/nova-metadata-0" Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.399343 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee19b0fe-2250-40e7-9917-230c53ad0f13-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee19b0fe-2250-40e7-9917-230c53ad0f13\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.403707 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee19b0fe-2250-40e7-9917-230c53ad0f13-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee19b0fe-2250-40e7-9917-230c53ad0f13\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.403718 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee19b0fe-2250-40e7-9917-230c53ad0f13-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee19b0fe-2250-40e7-9917-230c53ad0f13\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.404518 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee19b0fe-2250-40e7-9917-230c53ad0f13-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee19b0fe-2250-40e7-9917-230c53ad0f13\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.409141 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee19b0fe-2250-40e7-9917-230c53ad0f13-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee19b0fe-2250-40e7-9917-230c53ad0f13\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.413486 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9l49\" (UniqueName: \"kubernetes.io/projected/ee19b0fe-2250-40e7-9917-230c53ad0f13-kube-api-access-l9l49\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee19b0fe-2250-40e7-9917-230c53ad0f13\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.500417 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90\") " pod="openstack/nova-metadata-0" Jan 22 09:23:35 crc kubenswrapper[4811]: 
I0122 09:23:35.500552 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90\") " pod="openstack/nova-metadata-0" Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.500660 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh6nn\" (UniqueName: \"kubernetes.io/projected/8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90-kube-api-access-jh6nn\") pod \"nova-metadata-0\" (UID: \"8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90\") " pod="openstack/nova-metadata-0" Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.500756 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90-logs\") pod \"nova-metadata-0\" (UID: \"8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90\") " pod="openstack/nova-metadata-0" Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.500878 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90-config-data\") pod \"nova-metadata-0\" (UID: \"8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90\") " pod="openstack/nova-metadata-0" Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.501069 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.501132 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.501371 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90-logs\") pod \"nova-metadata-0\" (UID: \"8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90\") " pod="openstack/nova-metadata-0" Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.504874 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90-config-data\") pod \"nova-metadata-0\" (UID: \"8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90\") " pod="openstack/nova-metadata-0" Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.505969 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90\") " pod="openstack/nova-metadata-0" Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.505993 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90\") " 
pod="openstack/nova-metadata-0" Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.515199 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh6nn\" (UniqueName: \"kubernetes.io/projected/8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90-kube-api-access-jh6nn\") pod \"nova-metadata-0\" (UID: \"8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90\") " pod="openstack/nova-metadata-0" Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.590794 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 22 09:23:35 crc kubenswrapper[4811]: I0122 09:23:35.621484 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 09:23:36 crc kubenswrapper[4811]: I0122 09:23:36.001695 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3da71c17-c624-4ba1-b5b2-dd5410951026" path="/var/lib/kubelet/pods/3da71c17-c624-4ba1-b5b2-dd5410951026/volumes" Jan 22 09:23:36 crc kubenswrapper[4811]: I0122 09:23:36.002688 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4121565-e932-4f2e-a6f1-62ecfeaab63b" path="/var/lib/kubelet/pods/a4121565-e932-4f2e-a6f1-62ecfeaab63b/volumes" Jan 22 09:23:36 crc kubenswrapper[4811]: I0122 09:23:36.003463 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 09:23:36 crc kubenswrapper[4811]: W0122 09:23:36.003975 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee19b0fe_2250_40e7_9917_230c53ad0f13.slice/crio-89fd317d65c5c0a49178a6f1aa83f35d01e831fbae55667bc188b34ba92a94be WatchSource:0}: Error finding container 89fd317d65c5c0a49178a6f1aa83f35d01e831fbae55667bc188b34ba92a94be: Status 404 returned error can't find the container with id 89fd317d65c5c0a49178a6f1aa83f35d01e831fbae55667bc188b34ba92a94be Jan 22 09:23:36 crc kubenswrapper[4811]: I0122 09:23:36.064913 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 09:23:36 crc kubenswrapper[4811]: W0122 09:23:36.074284 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8807b3ea_b4ec_44d4_b2bc_d79fcb7c0a90.slice/crio-ffbad12c23014d97efcc1f01bc2056608670f929248d19f36f658602c9de47c7 WatchSource:0}: Error finding container ffbad12c23014d97efcc1f01bc2056608670f929248d19f36f658602c9de47c7: Status 404 returned error can't find the container with id ffbad12c23014d97efcc1f01bc2056608670f929248d19f36f658602c9de47c7 Jan 22 09:23:36 crc kubenswrapper[4811]: I0122 09:23:36.928300 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90","Type":"ContainerStarted","Data":"9aeee3bf886228f178ff79e6abb6efd4b28625304470f1cd6dfe411000da41dd"} Jan 22 09:23:36 crc kubenswrapper[4811]: I0122 09:23:36.928590 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90","Type":"ContainerStarted","Data":"638643c7bf0ec3aeec80d2b38cd6ae1c6cf1f9ef9681af2086a1574c73c87a09"} Jan 22 09:23:36 crc kubenswrapper[4811]: I0122 09:23:36.928602 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90","Type":"ContainerStarted","Data":"ffbad12c23014d97efcc1f01bc2056608670f929248d19f36f658602c9de47c7"} Jan 
22 09:23:36 crc kubenswrapper[4811]: I0122 09:23:36.930805 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ee19b0fe-2250-40e7-9917-230c53ad0f13","Type":"ContainerStarted","Data":"f7e9b500ffcfa2f54fe05eab6aea590958a06191304f4e5911cfbb03a3d820ba"} Jan 22 09:23:36 crc kubenswrapper[4811]: I0122 09:23:36.930832 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ee19b0fe-2250-40e7-9917-230c53ad0f13","Type":"ContainerStarted","Data":"89fd317d65c5c0a49178a6f1aa83f35d01e831fbae55667bc188b34ba92a94be"} Jan 22 09:23:36 crc kubenswrapper[4811]: I0122 09:23:36.945618 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.945604301 podStartE2EDuration="1.945604301s" podCreationTimestamp="2026-01-22 09:23:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:23:36.94346209 +0000 UTC m=+1061.265649213" watchObservedRunningTime="2026-01-22 09:23:36.945604301 +0000 UTC m=+1061.267791414" Jan 22 09:23:36 crc kubenswrapper[4811]: I0122 09:23:36.960114 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.960099818 podStartE2EDuration="1.960099818s" podCreationTimestamp="2026-01-22 09:23:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:23:36.956875847 +0000 UTC m=+1061.279062970" watchObservedRunningTime="2026-01-22 09:23:36.960099818 +0000 UTC m=+1061.282286941" Jan 22 09:23:37 crc kubenswrapper[4811]: I0122 09:23:37.320726 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 22 09:23:37 crc kubenswrapper[4811]: I0122 09:23:37.321135 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 22 09:23:37 crc kubenswrapper[4811]: I0122 09:23:37.325735 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 22 09:23:37 crc kubenswrapper[4811]: I0122 09:23:37.334498 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 22 09:23:37 crc kubenswrapper[4811]: I0122 09:23:37.938765 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 22 09:23:37 crc kubenswrapper[4811]: I0122 09:23:37.941512 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 22 09:23:38 crc kubenswrapper[4811]: I0122 09:23:38.101492 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78c596d7cf-qxrcp"] Jan 22 09:23:38 crc kubenswrapper[4811]: I0122 09:23:38.106165 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78c596d7cf-qxrcp" Jan 22 09:23:38 crc kubenswrapper[4811]: I0122 09:23:38.116346 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78c596d7cf-qxrcp"] Jan 22 09:23:38 crc kubenswrapper[4811]: I0122 09:23:38.157085 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j27s2\" (UniqueName: \"kubernetes.io/projected/341dafad-c519-4c1d-a8ea-8de3df06709e-kube-api-access-j27s2\") pod \"dnsmasq-dns-78c596d7cf-qxrcp\" (UID: \"341dafad-c519-4c1d-a8ea-8de3df06709e\") " pod="openstack/dnsmasq-dns-78c596d7cf-qxrcp" Jan 22 09:23:38 crc kubenswrapper[4811]: I0122 09:23:38.157197 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/341dafad-c519-4c1d-a8ea-8de3df06709e-dns-svc\") pod \"dnsmasq-dns-78c596d7cf-qxrcp\" (UID: \"341dafad-c519-4c1d-a8ea-8de3df06709e\") " pod="openstack/dnsmasq-dns-78c596d7cf-qxrcp" Jan 22 09:23:38 crc kubenswrapper[4811]: I0122 09:23:38.157284 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/341dafad-c519-4c1d-a8ea-8de3df06709e-config\") pod \"dnsmasq-dns-78c596d7cf-qxrcp\" (UID: \"341dafad-c519-4c1d-a8ea-8de3df06709e\") " pod="openstack/dnsmasq-dns-78c596d7cf-qxrcp" Jan 22 09:23:38 crc kubenswrapper[4811]: I0122 09:23:38.157324 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/341dafad-c519-4c1d-a8ea-8de3df06709e-ovsdbserver-nb\") pod \"dnsmasq-dns-78c596d7cf-qxrcp\" (UID: \"341dafad-c519-4c1d-a8ea-8de3df06709e\") " pod="openstack/dnsmasq-dns-78c596d7cf-qxrcp" Jan 22 09:23:38 crc kubenswrapper[4811]: I0122 09:23:38.157339 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/341dafad-c519-4c1d-a8ea-8de3df06709e-ovsdbserver-sb\") pod \"dnsmasq-dns-78c596d7cf-qxrcp\" (UID: \"341dafad-c519-4c1d-a8ea-8de3df06709e\") " pod="openstack/dnsmasq-dns-78c596d7cf-qxrcp" Jan 22 09:23:38 crc kubenswrapper[4811]: I0122 09:23:38.259107 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/341dafad-c519-4c1d-a8ea-8de3df06709e-config\") pod \"dnsmasq-dns-78c596d7cf-qxrcp\" (UID: \"341dafad-c519-4c1d-a8ea-8de3df06709e\") " pod="openstack/dnsmasq-dns-78c596d7cf-qxrcp" Jan 22 09:23:38 crc kubenswrapper[4811]: I0122 09:23:38.259224 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/341dafad-c519-4c1d-a8ea-8de3df06709e-ovsdbserver-nb\") pod \"dnsmasq-dns-78c596d7cf-qxrcp\" (UID: \"341dafad-c519-4c1d-a8ea-8de3df06709e\") " pod="openstack/dnsmasq-dns-78c596d7cf-qxrcp" Jan 22 09:23:38 crc kubenswrapper[4811]: I0122 09:23:38.259245 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/341dafad-c519-4c1d-a8ea-8de3df06709e-ovsdbserver-sb\") pod \"dnsmasq-dns-78c596d7cf-qxrcp\" (UID: \"341dafad-c519-4c1d-a8ea-8de3df06709e\") " pod="openstack/dnsmasq-dns-78c596d7cf-qxrcp" Jan 22 09:23:38 crc kubenswrapper[4811]: I0122 09:23:38.259273 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-j27s2\" (UniqueName: \"kubernetes.io/projected/341dafad-c519-4c1d-a8ea-8de3df06709e-kube-api-access-j27s2\") pod \"dnsmasq-dns-78c596d7cf-qxrcp\" (UID: \"341dafad-c519-4c1d-a8ea-8de3df06709e\") " pod="openstack/dnsmasq-dns-78c596d7cf-qxrcp" Jan 22 09:23:38 crc kubenswrapper[4811]: I0122 09:23:38.259397 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/341dafad-c519-4c1d-a8ea-8de3df06709e-dns-svc\") pod \"dnsmasq-dns-78c596d7cf-qxrcp\" (UID: \"341dafad-c519-4c1d-a8ea-8de3df06709e\") " pod="openstack/dnsmasq-dns-78c596d7cf-qxrcp" Jan 22 09:23:38 crc kubenswrapper[4811]: I0122 09:23:38.259955 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/341dafad-c519-4c1d-a8ea-8de3df06709e-ovsdbserver-sb\") pod \"dnsmasq-dns-78c596d7cf-qxrcp\" (UID: \"341dafad-c519-4c1d-a8ea-8de3df06709e\") " pod="openstack/dnsmasq-dns-78c596d7cf-qxrcp" Jan 22 09:23:38 crc kubenswrapper[4811]: I0122 09:23:38.260175 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/341dafad-c519-4c1d-a8ea-8de3df06709e-dns-svc\") pod \"dnsmasq-dns-78c596d7cf-qxrcp\" (UID: \"341dafad-c519-4c1d-a8ea-8de3df06709e\") " pod="openstack/dnsmasq-dns-78c596d7cf-qxrcp" Jan 22 09:23:38 crc kubenswrapper[4811]: I0122 09:23:38.260494 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/341dafad-c519-4c1d-a8ea-8de3df06709e-config\") pod \"dnsmasq-dns-78c596d7cf-qxrcp\" (UID: \"341dafad-c519-4c1d-a8ea-8de3df06709e\") " pod="openstack/dnsmasq-dns-78c596d7cf-qxrcp" Jan 22 09:23:38 crc kubenswrapper[4811]: I0122 09:23:38.261049 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/341dafad-c519-4c1d-a8ea-8de3df06709e-ovsdbserver-nb\") pod \"dnsmasq-dns-78c596d7cf-qxrcp\" (UID: \"341dafad-c519-4c1d-a8ea-8de3df06709e\") " pod="openstack/dnsmasq-dns-78c596d7cf-qxrcp" Jan 22 09:23:38 crc kubenswrapper[4811]: I0122 09:23:38.281947 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j27s2\" (UniqueName: \"kubernetes.io/projected/341dafad-c519-4c1d-a8ea-8de3df06709e-kube-api-access-j27s2\") pod \"dnsmasq-dns-78c596d7cf-qxrcp\" (UID: \"341dafad-c519-4c1d-a8ea-8de3df06709e\") " pod="openstack/dnsmasq-dns-78c596d7cf-qxrcp" Jan 22 09:23:38 crc kubenswrapper[4811]: I0122 09:23:38.421806 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78c596d7cf-qxrcp" Jan 22 09:23:38 crc kubenswrapper[4811]: I0122 09:23:38.760440 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78c596d7cf-qxrcp"] Jan 22 09:23:38 crc kubenswrapper[4811]: I0122 09:23:38.949976 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c596d7cf-qxrcp" event={"ID":"341dafad-c519-4c1d-a8ea-8de3df06709e","Type":"ContainerStarted","Data":"344d165ce621404c7b5cd025b503bce62c4b14556ea10f347da344e69054f69e"} Jan 22 09:23:39 crc kubenswrapper[4811]: I0122 09:23:39.958126 4811 generic.go:334] "Generic (PLEG): container finished" podID="341dafad-c519-4c1d-a8ea-8de3df06709e" containerID="84a850421e27d37426b2930d4de0238c1c954499a80e21e7d1aa56891fbc4681" exitCode=0 Jan 22 09:23:39 crc kubenswrapper[4811]: I0122 09:23:39.958223 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c596d7cf-qxrcp" event={"ID":"341dafad-c519-4c1d-a8ea-8de3df06709e","Type":"ContainerDied","Data":"84a850421e27d37426b2930d4de0238c1c954499a80e21e7d1aa56891fbc4681"} Jan 22 09:23:40 crc kubenswrapper[4811]: I0122 09:23:40.201634 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 22 09:23:40 crc kubenswrapper[4811]: I0122 09:23:40.549955 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:23:40 crc kubenswrapper[4811]: I0122 09:23:40.550279 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bf971fef-0ef1-47b6-ba75-ca76dde2e658" containerName="sg-core" containerID="cri-o://9a5a1873bd8ec0e3b26b4ceef007129955303827ea35881e2a75785cb4a683bc" gracePeriod=30 Jan 22 09:23:40 crc kubenswrapper[4811]: I0122 09:23:40.550469 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bf971fef-0ef1-47b6-ba75-ca76dde2e658" containerName="ceilometer-notification-agent" containerID="cri-o://37f6d0767104e3f03bfba57da231e1602954d2430b45c16e733ffc002f8c7ac6" gracePeriod=30 Jan 22 09:23:40 crc kubenswrapper[4811]: I0122 09:23:40.550592 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bf971fef-0ef1-47b6-ba75-ca76dde2e658" containerName="ceilometer-central-agent" containerID="cri-o://c57326606911589e15ed05807cfbc3f21e6ce59aa1e0583b6f78b3c49698fc55" gracePeriod=30 Jan 22 09:23:40 crc kubenswrapper[4811]: I0122 09:23:40.551524 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bf971fef-0ef1-47b6-ba75-ca76dde2e658" containerName="proxy-httpd" containerID="cri-o://3fd61ef4d17d6f0a8c457d3e3c9b89bb2b1a8f3005b7d5709c98fb22b6b5b513" gracePeriod=30 Jan 22 09:23:40 crc kubenswrapper[4811]: I0122 09:23:40.575126 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="bf971fef-0ef1-47b6-ba75-ca76dde2e658" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 22 09:23:40 crc kubenswrapper[4811]: I0122 09:23:40.591096 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 22 09:23:40 crc kubenswrapper[4811]: I0122 09:23:40.622083 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 22 09:23:40 crc kubenswrapper[4811]: I0122 09:23:40.622610 4811 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 22 09:23:40 crc kubenswrapper[4811]: I0122 09:23:40.974808 4811 generic.go:334] "Generic (PLEG): container finished" podID="bf971fef-0ef1-47b6-ba75-ca76dde2e658" containerID="3fd61ef4d17d6f0a8c457d3e3c9b89bb2b1a8f3005b7d5709c98fb22b6b5b513" exitCode=0 Jan 22 09:23:40 crc kubenswrapper[4811]: I0122 09:23:40.974839 4811 generic.go:334] "Generic (PLEG): container finished" podID="bf971fef-0ef1-47b6-ba75-ca76dde2e658" containerID="9a5a1873bd8ec0e3b26b4ceef007129955303827ea35881e2a75785cb4a683bc" exitCode=2 Jan 22 09:23:40 crc kubenswrapper[4811]: I0122 09:23:40.974878 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf971fef-0ef1-47b6-ba75-ca76dde2e658","Type":"ContainerDied","Data":"3fd61ef4d17d6f0a8c457d3e3c9b89bb2b1a8f3005b7d5709c98fb22b6b5b513"} Jan 22 09:23:40 crc kubenswrapper[4811]: I0122 09:23:40.974906 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf971fef-0ef1-47b6-ba75-ca76dde2e658","Type":"ContainerDied","Data":"9a5a1873bd8ec0e3b26b4ceef007129955303827ea35881e2a75785cb4a683bc"} Jan 22 09:23:40 crc kubenswrapper[4811]: I0122 09:23:40.978190 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c596d7cf-qxrcp" event={"ID":"341dafad-c519-4c1d-a8ea-8de3df06709e","Type":"ContainerStarted","Data":"860fb2da7bda7a58eac3af72d1c3d585ebfac063fc94ac5a5259caff333dfe9a"} Jan 22 09:23:40 crc kubenswrapper[4811]: I0122 09:23:40.978315 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e229ce79-29a8-487c-9468-3a0ae1449022" containerName="nova-api-log" containerID="cri-o://a8254d941cdca8d983b408bed5220b7b21e7a6da6e6b73f1d3de0ba18e50f08e" gracePeriod=30 Jan 22 09:23:40 crc kubenswrapper[4811]: I0122 09:23:40.978506 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78c596d7cf-qxrcp" Jan 22 09:23:40 crc kubenswrapper[4811]: I0122 09:23:40.978560 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e229ce79-29a8-487c-9468-3a0ae1449022" containerName="nova-api-api" containerID="cri-o://90a38fdd2abe993092f56c8d59c81f75ca092a08b68a26f65909329e10830a8f" gracePeriod=30 Jan 22 09:23:41 crc kubenswrapper[4811]: I0122 09:23:41.006117 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78c596d7cf-qxrcp" podStartSLOduration=3.006100488 podStartE2EDuration="3.006100488s" podCreationTimestamp="2026-01-22 09:23:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:23:40.994011339 +0000 UTC m=+1065.316198452" watchObservedRunningTime="2026-01-22 09:23:41.006100488 +0000 UTC m=+1065.328287611" Jan 22 09:23:41 crc kubenswrapper[4811]: I0122 09:23:41.990141 4811 generic.go:334] "Generic (PLEG): container finished" podID="bf971fef-0ef1-47b6-ba75-ca76dde2e658" containerID="c57326606911589e15ed05807cfbc3f21e6ce59aa1e0583b6f78b3c49698fc55" exitCode=0 Jan 22 09:23:41 crc kubenswrapper[4811]: I0122 09:23:41.990199 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf971fef-0ef1-47b6-ba75-ca76dde2e658","Type":"ContainerDied","Data":"c57326606911589e15ed05807cfbc3f21e6ce59aa1e0583b6f78b3c49698fc55"} Jan 22 09:23:41 crc kubenswrapper[4811]: I0122 09:23:41.993254 4811 generic.go:334] "Generic 
(PLEG): container finished" podID="e229ce79-29a8-487c-9468-3a0ae1449022" containerID="a8254d941cdca8d983b408bed5220b7b21e7a6da6e6b73f1d3de0ba18e50f08e" exitCode=143 Jan 22 09:23:42 crc kubenswrapper[4811]: I0122 09:23:42.002707 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e229ce79-29a8-487c-9468-3a0ae1449022","Type":"ContainerDied","Data":"a8254d941cdca8d983b408bed5220b7b21e7a6da6e6b73f1d3de0ba18e50f08e"} Jan 22 09:23:44 crc kubenswrapper[4811]: I0122 09:23:44.441470 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 09:23:44 crc kubenswrapper[4811]: I0122 09:23:44.584227 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e229ce79-29a8-487c-9468-3a0ae1449022-combined-ca-bundle\") pod \"e229ce79-29a8-487c-9468-3a0ae1449022\" (UID: \"e229ce79-29a8-487c-9468-3a0ae1449022\") " Jan 22 09:23:44 crc kubenswrapper[4811]: I0122 09:23:44.584343 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e229ce79-29a8-487c-9468-3a0ae1449022-logs\") pod \"e229ce79-29a8-487c-9468-3a0ae1449022\" (UID: \"e229ce79-29a8-487c-9468-3a0ae1449022\") " Jan 22 09:23:44 crc kubenswrapper[4811]: I0122 09:23:44.584403 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79nc6\" (UniqueName: \"kubernetes.io/projected/e229ce79-29a8-487c-9468-3a0ae1449022-kube-api-access-79nc6\") pod \"e229ce79-29a8-487c-9468-3a0ae1449022\" (UID: \"e229ce79-29a8-487c-9468-3a0ae1449022\") " Jan 22 09:23:44 crc kubenswrapper[4811]: I0122 09:23:44.584483 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e229ce79-29a8-487c-9468-3a0ae1449022-config-data\") pod \"e229ce79-29a8-487c-9468-3a0ae1449022\" (UID: \"e229ce79-29a8-487c-9468-3a0ae1449022\") " Jan 22 09:23:44 crc kubenswrapper[4811]: I0122 09:23:44.584917 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e229ce79-29a8-487c-9468-3a0ae1449022-logs" (OuterVolumeSpecName: "logs") pod "e229ce79-29a8-487c-9468-3a0ae1449022" (UID: "e229ce79-29a8-487c-9468-3a0ae1449022"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:23:44 crc kubenswrapper[4811]: I0122 09:23:44.610374 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e229ce79-29a8-487c-9468-3a0ae1449022-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e229ce79-29a8-487c-9468-3a0ae1449022" (UID: "e229ce79-29a8-487c-9468-3a0ae1449022"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:23:44 crc kubenswrapper[4811]: I0122 09:23:44.610739 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e229ce79-29a8-487c-9468-3a0ae1449022-kube-api-access-79nc6" (OuterVolumeSpecName: "kube-api-access-79nc6") pod "e229ce79-29a8-487c-9468-3a0ae1449022" (UID: "e229ce79-29a8-487c-9468-3a0ae1449022"). InnerVolumeSpecName "kube-api-access-79nc6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:23:44 crc kubenswrapper[4811]: I0122 09:23:44.622488 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e229ce79-29a8-487c-9468-3a0ae1449022-config-data" (OuterVolumeSpecName: "config-data") pod "e229ce79-29a8-487c-9468-3a0ae1449022" (UID: "e229ce79-29a8-487c-9468-3a0ae1449022"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:23:44 crc kubenswrapper[4811]: I0122 09:23:44.686588 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e229ce79-29a8-487c-9468-3a0ae1449022-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:44 crc kubenswrapper[4811]: I0122 09:23:44.686988 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e229ce79-29a8-487c-9468-3a0ae1449022-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:44 crc kubenswrapper[4811]: I0122 09:23:44.687093 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e229ce79-29a8-487c-9468-3a0ae1449022-logs\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:44 crc kubenswrapper[4811]: I0122 09:23:44.687161 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79nc6\" (UniqueName: \"kubernetes.io/projected/e229ce79-29a8-487c-9468-3a0ae1449022-kube-api-access-79nc6\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:44 crc kubenswrapper[4811]: I0122 09:23:44.906843 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.016491 4811 generic.go:334] "Generic (PLEG): container finished" podID="e229ce79-29a8-487c-9468-3a0ae1449022" containerID="90a38fdd2abe993092f56c8d59c81f75ca092a08b68a26f65909329e10830a8f" exitCode=0 Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.016552 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e229ce79-29a8-487c-9468-3a0ae1449022","Type":"ContainerDied","Data":"90a38fdd2abe993092f56c8d59c81f75ca092a08b68a26f65909329e10830a8f"} Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.016579 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e229ce79-29a8-487c-9468-3a0ae1449022","Type":"ContainerDied","Data":"33e455ab694f97b17a9b165842df97e4478bac0f2418840867d10877ed4219d1"} Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.016596 4811 scope.go:117] "RemoveContainer" containerID="90a38fdd2abe993092f56c8d59c81f75ca092a08b68a26f65909329e10830a8f" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.016722 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.024135 4811 generic.go:334] "Generic (PLEG): container finished" podID="bf971fef-0ef1-47b6-ba75-ca76dde2e658" containerID="37f6d0767104e3f03bfba57da231e1602954d2430b45c16e733ffc002f8c7ac6" exitCode=0 Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.024184 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf971fef-0ef1-47b6-ba75-ca76dde2e658","Type":"ContainerDied","Data":"37f6d0767104e3f03bfba57da231e1602954d2430b45c16e733ffc002f8c7ac6"} Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.024190 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.024211 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf971fef-0ef1-47b6-ba75-ca76dde2e658","Type":"ContainerDied","Data":"519ecc41eabdb2622f70369133a0de99bf6f8f3cdc596f2c39fa56e5be118a69"} Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.043004 4811 scope.go:117] "RemoveContainer" containerID="a8254d941cdca8d983b408bed5220b7b21e7a6da6e6b73f1d3de0ba18e50f08e" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.053100 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.059690 4811 scope.go:117] "RemoveContainer" containerID="90a38fdd2abe993092f56c8d59c81f75ca092a08b68a26f65909329e10830a8f" Jan 22 09:23:45 crc kubenswrapper[4811]: E0122 09:23:45.065831 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90a38fdd2abe993092f56c8d59c81f75ca092a08b68a26f65909329e10830a8f\": container with ID starting with 90a38fdd2abe993092f56c8d59c81f75ca092a08b68a26f65909329e10830a8f not found: ID does not exist" containerID="90a38fdd2abe993092f56c8d59c81f75ca092a08b68a26f65909329e10830a8f" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.065890 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90a38fdd2abe993092f56c8d59c81f75ca092a08b68a26f65909329e10830a8f"} err="failed to get container status \"90a38fdd2abe993092f56c8d59c81f75ca092a08b68a26f65909329e10830a8f\": rpc error: code = NotFound desc = could not find container \"90a38fdd2abe993092f56c8d59c81f75ca092a08b68a26f65909329e10830a8f\": container with ID starting with 90a38fdd2abe993092f56c8d59c81f75ca092a08b68a26f65909329e10830a8f not found: ID does not exist" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.065923 4811 scope.go:117] "RemoveContainer" containerID="a8254d941cdca8d983b408bed5220b7b21e7a6da6e6b73f1d3de0ba18e50f08e" Jan 22 09:23:45 crc kubenswrapper[4811]: E0122 09:23:45.066219 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8254d941cdca8d983b408bed5220b7b21e7a6da6e6b73f1d3de0ba18e50f08e\": container with ID starting with a8254d941cdca8d983b408bed5220b7b21e7a6da6e6b73f1d3de0ba18e50f08e not found: ID does not exist" containerID="a8254d941cdca8d983b408bed5220b7b21e7a6da6e6b73f1d3de0ba18e50f08e" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.066247 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8254d941cdca8d983b408bed5220b7b21e7a6da6e6b73f1d3de0ba18e50f08e"} err="failed to get container status 
\"a8254d941cdca8d983b408bed5220b7b21e7a6da6e6b73f1d3de0ba18e50f08e\": rpc error: code = NotFound desc = could not find container \"a8254d941cdca8d983b408bed5220b7b21e7a6da6e6b73f1d3de0ba18e50f08e\": container with ID starting with a8254d941cdca8d983b408bed5220b7b21e7a6da6e6b73f1d3de0ba18e50f08e not found: ID does not exist" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.066268 4811 scope.go:117] "RemoveContainer" containerID="3fd61ef4d17d6f0a8c457d3e3c9b89bb2b1a8f3005b7d5709c98fb22b6b5b513" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.071974 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.081407 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 22 09:23:45 crc kubenswrapper[4811]: E0122 09:23:45.090585 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf971fef-0ef1-47b6-ba75-ca76dde2e658" containerName="ceilometer-notification-agent" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.090613 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf971fef-0ef1-47b6-ba75-ca76dde2e658" containerName="ceilometer-notification-agent" Jan 22 09:23:45 crc kubenswrapper[4811]: E0122 09:23:45.090643 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf971fef-0ef1-47b6-ba75-ca76dde2e658" containerName="proxy-httpd" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.090652 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf971fef-0ef1-47b6-ba75-ca76dde2e658" containerName="proxy-httpd" Jan 22 09:23:45 crc kubenswrapper[4811]: E0122 09:23:45.090671 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf971fef-0ef1-47b6-ba75-ca76dde2e658" containerName="sg-core" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.090676 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf971fef-0ef1-47b6-ba75-ca76dde2e658" containerName="sg-core" Jan 22 09:23:45 crc kubenswrapper[4811]: E0122 09:23:45.090699 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e229ce79-29a8-487c-9468-3a0ae1449022" containerName="nova-api-log" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.090705 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e229ce79-29a8-487c-9468-3a0ae1449022" containerName="nova-api-log" Jan 22 09:23:45 crc kubenswrapper[4811]: E0122 09:23:45.090713 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf971fef-0ef1-47b6-ba75-ca76dde2e658" containerName="ceilometer-central-agent" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.090720 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf971fef-0ef1-47b6-ba75-ca76dde2e658" containerName="ceilometer-central-agent" Jan 22 09:23:45 crc kubenswrapper[4811]: E0122 09:23:45.090737 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e229ce79-29a8-487c-9468-3a0ae1449022" containerName="nova-api-api" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.090743 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e229ce79-29a8-487c-9468-3a0ae1449022" containerName="nova-api-api" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.091070 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="e229ce79-29a8-487c-9468-3a0ae1449022" containerName="nova-api-api" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.091104 4811 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="bf971fef-0ef1-47b6-ba75-ca76dde2e658" containerName="ceilometer-notification-agent" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.091119 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf971fef-0ef1-47b6-ba75-ca76dde2e658" containerName="proxy-httpd" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.091129 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf971fef-0ef1-47b6-ba75-ca76dde2e658" containerName="sg-core" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.091138 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf971fef-0ef1-47b6-ba75-ca76dde2e658" containerName="ceilometer-central-agent" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.091147 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="e229ce79-29a8-487c-9468-3a0ae1449022" containerName="nova-api-log" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.102340 4811 scope.go:117] "RemoveContainer" containerID="9a5a1873bd8ec0e3b26b4ceef007129955303827ea35881e2a75785cb4a683bc" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.102873 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf971fef-0ef1-47b6-ba75-ca76dde2e658-log-httpd\") pod \"bf971fef-0ef1-47b6-ba75-ca76dde2e658\" (UID: \"bf971fef-0ef1-47b6-ba75-ca76dde2e658\") " Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.102926 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf971fef-0ef1-47b6-ba75-ca76dde2e658-run-httpd\") pod \"bf971fef-0ef1-47b6-ba75-ca76dde2e658\" (UID: \"bf971fef-0ef1-47b6-ba75-ca76dde2e658\") " Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.102991 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dc4f\" (UniqueName: \"kubernetes.io/projected/bf971fef-0ef1-47b6-ba75-ca76dde2e658-kube-api-access-2dc4f\") pod \"bf971fef-0ef1-47b6-ba75-ca76dde2e658\" (UID: \"bf971fef-0ef1-47b6-ba75-ca76dde2e658\") " Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.103025 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf971fef-0ef1-47b6-ba75-ca76dde2e658-scripts\") pod \"bf971fef-0ef1-47b6-ba75-ca76dde2e658\" (UID: \"bf971fef-0ef1-47b6-ba75-ca76dde2e658\") " Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.103048 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bf971fef-0ef1-47b6-ba75-ca76dde2e658-sg-core-conf-yaml\") pod \"bf971fef-0ef1-47b6-ba75-ca76dde2e658\" (UID: \"bf971fef-0ef1-47b6-ba75-ca76dde2e658\") " Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.103069 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf971fef-0ef1-47b6-ba75-ca76dde2e658-config-data\") pod \"bf971fef-0ef1-47b6-ba75-ca76dde2e658\" (UID: \"bf971fef-0ef1-47b6-ba75-ca76dde2e658\") " Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.103190 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf971fef-0ef1-47b6-ba75-ca76dde2e658-ceilometer-tls-certs\") pod \"bf971fef-0ef1-47b6-ba75-ca76dde2e658\" (UID: \"bf971fef-0ef1-47b6-ba75-ca76dde2e658\") " Jan 22 09:23:45 crc 
kubenswrapper[4811]: I0122 09:23:45.103247 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf971fef-0ef1-47b6-ba75-ca76dde2e658-combined-ca-bundle\") pod \"bf971fef-0ef1-47b6-ba75-ca76dde2e658\" (UID: \"bf971fef-0ef1-47b6-ba75-ca76dde2e658\") " Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.103475 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf971fef-0ef1-47b6-ba75-ca76dde2e658-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bf971fef-0ef1-47b6-ba75-ca76dde2e658" (UID: "bf971fef-0ef1-47b6-ba75-ca76dde2e658"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.103761 4811 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf971fef-0ef1-47b6-ba75-ca76dde2e658-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.107217 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.107304 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.110599 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf971fef-0ef1-47b6-ba75-ca76dde2e658-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bf971fef-0ef1-47b6-ba75-ca76dde2e658" (UID: "bf971fef-0ef1-47b6-ba75-ca76dde2e658"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.111562 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.111693 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.111561 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.120896 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf971fef-0ef1-47b6-ba75-ca76dde2e658-kube-api-access-2dc4f" (OuterVolumeSpecName: "kube-api-access-2dc4f") pod "bf971fef-0ef1-47b6-ba75-ca76dde2e658" (UID: "bf971fef-0ef1-47b6-ba75-ca76dde2e658"). InnerVolumeSpecName "kube-api-access-2dc4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.120983 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf971fef-0ef1-47b6-ba75-ca76dde2e658-scripts" (OuterVolumeSpecName: "scripts") pod "bf971fef-0ef1-47b6-ba75-ca76dde2e658" (UID: "bf971fef-0ef1-47b6-ba75-ca76dde2e658"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.140461 4811 scope.go:117] "RemoveContainer" containerID="37f6d0767104e3f03bfba57da231e1602954d2430b45c16e733ffc002f8c7ac6" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.157980 4811 scope.go:117] "RemoveContainer" containerID="c57326606911589e15ed05807cfbc3f21e6ce59aa1e0583b6f78b3c49698fc55" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.163091 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf971fef-0ef1-47b6-ba75-ca76dde2e658-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bf971fef-0ef1-47b6-ba75-ca76dde2e658" (UID: "bf971fef-0ef1-47b6-ba75-ca76dde2e658"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.171304 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf971fef-0ef1-47b6-ba75-ca76dde2e658-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "bf971fef-0ef1-47b6-ba75-ca76dde2e658" (UID: "bf971fef-0ef1-47b6-ba75-ca76dde2e658"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.173099 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf971fef-0ef1-47b6-ba75-ca76dde2e658-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf971fef-0ef1-47b6-ba75-ca76dde2e658" (UID: "bf971fef-0ef1-47b6-ba75-ca76dde2e658"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.176122 4811 scope.go:117] "RemoveContainer" containerID="3fd61ef4d17d6f0a8c457d3e3c9b89bb2b1a8f3005b7d5709c98fb22b6b5b513" Jan 22 09:23:45 crc kubenswrapper[4811]: E0122 09:23:45.176924 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fd61ef4d17d6f0a8c457d3e3c9b89bb2b1a8f3005b7d5709c98fb22b6b5b513\": container with ID starting with 3fd61ef4d17d6f0a8c457d3e3c9b89bb2b1a8f3005b7d5709c98fb22b6b5b513 not found: ID does not exist" containerID="3fd61ef4d17d6f0a8c457d3e3c9b89bb2b1a8f3005b7d5709c98fb22b6b5b513" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.176957 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fd61ef4d17d6f0a8c457d3e3c9b89bb2b1a8f3005b7d5709c98fb22b6b5b513"} err="failed to get container status \"3fd61ef4d17d6f0a8c457d3e3c9b89bb2b1a8f3005b7d5709c98fb22b6b5b513\": rpc error: code = NotFound desc = could not find container \"3fd61ef4d17d6f0a8c457d3e3c9b89bb2b1a8f3005b7d5709c98fb22b6b5b513\": container with ID starting with 3fd61ef4d17d6f0a8c457d3e3c9b89bb2b1a8f3005b7d5709c98fb22b6b5b513 not found: ID does not exist" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.176982 4811 scope.go:117] "RemoveContainer" containerID="9a5a1873bd8ec0e3b26b4ceef007129955303827ea35881e2a75785cb4a683bc" Jan 22 09:23:45 crc kubenswrapper[4811]: E0122 09:23:45.177409 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a5a1873bd8ec0e3b26b4ceef007129955303827ea35881e2a75785cb4a683bc\": container with ID starting with 9a5a1873bd8ec0e3b26b4ceef007129955303827ea35881e2a75785cb4a683bc not found: ID does not 
exist" containerID="9a5a1873bd8ec0e3b26b4ceef007129955303827ea35881e2a75785cb4a683bc" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.177433 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a5a1873bd8ec0e3b26b4ceef007129955303827ea35881e2a75785cb4a683bc"} err="failed to get container status \"9a5a1873bd8ec0e3b26b4ceef007129955303827ea35881e2a75785cb4a683bc\": rpc error: code = NotFound desc = could not find container \"9a5a1873bd8ec0e3b26b4ceef007129955303827ea35881e2a75785cb4a683bc\": container with ID starting with 9a5a1873bd8ec0e3b26b4ceef007129955303827ea35881e2a75785cb4a683bc not found: ID does not exist" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.177448 4811 scope.go:117] "RemoveContainer" containerID="37f6d0767104e3f03bfba57da231e1602954d2430b45c16e733ffc002f8c7ac6" Jan 22 09:23:45 crc kubenswrapper[4811]: E0122 09:23:45.177936 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37f6d0767104e3f03bfba57da231e1602954d2430b45c16e733ffc002f8c7ac6\": container with ID starting with 37f6d0767104e3f03bfba57da231e1602954d2430b45c16e733ffc002f8c7ac6 not found: ID does not exist" containerID="37f6d0767104e3f03bfba57da231e1602954d2430b45c16e733ffc002f8c7ac6" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.177956 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37f6d0767104e3f03bfba57da231e1602954d2430b45c16e733ffc002f8c7ac6"} err="failed to get container status \"37f6d0767104e3f03bfba57da231e1602954d2430b45c16e733ffc002f8c7ac6\": rpc error: code = NotFound desc = could not find container \"37f6d0767104e3f03bfba57da231e1602954d2430b45c16e733ffc002f8c7ac6\": container with ID starting with 37f6d0767104e3f03bfba57da231e1602954d2430b45c16e733ffc002f8c7ac6 not found: ID does not exist" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.177969 4811 scope.go:117] "RemoveContainer" containerID="c57326606911589e15ed05807cfbc3f21e6ce59aa1e0583b6f78b3c49698fc55" Jan 22 09:23:45 crc kubenswrapper[4811]: E0122 09:23:45.178201 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c57326606911589e15ed05807cfbc3f21e6ce59aa1e0583b6f78b3c49698fc55\": container with ID starting with c57326606911589e15ed05807cfbc3f21e6ce59aa1e0583b6f78b3c49698fc55 not found: ID does not exist" containerID="c57326606911589e15ed05807cfbc3f21e6ce59aa1e0583b6f78b3c49698fc55" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.178327 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c57326606911589e15ed05807cfbc3f21e6ce59aa1e0583b6f78b3c49698fc55"} err="failed to get container status \"c57326606911589e15ed05807cfbc3f21e6ce59aa1e0583b6f78b3c49698fc55\": rpc error: code = NotFound desc = could not find container \"c57326606911589e15ed05807cfbc3f21e6ce59aa1e0583b6f78b3c49698fc55\": container with ID starting with c57326606911589e15ed05807cfbc3f21e6ce59aa1e0583b6f78b3c49698fc55 not found: ID does not exist" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.207667 4811 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf971fef-0ef1-47b6-ba75-ca76dde2e658-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.207785 4811 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-2dc4f\" (UniqueName: \"kubernetes.io/projected/bf971fef-0ef1-47b6-ba75-ca76dde2e658-kube-api-access-2dc4f\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.207900 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf971fef-0ef1-47b6-ba75-ca76dde2e658-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.207978 4811 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bf971fef-0ef1-47b6-ba75-ca76dde2e658-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.208033 4811 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf971fef-0ef1-47b6-ba75-ca76dde2e658-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.208111 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf971fef-0ef1-47b6-ba75-ca76dde2e658-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.229844 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf971fef-0ef1-47b6-ba75-ca76dde2e658-config-data" (OuterVolumeSpecName: "config-data") pod "bf971fef-0ef1-47b6-ba75-ca76dde2e658" (UID: "bf971fef-0ef1-47b6-ba75-ca76dde2e658"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.309563 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cb1658a-c6d5-4087-9978-64888c975b22-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9cb1658a-c6d5-4087-9978-64888c975b22\") " pod="openstack/nova-api-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.309619 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cb1658a-c6d5-4087-9978-64888c975b22-public-tls-certs\") pod \"nova-api-0\" (UID: \"9cb1658a-c6d5-4087-9978-64888c975b22\") " pod="openstack/nova-api-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.309683 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9cb1658a-c6d5-4087-9978-64888c975b22-logs\") pod \"nova-api-0\" (UID: \"9cb1658a-c6d5-4087-9978-64888c975b22\") " pod="openstack/nova-api-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.309709 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cb1658a-c6d5-4087-9978-64888c975b22-config-data\") pod \"nova-api-0\" (UID: \"9cb1658a-c6d5-4087-9978-64888c975b22\") " pod="openstack/nova-api-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.309762 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cb1658a-c6d5-4087-9978-64888c975b22-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9cb1658a-c6d5-4087-9978-64888c975b22\") " pod="openstack/nova-api-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 
09:23:45.310050 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpsbq\" (UniqueName: \"kubernetes.io/projected/9cb1658a-c6d5-4087-9978-64888c975b22-kube-api-access-wpsbq\") pod \"nova-api-0\" (UID: \"9cb1658a-c6d5-4087-9978-64888c975b22\") " pod="openstack/nova-api-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.310210 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf971fef-0ef1-47b6-ba75-ca76dde2e658-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.354947 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.361211 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.377262 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.379124 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.380835 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.380857 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.385648 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.394797 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.412187 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d71dac51-18b5-44ad-b5f4-d3c46943a8dc-run-httpd\") pod \"ceilometer-0\" (UID: \"d71dac51-18b5-44ad-b5f4-d3c46943a8dc\") " pod="openstack/ceilometer-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.412235 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d71dac51-18b5-44ad-b5f4-d3c46943a8dc-log-httpd\") pod \"ceilometer-0\" (UID: \"d71dac51-18b5-44ad-b5f4-d3c46943a8dc\") " pod="openstack/ceilometer-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.412257 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d71dac51-18b5-44ad-b5f4-d3c46943a8dc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d71dac51-18b5-44ad-b5f4-d3c46943a8dc\") " pod="openstack/ceilometer-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.412294 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cb1658a-c6d5-4087-9978-64888c975b22-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9cb1658a-c6d5-4087-9978-64888c975b22\") " pod="openstack/nova-api-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.412312 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/d71dac51-18b5-44ad-b5f4-d3c46943a8dc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d71dac51-18b5-44ad-b5f4-d3c46943a8dc\") " pod="openstack/ceilometer-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.412330 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d71dac51-18b5-44ad-b5f4-d3c46943a8dc-scripts\") pod \"ceilometer-0\" (UID: \"d71dac51-18b5-44ad-b5f4-d3c46943a8dc\") " pod="openstack/ceilometer-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.412349 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpsbq\" (UniqueName: \"kubernetes.io/projected/9cb1658a-c6d5-4087-9978-64888c975b22-kube-api-access-wpsbq\") pod \"nova-api-0\" (UID: \"9cb1658a-c6d5-4087-9978-64888c975b22\") " pod="openstack/nova-api-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.412375 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdv7p\" (UniqueName: \"kubernetes.io/projected/d71dac51-18b5-44ad-b5f4-d3c46943a8dc-kube-api-access-vdv7p\") pod \"ceilometer-0\" (UID: \"d71dac51-18b5-44ad-b5f4-d3c46943a8dc\") " pod="openstack/ceilometer-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.412415 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cb1658a-c6d5-4087-9978-64888c975b22-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9cb1658a-c6d5-4087-9978-64888c975b22\") " pod="openstack/nova-api-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.412445 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cb1658a-c6d5-4087-9978-64888c975b22-public-tls-certs\") pod \"nova-api-0\" (UID: \"9cb1658a-c6d5-4087-9978-64888c975b22\") " pod="openstack/nova-api-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.412466 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d71dac51-18b5-44ad-b5f4-d3c46943a8dc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d71dac51-18b5-44ad-b5f4-d3c46943a8dc\") " pod="openstack/ceilometer-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.412495 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d71dac51-18b5-44ad-b5f4-d3c46943a8dc-config-data\") pod \"ceilometer-0\" (UID: \"d71dac51-18b5-44ad-b5f4-d3c46943a8dc\") " pod="openstack/ceilometer-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.412517 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9cb1658a-c6d5-4087-9978-64888c975b22-logs\") pod \"nova-api-0\" (UID: \"9cb1658a-c6d5-4087-9978-64888c975b22\") " pod="openstack/nova-api-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.412538 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cb1658a-c6d5-4087-9978-64888c975b22-config-data\") pod \"nova-api-0\" (UID: \"9cb1658a-c6d5-4087-9978-64888c975b22\") " pod="openstack/nova-api-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.413805 4811 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9cb1658a-c6d5-4087-9978-64888c975b22-logs\") pod \"nova-api-0\" (UID: \"9cb1658a-c6d5-4087-9978-64888c975b22\") " pod="openstack/nova-api-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.416227 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cb1658a-c6d5-4087-9978-64888c975b22-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9cb1658a-c6d5-4087-9978-64888c975b22\") " pod="openstack/nova-api-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.420046 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cb1658a-c6d5-4087-9978-64888c975b22-config-data\") pod \"nova-api-0\" (UID: \"9cb1658a-c6d5-4087-9978-64888c975b22\") " pod="openstack/nova-api-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.423027 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cb1658a-c6d5-4087-9978-64888c975b22-public-tls-certs\") pod \"nova-api-0\" (UID: \"9cb1658a-c6d5-4087-9978-64888c975b22\") " pod="openstack/nova-api-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.428124 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpsbq\" (UniqueName: \"kubernetes.io/projected/9cb1658a-c6d5-4087-9978-64888c975b22-kube-api-access-wpsbq\") pod \"nova-api-0\" (UID: \"9cb1658a-c6d5-4087-9978-64888c975b22\") " pod="openstack/nova-api-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.440478 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cb1658a-c6d5-4087-9978-64888c975b22-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9cb1658a-c6d5-4087-9978-64888c975b22\") " pod="openstack/nova-api-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.447895 4811 util.go:30] "No sandbox for pod can be found. 
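
The MountVolume.SetUp entries above close out the attach/mount sequence that began with operationExecutor.VerifyControllerAttachedVolume: kubelet's volume manager continually reconciles the set of volumes the nova-api-0 pod spec demands against what is actually mounted on the node. Below is a minimal sketch of that desired-state/actual-state reconcile pattern; every name in it is illustrative, not kubelet's real API.

    // Sketch of the reconcile pattern behind the "MountVolume started" /
    // "MountVolume.SetUp succeeded" / "Volume detached" lines in this log.
    // All types and names are invented for illustration.
    package main

    import "fmt"

    type volume struct{ name, pod string }

    func reconcile(desired, actual map[string]volume, mount func(volume) error) {
        // Mount whatever is desired but not yet in the actual state.
        for key, v := range desired {
            if _, ok := actual[key]; !ok {
                fmt.Printf("MountVolume started for volume %q pod %q\n", v.name, v.pod)
                if err := mount(v); err != nil {
                    fmt.Printf("MountVolume failed: %v\n", err)
                    continue
                }
                actual[key] = v
                fmt.Printf("MountVolume.SetUp succeeded for volume %q pod %q\n", v.name, v.pod)
            }
        }
        // Unmount whatever is mounted but no longer desired; once torn down,
        // the volume shows up as "Volume detached ... DevicePath \"\"".
        for key, v := range actual {
            if _, ok := desired[key]; !ok {
                delete(actual, key)
                fmt.Printf("Volume detached for volume %q\n", v.name)
            }
        }
    }

    func main() {
        desired := map[string]volume{"nova-api-0/config-data": {"config-data", "nova-api-0"}}
        actual := map[string]volume{"ceilometer-0/run-httpd": {"run-httpd", "ceilometer-0"}}
        reconcile(desired, actual, func(volume) error { return nil })
    }

Each reconcile pass runs periodically and after pod ADD/DELETE events, which is why mounts for the replacement ceilometer-0 and nova-api-0 pods interleave freely in the entries above.
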
Need to start a new one" pod="openstack/nova-api-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.513841 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d71dac51-18b5-44ad-b5f4-d3c46943a8dc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d71dac51-18b5-44ad-b5f4-d3c46943a8dc\") " pod="openstack/ceilometer-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.514155 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d71dac51-18b5-44ad-b5f4-d3c46943a8dc-scripts\") pod \"ceilometer-0\" (UID: \"d71dac51-18b5-44ad-b5f4-d3c46943a8dc\") " pod="openstack/ceilometer-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.514575 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdv7p\" (UniqueName: \"kubernetes.io/projected/d71dac51-18b5-44ad-b5f4-d3c46943a8dc-kube-api-access-vdv7p\") pod \"ceilometer-0\" (UID: \"d71dac51-18b5-44ad-b5f4-d3c46943a8dc\") " pod="openstack/ceilometer-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.514736 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d71dac51-18b5-44ad-b5f4-d3c46943a8dc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d71dac51-18b5-44ad-b5f4-d3c46943a8dc\") " pod="openstack/ceilometer-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.514849 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d71dac51-18b5-44ad-b5f4-d3c46943a8dc-config-data\") pod \"ceilometer-0\" (UID: \"d71dac51-18b5-44ad-b5f4-d3c46943a8dc\") " pod="openstack/ceilometer-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.514950 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d71dac51-18b5-44ad-b5f4-d3c46943a8dc-run-httpd\") pod \"ceilometer-0\" (UID: \"d71dac51-18b5-44ad-b5f4-d3c46943a8dc\") " pod="openstack/ceilometer-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.515020 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d71dac51-18b5-44ad-b5f4-d3c46943a8dc-log-httpd\") pod \"ceilometer-0\" (UID: \"d71dac51-18b5-44ad-b5f4-d3c46943a8dc\") " pod="openstack/ceilometer-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.515080 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d71dac51-18b5-44ad-b5f4-d3c46943a8dc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d71dac51-18b5-44ad-b5f4-d3c46943a8dc\") " pod="openstack/ceilometer-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.515517 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d71dac51-18b5-44ad-b5f4-d3c46943a8dc-run-httpd\") pod \"ceilometer-0\" (UID: \"d71dac51-18b5-44ad-b5f4-d3c46943a8dc\") " pod="openstack/ceilometer-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.515778 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d71dac51-18b5-44ad-b5f4-d3c46943a8dc-log-httpd\") pod \"ceilometer-0\" (UID: \"d71dac51-18b5-44ad-b5f4-d3c46943a8dc\") " 
pod="openstack/ceilometer-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.517314 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d71dac51-18b5-44ad-b5f4-d3c46943a8dc-scripts\") pod \"ceilometer-0\" (UID: \"d71dac51-18b5-44ad-b5f4-d3c46943a8dc\") " pod="openstack/ceilometer-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.519492 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d71dac51-18b5-44ad-b5f4-d3c46943a8dc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d71dac51-18b5-44ad-b5f4-d3c46943a8dc\") " pod="openstack/ceilometer-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.520029 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d71dac51-18b5-44ad-b5f4-d3c46943a8dc-config-data\") pod \"ceilometer-0\" (UID: \"d71dac51-18b5-44ad-b5f4-d3c46943a8dc\") " pod="openstack/ceilometer-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.529574 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d71dac51-18b5-44ad-b5f4-d3c46943a8dc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d71dac51-18b5-44ad-b5f4-d3c46943a8dc\") " pod="openstack/ceilometer-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.535902 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdv7p\" (UniqueName: \"kubernetes.io/projected/d71dac51-18b5-44ad-b5f4-d3c46943a8dc-kube-api-access-vdv7p\") pod \"ceilometer-0\" (UID: \"d71dac51-18b5-44ad-b5f4-d3c46943a8dc\") " pod="openstack/ceilometer-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.543913 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d71dac51-18b5-44ad-b5f4-d3c46943a8dc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d71dac51-18b5-44ad-b5f4-d3c46943a8dc\") " pod="openstack/ceilometer-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.591071 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.614010 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.622264 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.622919 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.691818 4811 util.go:30] "No sandbox for pod can be found. 
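
The SyncLoop (probe) entries above record startup-probe transitions for nova-cell1-novncproxy-0 and nova-metadata-0, and the "Probe failed" entries shortly below show the underlying HTTPS GET to 10.217.0.185:8775 timing out while awaiting headers. A rough, self-contained imitation of such a probe follows; assumptions are flagged in the comments, and kubelet's real prober (pkg/kubelet/prober) handles redirects, headers, and result caching beyond this.

    // One HTTPS probe attempt with a hard client timeout. A timeout surfaces
    // exactly as in the log: "Client.Timeout exceeded while awaiting headers".
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func probeOnce(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: timeout,
            Transport: &http.Transport{
                // Assumption: like a probe against a service with its own CA,
                // we skip verification rather than load the cluster CA bundle.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return err // recorded as probeResult="failure"
        }
        defer resp.Body.Close()
        if resp.StatusCode < 200 || resp.StatusCode >= 400 {
            return fmt.Errorf("unexpected status %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        if err := probeOnce("https://10.217.0.185:8775/", time.Second); err != nil {
            fmt.Println("Probe failed:", err)
        }
    }

A pod keeps reporting status="unhealthy" until one attempt succeeds, at which point the startup probe flips to "started" and readiness probing takes over, as the novncproxy entries above show.
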
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 09:23:45 crc kubenswrapper[4811]: I0122 09:23:45.908416 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 09:23:46 crc kubenswrapper[4811]: I0122 09:23:46.019316 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf971fef-0ef1-47b6-ba75-ca76dde2e658" path="/var/lib/kubelet/pods/bf971fef-0ef1-47b6-ba75-ca76dde2e658/volumes" Jan 22 09:23:46 crc kubenswrapper[4811]: I0122 09:23:46.027549 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e229ce79-29a8-487c-9468-3a0ae1449022" path="/var/lib/kubelet/pods/e229ce79-29a8-487c-9468-3a0ae1449022/volumes" Jan 22 09:23:46 crc kubenswrapper[4811]: I0122 09:23:46.072790 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9cb1658a-c6d5-4087-9978-64888c975b22","Type":"ContainerStarted","Data":"e24ee2b71b18796bfe336f41841cfa7036361f04a45301dd24ecb7a2f6ac50ed"} Jan 22 09:23:46 crc kubenswrapper[4811]: I0122 09:23:46.091851 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 22 09:23:46 crc kubenswrapper[4811]: I0122 09:23:46.182506 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:23:46 crc kubenswrapper[4811]: I0122 09:23:46.568497 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-8ttct"] Jan 22 09:23:46 crc kubenswrapper[4811]: I0122 09:23:46.569835 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8ttct" Jan 22 09:23:46 crc kubenswrapper[4811]: I0122 09:23:46.575572 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bz6z\" (UniqueName: \"kubernetes.io/projected/b83447d0-bf76-4337-b4c8-26bb45903b5c-kube-api-access-7bz6z\") pod \"nova-cell1-cell-mapping-8ttct\" (UID: \"b83447d0-bf76-4337-b4c8-26bb45903b5c\") " pod="openstack/nova-cell1-cell-mapping-8ttct" Jan 22 09:23:46 crc kubenswrapper[4811]: I0122 09:23:46.575604 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b83447d0-bf76-4337-b4c8-26bb45903b5c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-8ttct\" (UID: \"b83447d0-bf76-4337-b4c8-26bb45903b5c\") " pod="openstack/nova-cell1-cell-mapping-8ttct" Jan 22 09:23:46 crc kubenswrapper[4811]: I0122 09:23:46.575797 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b83447d0-bf76-4337-b4c8-26bb45903b5c-config-data\") pod \"nova-cell1-cell-mapping-8ttct\" (UID: \"b83447d0-bf76-4337-b4c8-26bb45903b5c\") " pod="openstack/nova-cell1-cell-mapping-8ttct" Jan 22 09:23:46 crc kubenswrapper[4811]: I0122 09:23:46.575820 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b83447d0-bf76-4337-b4c8-26bb45903b5c-scripts\") pod \"nova-cell1-cell-mapping-8ttct\" (UID: \"b83447d0-bf76-4337-b4c8-26bb45903b5c\") " pod="openstack/nova-cell1-cell-mapping-8ttct" Jan 22 09:23:46 crc kubenswrapper[4811]: I0122 09:23:46.576407 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 22 09:23:46 crc kubenswrapper[4811]: I0122 09:23:46.576605 4811 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 22 09:23:46 crc kubenswrapper[4811]: I0122 09:23:46.584079 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-8ttct"] Jan 22 09:23:46 crc kubenswrapper[4811]: I0122 09:23:46.634734 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.185:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 09:23:46 crc kubenswrapper[4811]: I0122 09:23:46.634753 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.185:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 09:23:46 crc kubenswrapper[4811]: I0122 09:23:46.677772 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b83447d0-bf76-4337-b4c8-26bb45903b5c-scripts\") pod \"nova-cell1-cell-mapping-8ttct\" (UID: \"b83447d0-bf76-4337-b4c8-26bb45903b5c\") " pod="openstack/nova-cell1-cell-mapping-8ttct" Jan 22 09:23:46 crc kubenswrapper[4811]: I0122 09:23:46.677926 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bz6z\" (UniqueName: \"kubernetes.io/projected/b83447d0-bf76-4337-b4c8-26bb45903b5c-kube-api-access-7bz6z\") pod \"nova-cell1-cell-mapping-8ttct\" (UID: \"b83447d0-bf76-4337-b4c8-26bb45903b5c\") " pod="openstack/nova-cell1-cell-mapping-8ttct" Jan 22 09:23:46 crc kubenswrapper[4811]: I0122 09:23:46.677951 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b83447d0-bf76-4337-b4c8-26bb45903b5c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-8ttct\" (UID: \"b83447d0-bf76-4337-b4c8-26bb45903b5c\") " pod="openstack/nova-cell1-cell-mapping-8ttct" Jan 22 09:23:46 crc kubenswrapper[4811]: I0122 09:23:46.678134 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b83447d0-bf76-4337-b4c8-26bb45903b5c-config-data\") pod \"nova-cell1-cell-mapping-8ttct\" (UID: \"b83447d0-bf76-4337-b4c8-26bb45903b5c\") " pod="openstack/nova-cell1-cell-mapping-8ttct" Jan 22 09:23:46 crc kubenswrapper[4811]: I0122 09:23:46.685961 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b83447d0-bf76-4337-b4c8-26bb45903b5c-scripts\") pod \"nova-cell1-cell-mapping-8ttct\" (UID: \"b83447d0-bf76-4337-b4c8-26bb45903b5c\") " pod="openstack/nova-cell1-cell-mapping-8ttct" Jan 22 09:23:46 crc kubenswrapper[4811]: I0122 09:23:46.686912 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b83447d0-bf76-4337-b4c8-26bb45903b5c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-8ttct\" (UID: \"b83447d0-bf76-4337-b4c8-26bb45903b5c\") " pod="openstack/nova-cell1-cell-mapping-8ttct" Jan 22 09:23:46 crc kubenswrapper[4811]: I0122 09:23:46.687153 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b83447d0-bf76-4337-b4c8-26bb45903b5c-config-data\") pod \"nova-cell1-cell-mapping-8ttct\" (UID: \"b83447d0-bf76-4337-b4c8-26bb45903b5c\") " pod="openstack/nova-cell1-cell-mapping-8ttct" Jan 22 09:23:46 crc kubenswrapper[4811]: I0122 09:23:46.703356 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bz6z\" (UniqueName: \"kubernetes.io/projected/b83447d0-bf76-4337-b4c8-26bb45903b5c-kube-api-access-7bz6z\") pod \"nova-cell1-cell-mapping-8ttct\" (UID: \"b83447d0-bf76-4337-b4c8-26bb45903b5c\") " pod="openstack/nova-cell1-cell-mapping-8ttct" Jan 22 09:23:46 crc kubenswrapper[4811]: I0122 09:23:46.906926 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8ttct" Jan 22 09:23:47 crc kubenswrapper[4811]: I0122 09:23:47.082519 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d71dac51-18b5-44ad-b5f4-d3c46943a8dc","Type":"ContainerStarted","Data":"71fea412a387028262aca8c831f527b6b91c1a02915a19ea7219ff8a6866cf8c"} Jan 22 09:23:47 crc kubenswrapper[4811]: I0122 09:23:47.082744 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d71dac51-18b5-44ad-b5f4-d3c46943a8dc","Type":"ContainerStarted","Data":"39e018f3cccc54360d617a9395607e403fa0637051a71cbb1715f6ddff695337"} Jan 22 09:23:47 crc kubenswrapper[4811]: I0122 09:23:47.084801 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9cb1658a-c6d5-4087-9978-64888c975b22","Type":"ContainerStarted","Data":"1f74b37b3819e43dd59813acec79b05402c9400ad0debe0b3f857060ff152ef5"} Jan 22 09:23:47 crc kubenswrapper[4811]: I0122 09:23:47.084826 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9cb1658a-c6d5-4087-9978-64888c975b22","Type":"ContainerStarted","Data":"3af2515e32cabf69a934355fadc076744ceea1775c110830c9e6a9b1415102de"} Jan 22 09:23:47 crc kubenswrapper[4811]: I0122 09:23:47.112427 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.112404895 podStartE2EDuration="2.112404895s" podCreationTimestamp="2026-01-22 09:23:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:23:47.11221669 +0000 UTC m=+1071.434403813" watchObservedRunningTime="2026-01-22 09:23:47.112404895 +0000 UTC m=+1071.434592018" Jan 22 09:23:47 crc kubenswrapper[4811]: I0122 09:23:47.326163 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-8ttct"] Jan 22 09:23:48 crc kubenswrapper[4811]: I0122 09:23:48.097923 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d71dac51-18b5-44ad-b5f4-d3c46943a8dc","Type":"ContainerStarted","Data":"9ce4358d058a0d4623771855bcce81da227b8f8455d98febbb5126ab691d54e7"} Jan 22 09:23:48 crc kubenswrapper[4811]: I0122 09:23:48.105743 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8ttct" event={"ID":"b83447d0-bf76-4337-b4c8-26bb45903b5c","Type":"ContainerStarted","Data":"2430565ced8c6cab1a1add153fcaef06d155dda63af82c956a753cc878f67596"} Jan 22 09:23:48 crc kubenswrapper[4811]: I0122 09:23:48.105865 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8ttct" 
event={"ID":"b83447d0-bf76-4337-b4c8-26bb45903b5c","Type":"ContainerStarted","Data":"cd53b4fb104653c0b98e64331085dab39465cd21afca6ca118667f9ed97beaf8"} Jan 22 09:23:48 crc kubenswrapper[4811]: I0122 09:23:48.423777 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78c596d7cf-qxrcp" Jan 22 09:23:48 crc kubenswrapper[4811]: I0122 09:23:48.463297 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-8ttct" podStartSLOduration=2.463254074 podStartE2EDuration="2.463254074s" podCreationTimestamp="2026-01-22 09:23:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:23:48.12713578 +0000 UTC m=+1072.449322903" watchObservedRunningTime="2026-01-22 09:23:48.463254074 +0000 UTC m=+1072.785441197" Jan 22 09:23:48 crc kubenswrapper[4811]: I0122 09:23:48.523977 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59fd54bbff-sw5lk"] Jan 22 09:23:48 crc kubenswrapper[4811]: I0122 09:23:48.524247 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59fd54bbff-sw5lk" podUID="ebb2968f-70cb-4b6b-8fab-9647ddaa5e93" containerName="dnsmasq-dns" containerID="cri-o://aa8a3d2863c61ba38f90c8d05e036afa5a37411875d3eb114e6b0a3565d3b68a" gracePeriod=10 Jan 22 09:23:49 crc kubenswrapper[4811]: I0122 09:23:49.005822 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59fd54bbff-sw5lk" Jan 22 09:23:49 crc kubenswrapper[4811]: I0122 09:23:49.041041 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebb2968f-70cb-4b6b-8fab-9647ddaa5e93-ovsdbserver-nb\") pod \"ebb2968f-70cb-4b6b-8fab-9647ddaa5e93\" (UID: \"ebb2968f-70cb-4b6b-8fab-9647ddaa5e93\") " Jan 22 09:23:49 crc kubenswrapper[4811]: I0122 09:23:49.041131 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebb2968f-70cb-4b6b-8fab-9647ddaa5e93-dns-svc\") pod \"ebb2968f-70cb-4b6b-8fab-9647ddaa5e93\" (UID: \"ebb2968f-70cb-4b6b-8fab-9647ddaa5e93\") " Jan 22 09:23:49 crc kubenswrapper[4811]: I0122 09:23:49.041257 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l27gj\" (UniqueName: \"kubernetes.io/projected/ebb2968f-70cb-4b6b-8fab-9647ddaa5e93-kube-api-access-l27gj\") pod \"ebb2968f-70cb-4b6b-8fab-9647ddaa5e93\" (UID: \"ebb2968f-70cb-4b6b-8fab-9647ddaa5e93\") " Jan 22 09:23:49 crc kubenswrapper[4811]: I0122 09:23:49.041293 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebb2968f-70cb-4b6b-8fab-9647ddaa5e93-ovsdbserver-sb\") pod \"ebb2968f-70cb-4b6b-8fab-9647ddaa5e93\" (UID: \"ebb2968f-70cb-4b6b-8fab-9647ddaa5e93\") " Jan 22 09:23:49 crc kubenswrapper[4811]: I0122 09:23:49.041319 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebb2968f-70cb-4b6b-8fab-9647ddaa5e93-config\") pod \"ebb2968f-70cb-4b6b-8fab-9647ddaa5e93\" (UID: \"ebb2968f-70cb-4b6b-8fab-9647ddaa5e93\") " Jan 22 09:23:49 crc kubenswrapper[4811]: I0122 09:23:49.053738 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
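
The two "Observed pod startup duration" entries above report podStartSLOduration equal to podStartE2EDuration (2.11s for nova-api-0, 2.46s for the cell-mapping job) because both pull timestamps are the zero time. Inferred from the values in this log rather than from kubelet source: the SLO duration is the end-to-end duration (watchObservedRunningTime minus podCreationTimestamp) minus the image-pull window, so the two only diverge when an image actually had to be pulled. The ceilometer-0 entry at 09:23:50 further down checks out against that relation:

    // Recomputing ceilometer-0's startup numbers from its log entry below.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        creation := time.Date(2026, time.January, 22, 9, 23, 45, 0, time.UTC)
        observed := time.Date(2026, time.January, 22, 9, 23, 50, 169214045, time.UTC)
        firstPull := time.Date(2026, time.January, 22, 9, 23, 46, 203955715, time.UTC)
        lastPull := time.Date(2026, time.January, 22, 9, 23, 49, 660267295, time.UTC)

        e2e := observed.Sub(creation)        // ~5.169214045s (podStartE2EDuration)
        slo := e2e - lastPull.Sub(firstPull) // ~1.71290246s (podStartSLOduration)
        fmt.Println(e2e, slo)
    }

The last-digit difference from the logged 1.712902455s comes from subtracting the wall-clock timestamps printed above instead of the monotonic m=+... readings kubelet actually uses.
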
"kubernetes.io/projected/ebb2968f-70cb-4b6b-8fab-9647ddaa5e93-kube-api-access-l27gj" (OuterVolumeSpecName: "kube-api-access-l27gj") pod "ebb2968f-70cb-4b6b-8fab-9647ddaa5e93" (UID: "ebb2968f-70cb-4b6b-8fab-9647ddaa5e93"). InnerVolumeSpecName "kube-api-access-l27gj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:23:49 crc kubenswrapper[4811]: I0122 09:23:49.116758 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebb2968f-70cb-4b6b-8fab-9647ddaa5e93-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ebb2968f-70cb-4b6b-8fab-9647ddaa5e93" (UID: "ebb2968f-70cb-4b6b-8fab-9647ddaa5e93"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:23:49 crc kubenswrapper[4811]: I0122 09:23:49.121214 4811 generic.go:334] "Generic (PLEG): container finished" podID="ebb2968f-70cb-4b6b-8fab-9647ddaa5e93" containerID="aa8a3d2863c61ba38f90c8d05e036afa5a37411875d3eb114e6b0a3565d3b68a" exitCode=0 Jan 22 09:23:49 crc kubenswrapper[4811]: I0122 09:23:49.121300 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59fd54bbff-sw5lk" event={"ID":"ebb2968f-70cb-4b6b-8fab-9647ddaa5e93","Type":"ContainerDied","Data":"aa8a3d2863c61ba38f90c8d05e036afa5a37411875d3eb114e6b0a3565d3b68a"} Jan 22 09:23:49 crc kubenswrapper[4811]: I0122 09:23:49.121335 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59fd54bbff-sw5lk" event={"ID":"ebb2968f-70cb-4b6b-8fab-9647ddaa5e93","Type":"ContainerDied","Data":"d184f635d21c075129fc76fcf0060e89d5955345ab51fda239ed7329a7648c3b"} Jan 22 09:23:49 crc kubenswrapper[4811]: I0122 09:23:49.121354 4811 scope.go:117] "RemoveContainer" containerID="aa8a3d2863c61ba38f90c8d05e036afa5a37411875d3eb114e6b0a3565d3b68a" Jan 22 09:23:49 crc kubenswrapper[4811]: I0122 09:23:49.121377 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebb2968f-70cb-4b6b-8fab-9647ddaa5e93-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ebb2968f-70cb-4b6b-8fab-9647ddaa5e93" (UID: "ebb2968f-70cb-4b6b-8fab-9647ddaa5e93"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:23:49 crc kubenswrapper[4811]: I0122 09:23:49.121511 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59fd54bbff-sw5lk" Jan 22 09:23:49 crc kubenswrapper[4811]: I0122 09:23:49.131008 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d71dac51-18b5-44ad-b5f4-d3c46943a8dc","Type":"ContainerStarted","Data":"4eacedf749444e027dc4d754bb9a46b1cc9e4d2c6604cfc8ac52aa4961d23721"} Jan 22 09:23:49 crc kubenswrapper[4811]: I0122 09:23:49.144775 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l27gj\" (UniqueName: \"kubernetes.io/projected/ebb2968f-70cb-4b6b-8fab-9647ddaa5e93-kube-api-access-l27gj\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:49 crc kubenswrapper[4811]: I0122 09:23:49.144911 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebb2968f-70cb-4b6b-8fab-9647ddaa5e93-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:49 crc kubenswrapper[4811]: I0122 09:23:49.144981 4811 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebb2968f-70cb-4b6b-8fab-9647ddaa5e93-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:49 crc kubenswrapper[4811]: I0122 09:23:49.158892 4811 scope.go:117] "RemoveContainer" containerID="66073c6bb12fed90c567e6cbabda3721f5e0393d0089870514a396c28741676f" Jan 22 09:23:49 crc kubenswrapper[4811]: I0122 09:23:49.160147 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebb2968f-70cb-4b6b-8fab-9647ddaa5e93-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ebb2968f-70cb-4b6b-8fab-9647ddaa5e93" (UID: "ebb2968f-70cb-4b6b-8fab-9647ddaa5e93"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:23:49 crc kubenswrapper[4811]: I0122 09:23:49.165012 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebb2968f-70cb-4b6b-8fab-9647ddaa5e93-config" (OuterVolumeSpecName: "config") pod "ebb2968f-70cb-4b6b-8fab-9647ddaa5e93" (UID: "ebb2968f-70cb-4b6b-8fab-9647ddaa5e93"). InnerVolumeSpecName "config". 
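
Each UnmountVolume.TearDown entry above names the volume twice: OuterVolumeSpecName is what the pod spec calls it ("config", "dns-svc", "ovsdbserver-sb"), while InnerVolumeSpecName is the name the plugin layer tears down, and a "Volume detached ... DevicePath \"\"" record follows once the reconciler drops the volume from its state. A toy rendering of that bookkeeping, with invented types, just to fix the vocabulary:

    // Invented types; only the log vocabulary (outer vs. inner spec name,
    // plugin name, detach record) mirrors the entries above.
    package main

    import "fmt"

    type mountedVolume struct {
        outerName string // the name in the pod spec, e.g. "config"
        innerName string // the plugin-level name, identical here
        plugin    string // "kubernetes.io/configmap", "kubernetes.io/projected", ...
    }

    func tearDownAll(podUID string, mounts []mountedVolume) {
        for _, m := range mounts {
            fmt.Printf("UnmountVolume.TearDown succeeded for volume (OuterVolumeSpecName: %q) pod %q. InnerVolumeSpecName %q. PluginName %q\n",
                m.outerName, podUID, m.innerName, m.plugin)
            fmt.Printf("Volume detached for volume %q on node \"crc\" DevicePath \"\"\n", m.outerName)
        }
    }

    func main() {
        tearDownAll("ebb2968f-70cb-4b6b-8fab-9647ddaa5e93", []mountedVolume{
            {"config", "config", "kubernetes.io/configmap"},
            {"dns-svc", "dns-svc", "kubernetes.io/configmap"},
        })
    }
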
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:23:49 crc kubenswrapper[4811]: I0122 09:23:49.189116 4811 scope.go:117] "RemoveContainer" containerID="aa8a3d2863c61ba38f90c8d05e036afa5a37411875d3eb114e6b0a3565d3b68a" Jan 22 09:23:49 crc kubenswrapper[4811]: E0122 09:23:49.192434 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa8a3d2863c61ba38f90c8d05e036afa5a37411875d3eb114e6b0a3565d3b68a\": container with ID starting with aa8a3d2863c61ba38f90c8d05e036afa5a37411875d3eb114e6b0a3565d3b68a not found: ID does not exist" containerID="aa8a3d2863c61ba38f90c8d05e036afa5a37411875d3eb114e6b0a3565d3b68a" Jan 22 09:23:49 crc kubenswrapper[4811]: I0122 09:23:49.192539 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa8a3d2863c61ba38f90c8d05e036afa5a37411875d3eb114e6b0a3565d3b68a"} err="failed to get container status \"aa8a3d2863c61ba38f90c8d05e036afa5a37411875d3eb114e6b0a3565d3b68a\": rpc error: code = NotFound desc = could not find container \"aa8a3d2863c61ba38f90c8d05e036afa5a37411875d3eb114e6b0a3565d3b68a\": container with ID starting with aa8a3d2863c61ba38f90c8d05e036afa5a37411875d3eb114e6b0a3565d3b68a not found: ID does not exist" Jan 22 09:23:49 crc kubenswrapper[4811]: I0122 09:23:49.192634 4811 scope.go:117] "RemoveContainer" containerID="66073c6bb12fed90c567e6cbabda3721f5e0393d0089870514a396c28741676f" Jan 22 09:23:49 crc kubenswrapper[4811]: E0122 09:23:49.193010 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66073c6bb12fed90c567e6cbabda3721f5e0393d0089870514a396c28741676f\": container with ID starting with 66073c6bb12fed90c567e6cbabda3721f5e0393d0089870514a396c28741676f not found: ID does not exist" containerID="66073c6bb12fed90c567e6cbabda3721f5e0393d0089870514a396c28741676f" Jan 22 09:23:49 crc kubenswrapper[4811]: I0122 09:23:49.193063 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66073c6bb12fed90c567e6cbabda3721f5e0393d0089870514a396c28741676f"} err="failed to get container status \"66073c6bb12fed90c567e6cbabda3721f5e0393d0089870514a396c28741676f\": rpc error: code = NotFound desc = could not find container \"66073c6bb12fed90c567e6cbabda3721f5e0393d0089870514a396c28741676f\": container with ID starting with 66073c6bb12fed90c567e6cbabda3721f5e0393d0089870514a396c28741676f not found: ID does not exist" Jan 22 09:23:49 crc kubenswrapper[4811]: I0122 09:23:49.247097 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebb2968f-70cb-4b6b-8fab-9647ddaa5e93-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:49 crc kubenswrapper[4811]: I0122 09:23:49.247129 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebb2968f-70cb-4b6b-8fab-9647ddaa5e93-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:49 crc kubenswrapper[4811]: I0122 09:23:49.446583 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59fd54bbff-sw5lk"] Jan 22 09:23:49 crc kubenswrapper[4811]: I0122 09:23:49.452729 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59fd54bbff-sw5lk"] Jan 22 09:23:50 crc kubenswrapper[4811]: I0122 09:23:50.002144 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebb2968f-70cb-4b6b-8fab-9647ddaa5e93" 
path="/var/lib/kubelet/pods/ebb2968f-70cb-4b6b-8fab-9647ddaa5e93/volumes" Jan 22 09:23:50 crc kubenswrapper[4811]: I0122 09:23:50.142130 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d71dac51-18b5-44ad-b5f4-d3c46943a8dc","Type":"ContainerStarted","Data":"b7dec49aa88a4935c5cfa19723dd0eb4e3a95dcb633cfdaccfb56e8ae3eef017"} Jan 22 09:23:50 crc kubenswrapper[4811]: I0122 09:23:50.143460 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 22 09:23:50 crc kubenswrapper[4811]: I0122 09:23:50.169237 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.712902455 podStartE2EDuration="5.169214045s" podCreationTimestamp="2026-01-22 09:23:45 +0000 UTC" firstStartedPulling="2026-01-22 09:23:46.203955715 +0000 UTC m=+1070.526142828" lastFinishedPulling="2026-01-22 09:23:49.660267295 +0000 UTC m=+1073.982454418" observedRunningTime="2026-01-22 09:23:50.162962044 +0000 UTC m=+1074.485149168" watchObservedRunningTime="2026-01-22 09:23:50.169214045 +0000 UTC m=+1074.491401168" Jan 22 09:23:52 crc kubenswrapper[4811]: I0122 09:23:52.165941 4811 generic.go:334] "Generic (PLEG): container finished" podID="b83447d0-bf76-4337-b4c8-26bb45903b5c" containerID="2430565ced8c6cab1a1add153fcaef06d155dda63af82c956a753cc878f67596" exitCode=0 Jan 22 09:23:52 crc kubenswrapper[4811]: I0122 09:23:52.165982 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8ttct" event={"ID":"b83447d0-bf76-4337-b4c8-26bb45903b5c","Type":"ContainerDied","Data":"2430565ced8c6cab1a1add153fcaef06d155dda63af82c956a753cc878f67596"} Jan 22 09:23:53 crc kubenswrapper[4811]: I0122 09:23:53.441526 4811 util.go:48] "No ready sandbox for pod can be found. 
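
The "Generic (PLEG): container finished" entry above carries the container's raw exit code: 0, since the nova-manage cell-mapping job ran to completion. The exitCode=143 entries at 09:23:55 below follow the usual 128+signal convention instead, decoding to SIGTERM (15) from the pod deletions then in flight. A quick decoder:

    // Decode the exit codes PLEG reports, using the 128+signal convention
    // for processes killed by a signal.
    package main

    import (
        "fmt"
        "syscall"
    )

    func describeExit(code int) string {
        switch {
        case code == 0:
            return "exited cleanly"
        case code > 128:
            sig := syscall.Signal(code - 128)
            return fmt.Sprintf("killed by signal %d (%s)", code-128, sig)
        default:
            return fmt.Sprintf("exited with error code %d", code)
        }
    }

    func main() {
        fmt.Println(0, "->", describeExit(0))     // the nova-manage job above
        fmt.Println(143, "->", describeExit(143)) // nova-api-log / nova-metadata-log below
    }
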
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8ttct" Jan 22 09:23:53 crc kubenswrapper[4811]: I0122 09:23:53.524098 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b83447d0-bf76-4337-b4c8-26bb45903b5c-combined-ca-bundle\") pod \"b83447d0-bf76-4337-b4c8-26bb45903b5c\" (UID: \"b83447d0-bf76-4337-b4c8-26bb45903b5c\") " Jan 22 09:23:53 crc kubenswrapper[4811]: I0122 09:23:53.524825 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b83447d0-bf76-4337-b4c8-26bb45903b5c-config-data\") pod \"b83447d0-bf76-4337-b4c8-26bb45903b5c\" (UID: \"b83447d0-bf76-4337-b4c8-26bb45903b5c\") " Jan 22 09:23:53 crc kubenswrapper[4811]: I0122 09:23:53.525153 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bz6z\" (UniqueName: \"kubernetes.io/projected/b83447d0-bf76-4337-b4c8-26bb45903b5c-kube-api-access-7bz6z\") pod \"b83447d0-bf76-4337-b4c8-26bb45903b5c\" (UID: \"b83447d0-bf76-4337-b4c8-26bb45903b5c\") " Jan 22 09:23:53 crc kubenswrapper[4811]: I0122 09:23:53.525266 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b83447d0-bf76-4337-b4c8-26bb45903b5c-scripts\") pod \"b83447d0-bf76-4337-b4c8-26bb45903b5c\" (UID: \"b83447d0-bf76-4337-b4c8-26bb45903b5c\") " Jan 22 09:23:53 crc kubenswrapper[4811]: I0122 09:23:53.532386 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b83447d0-bf76-4337-b4c8-26bb45903b5c-kube-api-access-7bz6z" (OuterVolumeSpecName: "kube-api-access-7bz6z") pod "b83447d0-bf76-4337-b4c8-26bb45903b5c" (UID: "b83447d0-bf76-4337-b4c8-26bb45903b5c"). InnerVolumeSpecName "kube-api-access-7bz6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:23:53 crc kubenswrapper[4811]: I0122 09:23:53.539722 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b83447d0-bf76-4337-b4c8-26bb45903b5c-scripts" (OuterVolumeSpecName: "scripts") pod "b83447d0-bf76-4337-b4c8-26bb45903b5c" (UID: "b83447d0-bf76-4337-b4c8-26bb45903b5c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:23:53 crc kubenswrapper[4811]: I0122 09:23:53.549050 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b83447d0-bf76-4337-b4c8-26bb45903b5c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b83447d0-bf76-4337-b4c8-26bb45903b5c" (UID: "b83447d0-bf76-4337-b4c8-26bb45903b5c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:23:53 crc kubenswrapper[4811]: I0122 09:23:53.553763 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b83447d0-bf76-4337-b4c8-26bb45903b5c-config-data" (OuterVolumeSpecName: "config-data") pod "b83447d0-bf76-4337-b4c8-26bb45903b5c" (UID: "b83447d0-bf76-4337-b4c8-26bb45903b5c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:23:53 crc kubenswrapper[4811]: I0122 09:23:53.628157 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b83447d0-bf76-4337-b4c8-26bb45903b5c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:53 crc kubenswrapper[4811]: I0122 09:23:53.628200 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b83447d0-bf76-4337-b4c8-26bb45903b5c-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:53 crc kubenswrapper[4811]: I0122 09:23:53.628210 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bz6z\" (UniqueName: \"kubernetes.io/projected/b83447d0-bf76-4337-b4c8-26bb45903b5c-kube-api-access-7bz6z\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:53 crc kubenswrapper[4811]: I0122 09:23:53.628221 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b83447d0-bf76-4337-b4c8-26bb45903b5c-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:54 crc kubenswrapper[4811]: I0122 09:23:54.182047 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8ttct" event={"ID":"b83447d0-bf76-4337-b4c8-26bb45903b5c","Type":"ContainerDied","Data":"cd53b4fb104653c0b98e64331085dab39465cd21afca6ca118667f9ed97beaf8"} Jan 22 09:23:54 crc kubenswrapper[4811]: I0122 09:23:54.182106 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd53b4fb104653c0b98e64331085dab39465cd21afca6ca118667f9ed97beaf8" Jan 22 09:23:54 crc kubenswrapper[4811]: I0122 09:23:54.182104 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8ttct" Jan 22 09:23:54 crc kubenswrapper[4811]: I0122 09:23:54.353062 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 22 09:23:54 crc kubenswrapper[4811]: I0122 09:23:54.353455 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9cb1658a-c6d5-4087-9978-64888c975b22" containerName="nova-api-api" containerID="cri-o://1f74b37b3819e43dd59813acec79b05402c9400ad0debe0b3f857060ff152ef5" gracePeriod=30 Jan 22 09:23:54 crc kubenswrapper[4811]: I0122 09:23:54.353363 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9cb1658a-c6d5-4087-9978-64888c975b22" containerName="nova-api-log" containerID="cri-o://3af2515e32cabf69a934355fadc076744ceea1775c110830c9e6a9b1415102de" gracePeriod=30 Jan 22 09:23:54 crc kubenswrapper[4811]: I0122 09:23:54.363955 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 09:23:54 crc kubenswrapper[4811]: I0122 09:23:54.364430 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="738ef216-ebaa-4000-8021-1c3b675abb3a" containerName="nova-scheduler-scheduler" containerID="cri-o://b41df3f94807c81186b37a6668bb739f700725818de37a3652af3b6080e12c13" gracePeriod=30 Jan 22 09:23:54 crc kubenswrapper[4811]: I0122 09:23:54.390215 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 09:23:54 crc kubenswrapper[4811]: I0122 09:23:54.390596 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90" 
containerName="nova-metadata-log" containerID="cri-o://638643c7bf0ec3aeec80d2b38cd6ae1c6cf1f9ef9681af2086a1574c73c87a09" gracePeriod=30 Jan 22 09:23:54 crc kubenswrapper[4811]: I0122 09:23:54.390719 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90" containerName="nova-metadata-metadata" containerID="cri-o://9aeee3bf886228f178ff79e6abb6efd4b28625304470f1cd6dfe411000da41dd" gracePeriod=30 Jan 22 09:23:54 crc kubenswrapper[4811]: I0122 09:23:54.877464 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 09:23:54 crc kubenswrapper[4811]: I0122 09:23:54.962012 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cb1658a-c6d5-4087-9978-64888c975b22-combined-ca-bundle\") pod \"9cb1658a-c6d5-4087-9978-64888c975b22\" (UID: \"9cb1658a-c6d5-4087-9978-64888c975b22\") " Jan 22 09:23:54 crc kubenswrapper[4811]: I0122 09:23:54.962198 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cb1658a-c6d5-4087-9978-64888c975b22-internal-tls-certs\") pod \"9cb1658a-c6d5-4087-9978-64888c975b22\" (UID: \"9cb1658a-c6d5-4087-9978-64888c975b22\") " Jan 22 09:23:54 crc kubenswrapper[4811]: I0122 09:23:54.962220 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9cb1658a-c6d5-4087-9978-64888c975b22-logs\") pod \"9cb1658a-c6d5-4087-9978-64888c975b22\" (UID: \"9cb1658a-c6d5-4087-9978-64888c975b22\") " Jan 22 09:23:54 crc kubenswrapper[4811]: I0122 09:23:54.962282 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpsbq\" (UniqueName: \"kubernetes.io/projected/9cb1658a-c6d5-4087-9978-64888c975b22-kube-api-access-wpsbq\") pod \"9cb1658a-c6d5-4087-9978-64888c975b22\" (UID: \"9cb1658a-c6d5-4087-9978-64888c975b22\") " Jan 22 09:23:54 crc kubenswrapper[4811]: I0122 09:23:54.962302 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cb1658a-c6d5-4087-9978-64888c975b22-public-tls-certs\") pod \"9cb1658a-c6d5-4087-9978-64888c975b22\" (UID: \"9cb1658a-c6d5-4087-9978-64888c975b22\") " Jan 22 09:23:54 crc kubenswrapper[4811]: I0122 09:23:54.962455 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cb1658a-c6d5-4087-9978-64888c975b22-config-data\") pod \"9cb1658a-c6d5-4087-9978-64888c975b22\" (UID: \"9cb1658a-c6d5-4087-9978-64888c975b22\") " Jan 22 09:23:54 crc kubenswrapper[4811]: I0122 09:23:54.963283 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cb1658a-c6d5-4087-9978-64888c975b22-logs" (OuterVolumeSpecName: "logs") pod "9cb1658a-c6d5-4087-9978-64888c975b22" (UID: "9cb1658a-c6d5-4087-9978-64888c975b22"). InnerVolumeSpecName "logs". 
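
The "Killing container with a grace period" entries above (gracePeriod=30 for the nova-api, nova-scheduler, and nova-metadata containers; 10 for dnsmasq-dns earlier at 09:23:48) stand for a SIGTERM followed by a SIGKILL if the container outlives the deadline; in this log CRI-O carries that out on kubelet's behalf. A self-contained sketch of the same stop sequence against an ordinary child process:

    // SIGTERM, wait up to the grace period, then SIGKILL. Containers that
    // exit on the SIGTERM report code 128+15=143, as seen at 09:23:55 below.
    package main

    import (
        "fmt"
        "os"
        "syscall"
        "time"
    )

    func stopProcess(proc *os.Process, grace time.Duration) {
        done := make(chan struct{})
        go func() { proc.Wait(); close(done) }()
        proc.Signal(syscall.SIGTERM)
        select {
        case <-done:
            fmt.Println("exited within the grace period")
        case <-time.After(grace):
            proc.Kill() // SIGKILL once the deadline lapses
            fmt.Println("grace period exceeded, killed")
        }
    }

    func main() {
        proc, err := os.StartProcess("/bin/sleep", []string{"sleep", "60"}, &os.ProcAttr{})
        if err != nil {
            panic(err)
        }
        stopProcess(proc, 10*time.Second)
    }
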
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:23:54 crc kubenswrapper[4811]: I0122 09:23:54.966717 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cb1658a-c6d5-4087-9978-64888c975b22-kube-api-access-wpsbq" (OuterVolumeSpecName: "kube-api-access-wpsbq") pod "9cb1658a-c6d5-4087-9978-64888c975b22" (UID: "9cb1658a-c6d5-4087-9978-64888c975b22"). InnerVolumeSpecName "kube-api-access-wpsbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:23:54 crc kubenswrapper[4811]: I0122 09:23:54.992793 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cb1658a-c6d5-4087-9978-64888c975b22-config-data" (OuterVolumeSpecName: "config-data") pod "9cb1658a-c6d5-4087-9978-64888c975b22" (UID: "9cb1658a-c6d5-4087-9978-64888c975b22"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:23:54 crc kubenswrapper[4811]: I0122 09:23:54.992987 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cb1658a-c6d5-4087-9978-64888c975b22-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9cb1658a-c6d5-4087-9978-64888c975b22" (UID: "9cb1658a-c6d5-4087-9978-64888c975b22"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.003821 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cb1658a-c6d5-4087-9978-64888c975b22-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9cb1658a-c6d5-4087-9978-64888c975b22" (UID: "9cb1658a-c6d5-4087-9978-64888c975b22"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.011604 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cb1658a-c6d5-4087-9978-64888c975b22-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9cb1658a-c6d5-4087-9978-64888c975b22" (UID: "9cb1658a-c6d5-4087-9978-64888c975b22"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.064999 4811 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cb1658a-c6d5-4087-9978-64888c975b22-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.065035 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9cb1658a-c6d5-4087-9978-64888c975b22-logs\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.065047 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpsbq\" (UniqueName: \"kubernetes.io/projected/9cb1658a-c6d5-4087-9978-64888c975b22-kube-api-access-wpsbq\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.065057 4811 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cb1658a-c6d5-4087-9978-64888c975b22-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.065068 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cb1658a-c6d5-4087-9978-64888c975b22-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.065078 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cb1658a-c6d5-4087-9978-64888c975b22-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.197299 4811 generic.go:334] "Generic (PLEG): container finished" podID="738ef216-ebaa-4000-8021-1c3b675abb3a" containerID="b41df3f94807c81186b37a6668bb739f700725818de37a3652af3b6080e12c13" exitCode=0 Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.197652 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"738ef216-ebaa-4000-8021-1c3b675abb3a","Type":"ContainerDied","Data":"b41df3f94807c81186b37a6668bb739f700725818de37a3652af3b6080e12c13"} Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.199568 4811 generic.go:334] "Generic (PLEG): container finished" podID="8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90" containerID="638643c7bf0ec3aeec80d2b38cd6ae1c6cf1f9ef9681af2086a1574c73c87a09" exitCode=143 Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.199670 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90","Type":"ContainerDied","Data":"638643c7bf0ec3aeec80d2b38cd6ae1c6cf1f9ef9681af2086a1574c73c87a09"} Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.201403 4811 generic.go:334] "Generic (PLEG): container finished" podID="9cb1658a-c6d5-4087-9978-64888c975b22" containerID="1f74b37b3819e43dd59813acec79b05402c9400ad0debe0b3f857060ff152ef5" exitCode=0 Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.201428 4811 generic.go:334] "Generic (PLEG): container finished" podID="9cb1658a-c6d5-4087-9978-64888c975b22" containerID="3af2515e32cabf69a934355fadc076744ceea1775c110830c9e6a9b1415102de" exitCode=143 Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.201446 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"9cb1658a-c6d5-4087-9978-64888c975b22","Type":"ContainerDied","Data":"1f74b37b3819e43dd59813acec79b05402c9400ad0debe0b3f857060ff152ef5"} Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.201464 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9cb1658a-c6d5-4087-9978-64888c975b22","Type":"ContainerDied","Data":"3af2515e32cabf69a934355fadc076744ceea1775c110830c9e6a9b1415102de"} Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.201475 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9cb1658a-c6d5-4087-9978-64888c975b22","Type":"ContainerDied","Data":"e24ee2b71b18796bfe336f41841cfa7036361f04a45301dd24ecb7a2f6ac50ed"} Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.201492 4811 scope.go:117] "RemoveContainer" containerID="1f74b37b3819e43dd59813acec79b05402c9400ad0debe0b3f857060ff152ef5" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.201651 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.233280 4811 scope.go:117] "RemoveContainer" containerID="3af2515e32cabf69a934355fadc076744ceea1775c110830c9e6a9b1415102de" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.234912 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.242151 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.267635 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 22 09:23:55 crc kubenswrapper[4811]: E0122 09:23:55.267994 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b83447d0-bf76-4337-b4c8-26bb45903b5c" containerName="nova-manage" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.268013 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="b83447d0-bf76-4337-b4c8-26bb45903b5c" containerName="nova-manage" Jan 22 09:23:55 crc kubenswrapper[4811]: E0122 09:23:55.268033 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebb2968f-70cb-4b6b-8fab-9647ddaa5e93" containerName="init" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.268039 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebb2968f-70cb-4b6b-8fab-9647ddaa5e93" containerName="init" Jan 22 09:23:55 crc kubenswrapper[4811]: E0122 09:23:55.268054 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebb2968f-70cb-4b6b-8fab-9647ddaa5e93" containerName="dnsmasq-dns" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.268061 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebb2968f-70cb-4b6b-8fab-9647ddaa5e93" containerName="dnsmasq-dns" Jan 22 09:23:55 crc kubenswrapper[4811]: E0122 09:23:55.268068 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cb1658a-c6d5-4087-9978-64888c975b22" containerName="nova-api-log" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.268074 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cb1658a-c6d5-4087-9978-64888c975b22" containerName="nova-api-log" Jan 22 09:23:55 crc kubenswrapper[4811]: E0122 09:23:55.268083 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cb1658a-c6d5-4087-9978-64888c975b22" containerName="nova-api-api" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.268089 4811 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="9cb1658a-c6d5-4087-9978-64888c975b22" containerName="nova-api-api" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.268288 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebb2968f-70cb-4b6b-8fab-9647ddaa5e93" containerName="dnsmasq-dns" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.268301 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="b83447d0-bf76-4337-b4c8-26bb45903b5c" containerName="nova-manage" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.268313 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cb1658a-c6d5-4087-9978-64888c975b22" containerName="nova-api-api" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.268323 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cb1658a-c6d5-4087-9978-64888c975b22" containerName="nova-api-log" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.269147 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.273900 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.274058 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.274173 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.286326 4811 scope.go:117] "RemoveContainer" containerID="1f74b37b3819e43dd59813acec79b05402c9400ad0debe0b3f857060ff152ef5" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.289604 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 09:23:55 crc kubenswrapper[4811]: E0122 09:23:55.295877 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f74b37b3819e43dd59813acec79b05402c9400ad0debe0b3f857060ff152ef5\": container with ID starting with 1f74b37b3819e43dd59813acec79b05402c9400ad0debe0b3f857060ff152ef5 not found: ID does not exist" containerID="1f74b37b3819e43dd59813acec79b05402c9400ad0debe0b3f857060ff152ef5" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.295918 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f74b37b3819e43dd59813acec79b05402c9400ad0debe0b3f857060ff152ef5"} err="failed to get container status \"1f74b37b3819e43dd59813acec79b05402c9400ad0debe0b3f857060ff152ef5\": rpc error: code = NotFound desc = could not find container \"1f74b37b3819e43dd59813acec79b05402c9400ad0debe0b3f857060ff152ef5\": container with ID starting with 1f74b37b3819e43dd59813acec79b05402c9400ad0debe0b3f857060ff152ef5 not found: ID does not exist" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.295943 4811 scope.go:117] "RemoveContainer" containerID="3af2515e32cabf69a934355fadc076744ceea1775c110830c9e6a9b1415102de" Jan 22 09:23:55 crc kubenswrapper[4811]: E0122 09:23:55.296252 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3af2515e32cabf69a934355fadc076744ceea1775c110830c9e6a9b1415102de\": container with ID starting with 3af2515e32cabf69a934355fadc076744ceea1775c110830c9e6a9b1415102de not found: ID does not exist" 
containerID="3af2515e32cabf69a934355fadc076744ceea1775c110830c9e6a9b1415102de" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.296279 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3af2515e32cabf69a934355fadc076744ceea1775c110830c9e6a9b1415102de"} err="failed to get container status \"3af2515e32cabf69a934355fadc076744ceea1775c110830c9e6a9b1415102de\": rpc error: code = NotFound desc = could not find container \"3af2515e32cabf69a934355fadc076744ceea1775c110830c9e6a9b1415102de\": container with ID starting with 3af2515e32cabf69a934355fadc076744ceea1775c110830c9e6a9b1415102de not found: ID does not exist" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.296299 4811 scope.go:117] "RemoveContainer" containerID="1f74b37b3819e43dd59813acec79b05402c9400ad0debe0b3f857060ff152ef5" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.296495 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f74b37b3819e43dd59813acec79b05402c9400ad0debe0b3f857060ff152ef5"} err="failed to get container status \"1f74b37b3819e43dd59813acec79b05402c9400ad0debe0b3f857060ff152ef5\": rpc error: code = NotFound desc = could not find container \"1f74b37b3819e43dd59813acec79b05402c9400ad0debe0b3f857060ff152ef5\": container with ID starting with 1f74b37b3819e43dd59813acec79b05402c9400ad0debe0b3f857060ff152ef5 not found: ID does not exist" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.296513 4811 scope.go:117] "RemoveContainer" containerID="3af2515e32cabf69a934355fadc076744ceea1775c110830c9e6a9b1415102de" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.297323 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3af2515e32cabf69a934355fadc076744ceea1775c110830c9e6a9b1415102de"} err="failed to get container status \"3af2515e32cabf69a934355fadc076744ceea1775c110830c9e6a9b1415102de\": rpc error: code = NotFound desc = could not find container \"3af2515e32cabf69a934355fadc076744ceea1775c110830c9e6a9b1415102de\": container with ID starting with 3af2515e32cabf69a934355fadc076744ceea1775c110830c9e6a9b1415102de not found: ID does not exist" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.370753 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dacc4f5b-746c-48bb-9d16-a30e402aa461-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dacc4f5b-746c-48bb-9d16-a30e402aa461\") " pod="openstack/nova-api-0" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.370811 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dacc4f5b-746c-48bb-9d16-a30e402aa461-public-tls-certs\") pod \"nova-api-0\" (UID: \"dacc4f5b-746c-48bb-9d16-a30e402aa461\") " pod="openstack/nova-api-0" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.370990 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gt4r\" (UniqueName: \"kubernetes.io/projected/dacc4f5b-746c-48bb-9d16-a30e402aa461-kube-api-access-5gt4r\") pod \"nova-api-0\" (UID: \"dacc4f5b-746c-48bb-9d16-a30e402aa461\") " pod="openstack/nova-api-0" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.371109 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/dacc4f5b-746c-48bb-9d16-a30e402aa461-config-data\") pod \"nova-api-0\" (UID: \"dacc4f5b-746c-48bb-9d16-a30e402aa461\") " pod="openstack/nova-api-0" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.371148 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dacc4f5b-746c-48bb-9d16-a30e402aa461-logs\") pod \"nova-api-0\" (UID: \"dacc4f5b-746c-48bb-9d16-a30e402aa461\") " pod="openstack/nova-api-0" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.371234 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dacc4f5b-746c-48bb-9d16-a30e402aa461-internal-tls-certs\") pod \"nova-api-0\" (UID: \"dacc4f5b-746c-48bb-9d16-a30e402aa461\") " pod="openstack/nova-api-0" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.479836 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dacc4f5b-746c-48bb-9d16-a30e402aa461-public-tls-certs\") pod \"nova-api-0\" (UID: \"dacc4f5b-746c-48bb-9d16-a30e402aa461\") " pod="openstack/nova-api-0" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.480028 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gt4r\" (UniqueName: \"kubernetes.io/projected/dacc4f5b-746c-48bb-9d16-a30e402aa461-kube-api-access-5gt4r\") pod \"nova-api-0\" (UID: \"dacc4f5b-746c-48bb-9d16-a30e402aa461\") " pod="openstack/nova-api-0" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.480143 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dacc4f5b-746c-48bb-9d16-a30e402aa461-config-data\") pod \"nova-api-0\" (UID: \"dacc4f5b-746c-48bb-9d16-a30e402aa461\") " pod="openstack/nova-api-0" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.480195 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dacc4f5b-746c-48bb-9d16-a30e402aa461-logs\") pod \"nova-api-0\" (UID: \"dacc4f5b-746c-48bb-9d16-a30e402aa461\") " pod="openstack/nova-api-0" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.480274 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dacc4f5b-746c-48bb-9d16-a30e402aa461-internal-tls-certs\") pod \"nova-api-0\" (UID: \"dacc4f5b-746c-48bb-9d16-a30e402aa461\") " pod="openstack/nova-api-0" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.480312 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dacc4f5b-746c-48bb-9d16-a30e402aa461-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dacc4f5b-746c-48bb-9d16-a30e402aa461\") " pod="openstack/nova-api-0" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.493740 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dacc4f5b-746c-48bb-9d16-a30e402aa461-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dacc4f5b-746c-48bb-9d16-a30e402aa461\") " pod="openstack/nova-api-0" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.495499 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/dacc4f5b-746c-48bb-9d16-a30e402aa461-logs\") pod \"nova-api-0\" (UID: \"dacc4f5b-746c-48bb-9d16-a30e402aa461\") " pod="openstack/nova-api-0" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.499647 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dacc4f5b-746c-48bb-9d16-a30e402aa461-internal-tls-certs\") pod \"nova-api-0\" (UID: \"dacc4f5b-746c-48bb-9d16-a30e402aa461\") " pod="openstack/nova-api-0" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.501450 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dacc4f5b-746c-48bb-9d16-a30e402aa461-config-data\") pod \"nova-api-0\" (UID: \"dacc4f5b-746c-48bb-9d16-a30e402aa461\") " pod="openstack/nova-api-0" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.515444 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dacc4f5b-746c-48bb-9d16-a30e402aa461-public-tls-certs\") pod \"nova-api-0\" (UID: \"dacc4f5b-746c-48bb-9d16-a30e402aa461\") " pod="openstack/nova-api-0" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.521003 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gt4r\" (UniqueName: \"kubernetes.io/projected/dacc4f5b-746c-48bb-9d16-a30e402aa461-kube-api-access-5gt4r\") pod \"nova-api-0\" (UID: \"dacc4f5b-746c-48bb-9d16-a30e402aa461\") " pod="openstack/nova-api-0" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.566441 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.581134 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/738ef216-ebaa-4000-8021-1c3b675abb3a-combined-ca-bundle\") pod \"738ef216-ebaa-4000-8021-1c3b675abb3a\" (UID: \"738ef216-ebaa-4000-8021-1c3b675abb3a\") " Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.581409 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/738ef216-ebaa-4000-8021-1c3b675abb3a-config-data\") pod \"738ef216-ebaa-4000-8021-1c3b675abb3a\" (UID: \"738ef216-ebaa-4000-8021-1c3b675abb3a\") " Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.581527 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrg9v\" (UniqueName: \"kubernetes.io/projected/738ef216-ebaa-4000-8021-1c3b675abb3a-kube-api-access-vrg9v\") pod \"738ef216-ebaa-4000-8021-1c3b675abb3a\" (UID: \"738ef216-ebaa-4000-8021-1c3b675abb3a\") " Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.585064 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/738ef216-ebaa-4000-8021-1c3b675abb3a-kube-api-access-vrg9v" (OuterVolumeSpecName: "kube-api-access-vrg9v") pod "738ef216-ebaa-4000-8021-1c3b675abb3a" (UID: "738ef216-ebaa-4000-8021-1c3b675abb3a"). InnerVolumeSpecName "kube-api-access-vrg9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.597047 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.604836 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/738ef216-ebaa-4000-8021-1c3b675abb3a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "738ef216-ebaa-4000-8021-1c3b675abb3a" (UID: "738ef216-ebaa-4000-8021-1c3b675abb3a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.627561 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/738ef216-ebaa-4000-8021-1c3b675abb3a-config-data" (OuterVolumeSpecName: "config-data") pod "738ef216-ebaa-4000-8021-1c3b675abb3a" (UID: "738ef216-ebaa-4000-8021-1c3b675abb3a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.687747 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/738ef216-ebaa-4000-8021-1c3b675abb3a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.687807 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/738ef216-ebaa-4000-8021-1c3b675abb3a-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:55 crc kubenswrapper[4811]: I0122 09:23:55.687820 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrg9v\" (UniqueName: \"kubernetes.io/projected/738ef216-ebaa-4000-8021-1c3b675abb3a-kube-api-access-vrg9v\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:56 crc kubenswrapper[4811]: I0122 09:23:56.004609 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cb1658a-c6d5-4087-9978-64888c975b22" path="/var/lib/kubelet/pods/9cb1658a-c6d5-4087-9978-64888c975b22/volumes" Jan 22 09:23:56 crc kubenswrapper[4811]: I0122 09:23:56.109906 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 09:23:56 crc kubenswrapper[4811]: I0122 09:23:56.212814 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dacc4f5b-746c-48bb-9d16-a30e402aa461","Type":"ContainerStarted","Data":"9fd90a25a11502ceb32c4621e5057ffb4be779476838dc8ace8727060425e3c6"} Jan 22 09:23:56 crc kubenswrapper[4811]: I0122 09:23:56.215421 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"738ef216-ebaa-4000-8021-1c3b675abb3a","Type":"ContainerDied","Data":"d6777979f00bfb1aba0b48e9a0a87023bfa9fef2d133bb3d6c74b8eacec33ab1"} Jan 22 09:23:56 crc kubenswrapper[4811]: I0122 09:23:56.215533 4811 scope.go:117] "RemoveContainer" containerID="b41df3f94807c81186b37a6668bb739f700725818de37a3652af3b6080e12c13" Jan 22 09:23:56 crc kubenswrapper[4811]: I0122 09:23:56.215472 4811 util.go:48] "No ready sandbox for pod can be found. 
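
The interleaved PLEG events above (ContainerDied for the old nova-api-0, nova-scheduler-0, and nova-metadata-0 containers, ContainerStarted for their replacements) are easier to follow when grouped per pod. A minimal Python sketch, assuming exactly the journald phrasing captured here, that rebuilds the per-pod timeline from a journal fed on stdin:

    import re
    import sys
    from collections import defaultdict

    # Matches the kubelet PLEG records in this capture, e.g.:
    #   kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0"
    #   event={"ID":"...","Type":"ContainerDied","Data":"<64-hex container id>"}
    PLEG = re.compile(
        r'SyncLoop \(PLEG\): event for pod" pod="(?P<pod>[^"]+)" '
        r'event=\{"ID":"[^"]+","Type":"(?P<type>[^"]+)","Data":"(?P<data>[^"]+)"\}'
    )

    def pleg_timeline(lines):
        """Return {pod: [(event type, shortened container/sandbox id), ...]}."""
        timeline = defaultdict(list)
        for line in lines:
            # finditer copes with several records run together on one line
            for m in PLEG.finditer(line):
                timeline[m.group("pod")].append((m.group("type"), m.group("data")[:12]))
        return timeline

    if __name__ == "__main__":
        for pod, events in pleg_timeline(sys.stdin).items():
            print(pod)
            for etype, short_id in events:
                print("   ", etype, short_id)
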
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 09:23:56 crc kubenswrapper[4811]: I0122 09:23:56.233240 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 09:23:56 crc kubenswrapper[4811]: I0122 09:23:56.242931 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 09:23:56 crc kubenswrapper[4811]: I0122 09:23:56.254168 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 09:23:56 crc kubenswrapper[4811]: E0122 09:23:56.254589 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="738ef216-ebaa-4000-8021-1c3b675abb3a" containerName="nova-scheduler-scheduler" Jan 22 09:23:56 crc kubenswrapper[4811]: I0122 09:23:56.254610 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="738ef216-ebaa-4000-8021-1c3b675abb3a" containerName="nova-scheduler-scheduler" Jan 22 09:23:56 crc kubenswrapper[4811]: I0122 09:23:56.254828 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="738ef216-ebaa-4000-8021-1c3b675abb3a" containerName="nova-scheduler-scheduler" Jan 22 09:23:56 crc kubenswrapper[4811]: I0122 09:23:56.257292 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 09:23:56 crc kubenswrapper[4811]: I0122 09:23:56.259187 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 22 09:23:56 crc kubenswrapper[4811]: I0122 09:23:56.284850 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 09:23:56 crc kubenswrapper[4811]: I0122 09:23:56.297999 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5phxm\" (UniqueName: \"kubernetes.io/projected/d8786fe9-36cf-4e1e-8d24-9eaeba9b1f9b-kube-api-access-5phxm\") pod \"nova-scheduler-0\" (UID: \"d8786fe9-36cf-4e1e-8d24-9eaeba9b1f9b\") " pod="openstack/nova-scheduler-0" Jan 22 09:23:56 crc kubenswrapper[4811]: I0122 09:23:56.298087 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8786fe9-36cf-4e1e-8d24-9eaeba9b1f9b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d8786fe9-36cf-4e1e-8d24-9eaeba9b1f9b\") " pod="openstack/nova-scheduler-0" Jan 22 09:23:56 crc kubenswrapper[4811]: I0122 09:23:56.298342 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8786fe9-36cf-4e1e-8d24-9eaeba9b1f9b-config-data\") pod \"nova-scheduler-0\" (UID: \"d8786fe9-36cf-4e1e-8d24-9eaeba9b1f9b\") " pod="openstack/nova-scheduler-0" Jan 22 09:23:56 crc kubenswrapper[4811]: I0122 09:23:56.400647 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8786fe9-36cf-4e1e-8d24-9eaeba9b1f9b-config-data\") pod \"nova-scheduler-0\" (UID: \"d8786fe9-36cf-4e1e-8d24-9eaeba9b1f9b\") " pod="openstack/nova-scheduler-0" Jan 22 09:23:56 crc kubenswrapper[4811]: I0122 09:23:56.400866 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5phxm\" (UniqueName: \"kubernetes.io/projected/d8786fe9-36cf-4e1e-8d24-9eaeba9b1f9b-kube-api-access-5phxm\") pod \"nova-scheduler-0\" (UID: \"d8786fe9-36cf-4e1e-8d24-9eaeba9b1f9b\") " pod="openstack/nova-scheduler-0" Jan 22 09:23:56 crc 
kubenswrapper[4811]: I0122 09:23:56.400931 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8786fe9-36cf-4e1e-8d24-9eaeba9b1f9b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d8786fe9-36cf-4e1e-8d24-9eaeba9b1f9b\") " pod="openstack/nova-scheduler-0" Jan 22 09:23:56 crc kubenswrapper[4811]: I0122 09:23:56.404050 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8786fe9-36cf-4e1e-8d24-9eaeba9b1f9b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d8786fe9-36cf-4e1e-8d24-9eaeba9b1f9b\") " pod="openstack/nova-scheduler-0" Jan 22 09:23:56 crc kubenswrapper[4811]: I0122 09:23:56.404168 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8786fe9-36cf-4e1e-8d24-9eaeba9b1f9b-config-data\") pod \"nova-scheduler-0\" (UID: \"d8786fe9-36cf-4e1e-8d24-9eaeba9b1f9b\") " pod="openstack/nova-scheduler-0" Jan 22 09:23:56 crc kubenswrapper[4811]: I0122 09:23:56.418777 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5phxm\" (UniqueName: \"kubernetes.io/projected/d8786fe9-36cf-4e1e-8d24-9eaeba9b1f9b-kube-api-access-5phxm\") pod \"nova-scheduler-0\" (UID: \"d8786fe9-36cf-4e1e-8d24-9eaeba9b1f9b\") " pod="openstack/nova-scheduler-0" Jan 22 09:23:56 crc kubenswrapper[4811]: I0122 09:23:56.570925 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 09:23:57 crc kubenswrapper[4811]: I0122 09:23:57.185263 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 09:23:57 crc kubenswrapper[4811]: I0122 09:23:57.231429 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dacc4f5b-746c-48bb-9d16-a30e402aa461","Type":"ContainerStarted","Data":"2d7da185d5b816b7a7e73a4ba087ddf32fad7a34e6118b5f8711f02d97e7b1c8"} Jan 22 09:23:57 crc kubenswrapper[4811]: I0122 09:23:57.231472 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dacc4f5b-746c-48bb-9d16-a30e402aa461","Type":"ContainerStarted","Data":"4c9512013c3d15f187a7a638a23cf85ce8e7496754a3c0c87c8129e574dec0d2"} Jan 22 09:23:57 crc kubenswrapper[4811]: I0122 09:23:57.235088 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d8786fe9-36cf-4e1e-8d24-9eaeba9b1f9b","Type":"ContainerStarted","Data":"dde90a0cdcd0b90bd1edf7bc16995531a7f93df6b51d83026abb0b03ef9fe162"} Jan 22 09:23:57 crc kubenswrapper[4811]: I0122 09:23:57.263972 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.26395588 podStartE2EDuration="2.26395588s" podCreationTimestamp="2026-01-22 09:23:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:23:57.254908036 +0000 UTC m=+1081.577095180" watchObservedRunningTime="2026-01-22 09:23:57.26395588 +0000 UTC m=+1081.586143003" Jan 22 09:23:57 crc kubenswrapper[4811]: I0122 09:23:57.927066 4811 util.go:48] "No ready sandbox for pod can be found. 
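
The pod_startup_latency_tracker records scattered through this stretch (nova-api-0 at roughly 2.26s above, nova-scheduler-0 and nova-metadata-0 shortly after) can be pulled out the same way. A small sketch under the same exact-phrasing assumption:

    import re
    import sys

    # Matches pod_startup_latency_tracker records such as:
    #   "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.26395588
    LATENCY = re.compile(
        r'Observed pod startup duration" pod="(?P<pod>[^"]+)" '
        r'podStartSLOduration=(?P<slo>[0-9.]+)'
    )

    for line in sys.stdin:
        for m in LATENCY.finditer(line):
            print(f'{m.group("pod")}: {float(m.group("slo")):.2f}s creation-to-running')
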
Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 09:23:57 crc kubenswrapper[4811]: I0122 09:23:57.935568 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90-combined-ca-bundle\") pod \"8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90\" (UID: \"8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90\") " Jan 22 09:23:57 crc kubenswrapper[4811]: I0122 09:23:57.935694 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90-config-data\") pod \"8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90\" (UID: \"8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90\") " Jan 22 09:23:57 crc kubenswrapper[4811]: I0122 09:23:57.935743 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90-logs\") pod \"8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90\" (UID: \"8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90\") " Jan 22 09:23:57 crc kubenswrapper[4811]: I0122 09:23:57.935774 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jh6nn\" (UniqueName: \"kubernetes.io/projected/8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90-kube-api-access-jh6nn\") pod \"8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90\" (UID: \"8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90\") " Jan 22 09:23:57 crc kubenswrapper[4811]: I0122 09:23:57.935816 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90-nova-metadata-tls-certs\") pod \"8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90\" (UID: \"8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90\") " Jan 22 09:23:57 crc kubenswrapper[4811]: I0122 09:23:57.937285 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90-logs" (OuterVolumeSpecName: "logs") pod "8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90" (UID: "8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:23:57 crc kubenswrapper[4811]: I0122 09:23:57.941075 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90-kube-api-access-jh6nn" (OuterVolumeSpecName: "kube-api-access-jh6nn") pod "8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90" (UID: "8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90"). InnerVolumeSpecName "kube-api-access-jh6nn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:23:57 crc kubenswrapper[4811]: I0122 09:23:57.965957 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90" (UID: "8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:23:57 crc kubenswrapper[4811]: I0122 09:23:57.972787 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90-config-data" (OuterVolumeSpecName: "config-data") pod "8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90" (UID: "8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:23:57 crc kubenswrapper[4811]: I0122 09:23:57.995022 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90" (UID: "8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:23:58 crc kubenswrapper[4811]: I0122 09:23:58.016025 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="738ef216-ebaa-4000-8021-1c3b675abb3a" path="/var/lib/kubelet/pods/738ef216-ebaa-4000-8021-1c3b675abb3a/volumes" Jan 22 09:23:58 crc kubenswrapper[4811]: I0122 09:23:58.037751 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:58 crc kubenswrapper[4811]: I0122 09:23:58.037870 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90-logs\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:58 crc kubenswrapper[4811]: I0122 09:23:58.038022 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jh6nn\" (UniqueName: \"kubernetes.io/projected/8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90-kube-api-access-jh6nn\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:58 crc kubenswrapper[4811]: I0122 09:23:58.038087 4811 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:58 crc kubenswrapper[4811]: I0122 09:23:58.038146 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:23:58 crc kubenswrapper[4811]: I0122 09:23:58.252912 4811 generic.go:334] "Generic (PLEG): container finished" podID="8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90" containerID="9aeee3bf886228f178ff79e6abb6efd4b28625304470f1cd6dfe411000da41dd" exitCode=0 Jan 22 09:23:58 crc kubenswrapper[4811]: I0122 09:23:58.252982 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 09:23:58 crc kubenswrapper[4811]: I0122 09:23:58.253004 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90","Type":"ContainerDied","Data":"9aeee3bf886228f178ff79e6abb6efd4b28625304470f1cd6dfe411000da41dd"} Jan 22 09:23:58 crc kubenswrapper[4811]: I0122 09:23:58.254070 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90","Type":"ContainerDied","Data":"ffbad12c23014d97efcc1f01bc2056608670f929248d19f36f658602c9de47c7"} Jan 22 09:23:58 crc kubenswrapper[4811]: I0122 09:23:58.254114 4811 scope.go:117] "RemoveContainer" containerID="9aeee3bf886228f178ff79e6abb6efd4b28625304470f1cd6dfe411000da41dd" Jan 22 09:23:58 crc kubenswrapper[4811]: I0122 09:23:58.257467 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d8786fe9-36cf-4e1e-8d24-9eaeba9b1f9b","Type":"ContainerStarted","Data":"f8560580497ee5efb52ce9b81245cfc83e8d6abff7f4d138f676aedb6f99fb4d"} Jan 22 09:23:58 crc kubenswrapper[4811]: I0122 09:23:58.292298 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 09:23:58 crc kubenswrapper[4811]: I0122 09:23:58.296788 4811 scope.go:117] "RemoveContainer" containerID="638643c7bf0ec3aeec80d2b38cd6ae1c6cf1f9ef9681af2086a1574c73c87a09" Jan 22 09:23:58 crc kubenswrapper[4811]: I0122 09:23:58.305751 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 09:23:58 crc kubenswrapper[4811]: I0122 09:23:58.313100 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 22 09:23:58 crc kubenswrapper[4811]: E0122 09:23:58.313552 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90" containerName="nova-metadata-metadata" Jan 22 09:23:58 crc kubenswrapper[4811]: I0122 09:23:58.313572 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90" containerName="nova-metadata-metadata" Jan 22 09:23:58 crc kubenswrapper[4811]: E0122 09:23:58.313596 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90" containerName="nova-metadata-log" Jan 22 09:23:58 crc kubenswrapper[4811]: I0122 09:23:58.313602 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90" containerName="nova-metadata-log" Jan 22 09:23:58 crc kubenswrapper[4811]: I0122 09:23:58.313795 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90" containerName="nova-metadata-metadata" Jan 22 09:23:58 crc kubenswrapper[4811]: I0122 09:23:58.313815 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90" containerName="nova-metadata-log" Jan 22 09:23:58 crc kubenswrapper[4811]: I0122 09:23:58.314675 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 09:23:58 crc kubenswrapper[4811]: I0122 09:23:58.316799 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 22 09:23:58 crc kubenswrapper[4811]: I0122 09:23:58.316830 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.316814581 podStartE2EDuration="2.316814581s" podCreationTimestamp="2026-01-22 09:23:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:23:58.297113867 +0000 UTC m=+1082.619300990" watchObservedRunningTime="2026-01-22 09:23:58.316814581 +0000 UTC m=+1082.639001723" Jan 22 09:23:58 crc kubenswrapper[4811]: I0122 09:23:58.316952 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 22 09:23:58 crc kubenswrapper[4811]: I0122 09:23:58.332305 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 09:23:58 crc kubenswrapper[4811]: I0122 09:23:58.341410 4811 scope.go:117] "RemoveContainer" containerID="9aeee3bf886228f178ff79e6abb6efd4b28625304470f1cd6dfe411000da41dd" Jan 22 09:23:58 crc kubenswrapper[4811]: E0122 09:23:58.346929 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9aeee3bf886228f178ff79e6abb6efd4b28625304470f1cd6dfe411000da41dd\": container with ID starting with 9aeee3bf886228f178ff79e6abb6efd4b28625304470f1cd6dfe411000da41dd not found: ID does not exist" containerID="9aeee3bf886228f178ff79e6abb6efd4b28625304470f1cd6dfe411000da41dd" Jan 22 09:23:58 crc kubenswrapper[4811]: I0122 09:23:58.346983 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9aeee3bf886228f178ff79e6abb6efd4b28625304470f1cd6dfe411000da41dd"} err="failed to get container status \"9aeee3bf886228f178ff79e6abb6efd4b28625304470f1cd6dfe411000da41dd\": rpc error: code = NotFound desc = could not find container \"9aeee3bf886228f178ff79e6abb6efd4b28625304470f1cd6dfe411000da41dd\": container with ID starting with 9aeee3bf886228f178ff79e6abb6efd4b28625304470f1cd6dfe411000da41dd not found: ID does not exist" Jan 22 09:23:58 crc kubenswrapper[4811]: I0122 09:23:58.347007 4811 scope.go:117] "RemoveContainer" containerID="638643c7bf0ec3aeec80d2b38cd6ae1c6cf1f9ef9681af2086a1574c73c87a09" Jan 22 09:23:58 crc kubenswrapper[4811]: E0122 09:23:58.347466 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"638643c7bf0ec3aeec80d2b38cd6ae1c6cf1f9ef9681af2086a1574c73c87a09\": container with ID starting with 638643c7bf0ec3aeec80d2b38cd6ae1c6cf1f9ef9681af2086a1574c73c87a09 not found: ID does not exist" containerID="638643c7bf0ec3aeec80d2b38cd6ae1c6cf1f9ef9681af2086a1574c73c87a09" Jan 22 09:23:58 crc kubenswrapper[4811]: I0122 09:23:58.347498 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"638643c7bf0ec3aeec80d2b38cd6ae1c6cf1f9ef9681af2086a1574c73c87a09"} err="failed to get container status \"638643c7bf0ec3aeec80d2b38cd6ae1c6cf1f9ef9681af2086a1574c73c87a09\": rpc error: code = NotFound desc = could not find container \"638643c7bf0ec3aeec80d2b38cd6ae1c6cf1f9ef9681af2086a1574c73c87a09\": container with ID starting with 
638643c7bf0ec3aeec80d2b38cd6ae1c6cf1f9ef9681af2086a1574c73c87a09 not found: ID does not exist" Jan 22 09:23:58 crc kubenswrapper[4811]: I0122 09:23:58.348483 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ecace20-035f-4590-a0f0-32914d411253-config-data\") pod \"nova-metadata-0\" (UID: \"9ecace20-035f-4590-a0f0-32914d411253\") " pod="openstack/nova-metadata-0" Jan 22 09:23:58 crc kubenswrapper[4811]: I0122 09:23:58.348530 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ecace20-035f-4590-a0f0-32914d411253-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9ecace20-035f-4590-a0f0-32914d411253\") " pod="openstack/nova-metadata-0" Jan 22 09:23:58 crc kubenswrapper[4811]: I0122 09:23:58.348579 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ecace20-035f-4590-a0f0-32914d411253-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9ecace20-035f-4590-a0f0-32914d411253\") " pod="openstack/nova-metadata-0" Jan 22 09:23:58 crc kubenswrapper[4811]: I0122 09:23:58.348643 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ecace20-035f-4590-a0f0-32914d411253-logs\") pod \"nova-metadata-0\" (UID: \"9ecace20-035f-4590-a0f0-32914d411253\") " pod="openstack/nova-metadata-0" Jan 22 09:23:58 crc kubenswrapper[4811]: I0122 09:23:58.348841 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ckwj\" (UniqueName: \"kubernetes.io/projected/9ecace20-035f-4590-a0f0-32914d411253-kube-api-access-2ckwj\") pod \"nova-metadata-0\" (UID: \"9ecace20-035f-4590-a0f0-32914d411253\") " pod="openstack/nova-metadata-0" Jan 22 09:23:58 crc kubenswrapper[4811]: I0122 09:23:58.450645 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ecace20-035f-4590-a0f0-32914d411253-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9ecace20-035f-4590-a0f0-32914d411253\") " pod="openstack/nova-metadata-0" Jan 22 09:23:58 crc kubenswrapper[4811]: I0122 09:23:58.450763 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ecace20-035f-4590-a0f0-32914d411253-logs\") pod \"nova-metadata-0\" (UID: \"9ecace20-035f-4590-a0f0-32914d411253\") " pod="openstack/nova-metadata-0" Jan 22 09:23:58 crc kubenswrapper[4811]: I0122 09:23:58.450872 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ckwj\" (UniqueName: \"kubernetes.io/projected/9ecace20-035f-4590-a0f0-32914d411253-kube-api-access-2ckwj\") pod \"nova-metadata-0\" (UID: \"9ecace20-035f-4590-a0f0-32914d411253\") " pod="openstack/nova-metadata-0" Jan 22 09:23:58 crc kubenswrapper[4811]: I0122 09:23:58.450977 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ecace20-035f-4590-a0f0-32914d411253-config-data\") pod \"nova-metadata-0\" (UID: \"9ecace20-035f-4590-a0f0-32914d411253\") " pod="openstack/nova-metadata-0" Jan 22 09:23:58 crc kubenswrapper[4811]: I0122 09:23:58.451018 4811 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ecace20-035f-4590-a0f0-32914d411253-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9ecace20-035f-4590-a0f0-32914d411253\") " pod="openstack/nova-metadata-0" Jan 22 09:23:58 crc kubenswrapper[4811]: I0122 09:23:58.452709 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ecace20-035f-4590-a0f0-32914d411253-logs\") pod \"nova-metadata-0\" (UID: \"9ecace20-035f-4590-a0f0-32914d411253\") " pod="openstack/nova-metadata-0" Jan 22 09:23:58 crc kubenswrapper[4811]: I0122 09:23:58.455092 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ecace20-035f-4590-a0f0-32914d411253-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9ecace20-035f-4590-a0f0-32914d411253\") " pod="openstack/nova-metadata-0" Jan 22 09:23:58 crc kubenswrapper[4811]: I0122 09:23:58.455117 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ecace20-035f-4590-a0f0-32914d411253-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9ecace20-035f-4590-a0f0-32914d411253\") " pod="openstack/nova-metadata-0" Jan 22 09:23:58 crc kubenswrapper[4811]: I0122 09:23:58.459095 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ecace20-035f-4590-a0f0-32914d411253-config-data\") pod \"nova-metadata-0\" (UID: \"9ecace20-035f-4590-a0f0-32914d411253\") " pod="openstack/nova-metadata-0" Jan 22 09:23:58 crc kubenswrapper[4811]: I0122 09:23:58.466100 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ckwj\" (UniqueName: \"kubernetes.io/projected/9ecace20-035f-4590-a0f0-32914d411253-kube-api-access-2ckwj\") pod \"nova-metadata-0\" (UID: \"9ecace20-035f-4590-a0f0-32914d411253\") " pod="openstack/nova-metadata-0" Jan 22 09:23:58 crc kubenswrapper[4811]: I0122 09:23:58.636649 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 09:23:59 crc kubenswrapper[4811]: I0122 09:23:59.053891 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 09:23:59 crc kubenswrapper[4811]: W0122 09:23:59.064965 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ecace20_035f_4590_a0f0_32914d411253.slice/crio-133c8b119919f6bdaece1f31da1d032576f9cfa130ae5aa042ea97081817e07a WatchSource:0}: Error finding container 133c8b119919f6bdaece1f31da1d032576f9cfa130ae5aa042ea97081817e07a: Status 404 returned error can't find the container with id 133c8b119919f6bdaece1f31da1d032576f9cfa130ae5aa042ea97081817e07a Jan 22 09:23:59 crc kubenswrapper[4811]: I0122 09:23:59.267091 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9ecace20-035f-4590-a0f0-32914d411253","Type":"ContainerStarted","Data":"91cbeee16fefe8ead95731becd5803cbdeff4bd9d9088d9934053e29f0d6fe2c"} Jan 22 09:23:59 crc kubenswrapper[4811]: I0122 09:23:59.267350 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9ecace20-035f-4590-a0f0-32914d411253","Type":"ContainerStarted","Data":"133c8b119919f6bdaece1f31da1d032576f9cfa130ae5aa042ea97081817e07a"} Jan 22 09:24:00 crc kubenswrapper[4811]: I0122 09:24:00.028519 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90" path="/var/lib/kubelet/pods/8807b3ea-b4ec-44d4-b2bc-d79fcb7c0a90/volumes" Jan 22 09:24:00 crc kubenswrapper[4811]: I0122 09:24:00.282388 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9ecace20-035f-4590-a0f0-32914d411253","Type":"ContainerStarted","Data":"10b941c8aed6b38fd8f42713b171866067f6fe1920dd25af235a58e3433d09dc"} Jan 22 09:24:00 crc kubenswrapper[4811]: I0122 09:24:00.299532 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.299515376 podStartE2EDuration="2.299515376s" podCreationTimestamp="2026-01-22 09:23:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:24:00.29568969 +0000 UTC m=+1084.617876814" watchObservedRunningTime="2026-01-22 09:24:00.299515376 +0000 UTC m=+1084.621702498" Jan 22 09:24:01 crc kubenswrapper[4811]: I0122 09:24:01.572401 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 22 09:24:03 crc kubenswrapper[4811]: I0122 09:24:03.637221 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 22 09:24:03 crc kubenswrapper[4811]: I0122 09:24:03.637273 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 22 09:24:05 crc kubenswrapper[4811]: I0122 09:24:05.501358 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:24:05 crc kubenswrapper[4811]: I0122 09:24:05.502285 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:24:05 crc kubenswrapper[4811]: I0122 09:24:05.597294 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 22 09:24:05 crc kubenswrapper[4811]: I0122 09:24:05.597510 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 22 09:24:06 crc kubenswrapper[4811]: I0122 09:24:06.572332 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 22 09:24:06 crc kubenswrapper[4811]: I0122 09:24:06.595794 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 22 09:24:06 crc kubenswrapper[4811]: I0122 09:24:06.607751 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="dacc4f5b-746c-48bb-9d16-a30e402aa461" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.190:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 09:24:06 crc kubenswrapper[4811]: I0122 09:24:06.607763 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="dacc4f5b-746c-48bb-9d16-a30e402aa461" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.190:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 09:24:07 crc kubenswrapper[4811]: I0122 09:24:07.363530 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 22 09:24:08 crc kubenswrapper[4811]: I0122 09:24:08.637149 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 22 09:24:08 crc kubenswrapper[4811]: I0122 09:24:08.637233 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 22 09:24:09 crc kubenswrapper[4811]: I0122 09:24:09.650766 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9ecace20-035f-4590-a0f0-32914d411253" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.192:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 09:24:09 crc kubenswrapper[4811]: I0122 09:24:09.650769 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9ecace20-035f-4590-a0f0-32914d411253" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.192:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 09:24:15 crc kubenswrapper[4811]: I0122 09:24:15.613320 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 22 09:24:15 crc kubenswrapper[4811]: I0122 09:24:15.614203 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 22 09:24:15 crc kubenswrapper[4811]: I0122 09:24:15.618741 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 22 09:24:15 crc kubenswrapper[4811]: I0122 09:24:15.619569 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 22 09:24:15 crc kubenswrapper[4811]: I0122 09:24:15.902897 4811 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 22 09:24:16 crc kubenswrapper[4811]: I0122 09:24:16.401587 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 22 09:24:16 crc kubenswrapper[4811]: I0122 09:24:16.407458 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 22 09:24:18 crc kubenswrapper[4811]: I0122 09:24:18.643684 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 22 09:24:18 crc kubenswrapper[4811]: I0122 09:24:18.644269 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 22 09:24:18 crc kubenswrapper[4811]: I0122 09:24:18.650454 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 22 09:24:18 crc kubenswrapper[4811]: I0122 09:24:18.651736 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 22 09:24:24 crc kubenswrapper[4811]: I0122 09:24:24.770801 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 09:24:26 crc kubenswrapper[4811]: I0122 09:24:26.369785 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 09:24:28 crc kubenswrapper[4811]: I0122 09:24:28.738715 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="9e61948b-1761-46b8-9ab3-e776224f335a" containerName="rabbitmq" containerID="cri-o://1acf2d1b1ace65c9ae049c114455199e1f10221371de344fd552a4aeabf05033" gracePeriod=604797 Jan 22 09:24:29 crc kubenswrapper[4811]: I0122 09:24:29.749592 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="9e61948b-1761-46b8-9ab3-e776224f335a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.94:5671: connect: connection refused" Jan 22 09:24:30 crc kubenswrapper[4811]: I0122 09:24:30.351494 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="0241fc5c-fa26-44a1-9db3-006b438b9123" containerName="rabbitmq" containerID="cri-o://5a60a75c8f726053514360ec57baed7b88c720089fe931ad5add6184105ab8ff" gracePeriod=604797 Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.145160 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.244752 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9e61948b-1761-46b8-9ab3-e776224f335a-server-conf\") pod \"9e61948b-1761-46b8-9ab3-e776224f335a\" (UID: \"9e61948b-1761-46b8-9ab3-e776224f335a\") " Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.245170 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9e61948b-1761-46b8-9ab3-e776224f335a-rabbitmq-plugins\") pod \"9e61948b-1761-46b8-9ab3-e776224f335a\" (UID: \"9e61948b-1761-46b8-9ab3-e776224f335a\") " Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.245478 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e61948b-1761-46b8-9ab3-e776224f335a-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "9e61948b-1761-46b8-9ab3-e776224f335a" (UID: "9e61948b-1761-46b8-9ab3-e776224f335a"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.245524 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"9e61948b-1761-46b8-9ab3-e776224f335a\" (UID: \"9e61948b-1761-46b8-9ab3-e776224f335a\") " Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.245792 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e61948b-1761-46b8-9ab3-e776224f335a-config-data\") pod \"9e61948b-1761-46b8-9ab3-e776224f335a\" (UID: \"9e61948b-1761-46b8-9ab3-e776224f335a\") " Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.245874 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9e61948b-1761-46b8-9ab3-e776224f335a-plugins-conf\") pod \"9e61948b-1761-46b8-9ab3-e776224f335a\" (UID: \"9e61948b-1761-46b8-9ab3-e776224f335a\") " Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.245948 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27qdb\" (UniqueName: \"kubernetes.io/projected/9e61948b-1761-46b8-9ab3-e776224f335a-kube-api-access-27qdb\") pod \"9e61948b-1761-46b8-9ab3-e776224f335a\" (UID: \"9e61948b-1761-46b8-9ab3-e776224f335a\") " Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.246034 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9e61948b-1761-46b8-9ab3-e776224f335a-rabbitmq-erlang-cookie\") pod \"9e61948b-1761-46b8-9ab3-e776224f335a\" (UID: \"9e61948b-1761-46b8-9ab3-e776224f335a\") " Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.246084 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9e61948b-1761-46b8-9ab3-e776224f335a-rabbitmq-confd\") pod \"9e61948b-1761-46b8-9ab3-e776224f335a\" (UID: \"9e61948b-1761-46b8-9ab3-e776224f335a\") " Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.246114 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/9e61948b-1761-46b8-9ab3-e776224f335a-rabbitmq-tls\") pod \"9e61948b-1761-46b8-9ab3-e776224f335a\" (UID: \"9e61948b-1761-46b8-9ab3-e776224f335a\") " Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.246154 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9e61948b-1761-46b8-9ab3-e776224f335a-erlang-cookie-secret\") pod \"9e61948b-1761-46b8-9ab3-e776224f335a\" (UID: \"9e61948b-1761-46b8-9ab3-e776224f335a\") " Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.246182 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9e61948b-1761-46b8-9ab3-e776224f335a-pod-info\") pod \"9e61948b-1761-46b8-9ab3-e776224f335a\" (UID: \"9e61948b-1761-46b8-9ab3-e776224f335a\") " Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.246491 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e61948b-1761-46b8-9ab3-e776224f335a-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "9e61948b-1761-46b8-9ab3-e776224f335a" (UID: "9e61948b-1761-46b8-9ab3-e776224f335a"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.246786 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e61948b-1761-46b8-9ab3-e776224f335a-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "9e61948b-1761-46b8-9ab3-e776224f335a" (UID: "9e61948b-1761-46b8-9ab3-e776224f335a"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.246958 4811 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9e61948b-1761-46b8-9ab3-e776224f335a-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.246981 4811 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9e61948b-1761-46b8-9ab3-e776224f335a-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.246990 4811 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9e61948b-1761-46b8-9ab3-e776224f335a-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.255137 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e61948b-1761-46b8-9ab3-e776224f335a-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "9e61948b-1761-46b8-9ab3-e776224f335a" (UID: "9e61948b-1761-46b8-9ab3-e776224f335a"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.262727 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "9e61948b-1761-46b8-9ab3-e776224f335a" (UID: "9e61948b-1761-46b8-9ab3-e776224f335a"). InnerVolumeSpecName "local-storage02-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.262926 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/9e61948b-1761-46b8-9ab3-e776224f335a-pod-info" (OuterVolumeSpecName: "pod-info") pod "9e61948b-1761-46b8-9ab3-e776224f335a" (UID: "9e61948b-1761-46b8-9ab3-e776224f335a"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.272933 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e61948b-1761-46b8-9ab3-e776224f335a-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "9e61948b-1761-46b8-9ab3-e776224f335a" (UID: "9e61948b-1761-46b8-9ab3-e776224f335a"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.273032 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e61948b-1761-46b8-9ab3-e776224f335a-config-data" (OuterVolumeSpecName: "config-data") pod "9e61948b-1761-46b8-9ab3-e776224f335a" (UID: "9e61948b-1761-46b8-9ab3-e776224f335a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.275338 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e61948b-1761-46b8-9ab3-e776224f335a-kube-api-access-27qdb" (OuterVolumeSpecName: "kube-api-access-27qdb") pod "9e61948b-1761-46b8-9ab3-e776224f335a" (UID: "9e61948b-1761-46b8-9ab3-e776224f335a"). InnerVolumeSpecName "kube-api-access-27qdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.295099 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e61948b-1761-46b8-9ab3-e776224f335a-server-conf" (OuterVolumeSpecName: "server-conf") pod "9e61948b-1761-46b8-9ab3-e776224f335a" (UID: "9e61948b-1761-46b8-9ab3-e776224f335a"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.348648 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27qdb\" (UniqueName: \"kubernetes.io/projected/9e61948b-1761-46b8-9ab3-e776224f335a-kube-api-access-27qdb\") on node \"crc\" DevicePath \"\"" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.348679 4811 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9e61948b-1761-46b8-9ab3-e776224f335a-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.348689 4811 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9e61948b-1761-46b8-9ab3-e776224f335a-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.348696 4811 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9e61948b-1761-46b8-9ab3-e776224f335a-pod-info\") on node \"crc\" DevicePath \"\"" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.348704 4811 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9e61948b-1761-46b8-9ab3-e776224f335a-server-conf\") on node \"crc\" DevicePath \"\"" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.348728 4811 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.348736 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e61948b-1761-46b8-9ab3-e776224f335a-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.351056 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e61948b-1761-46b8-9ab3-e776224f335a-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "9e61948b-1761-46b8-9ab3-e776224f335a" (UID: "9e61948b-1761-46b8-9ab3-e776224f335a"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.365349 4811 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.450302 4811 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.450342 4811 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9e61948b-1761-46b8-9ab3-e776224f335a-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.501558 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.501640 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.501698 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.502361 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"48a246c7a0e2d8e856bc2e774a41e4c4a571a73e6dcfe43b9eda16ad78191748"} pod="openshift-machine-config-operator/machine-config-daemon-txvcq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.502423 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" containerID="cri-o://48a246c7a0e2d8e856bc2e774a41e4c4a571a73e6dcfe43b9eda16ad78191748" gracePeriod=600 Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.559345 4811 generic.go:334] "Generic (PLEG): container finished" podID="9e61948b-1761-46b8-9ab3-e776224f335a" containerID="1acf2d1b1ace65c9ae049c114455199e1f10221371de344fd552a4aeabf05033" exitCode=0 Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.559391 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9e61948b-1761-46b8-9ab3-e776224f335a","Type":"ContainerDied","Data":"1acf2d1b1ace65c9ae049c114455199e1f10221371de344fd552a4aeabf05033"} Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.559421 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9e61948b-1761-46b8-9ab3-e776224f335a","Type":"ContainerDied","Data":"384cd764743c61387d16fa47233c3c5901d73950512ce9d783dff5cbef81eecb"} Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.559441 4811 scope.go:117] "RemoveContainer" 
containerID="1acf2d1b1ace65c9ae049c114455199e1f10221371de344fd552a4aeabf05033" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.559445 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.596667 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.606085 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.614741 4811 scope.go:117] "RemoveContainer" containerID="1e0f56dd8bb31f9c39584ee804cbf93abc9a85a0b2b1fa91f78ccc97ea695f74" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.629085 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 09:24:35 crc kubenswrapper[4811]: E0122 09:24:35.629495 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e61948b-1761-46b8-9ab3-e776224f335a" containerName="setup-container" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.629519 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e61948b-1761-46b8-9ab3-e776224f335a" containerName="setup-container" Jan 22 09:24:35 crc kubenswrapper[4811]: E0122 09:24:35.631471 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e61948b-1761-46b8-9ab3-e776224f335a" containerName="rabbitmq" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.631511 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e61948b-1761-46b8-9ab3-e776224f335a" containerName="rabbitmq" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.631902 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e61948b-1761-46b8-9ab3-e776224f335a" containerName="rabbitmq" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.633018 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.635527 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.635847 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.636038 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-2z5wj" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.637001 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.637230 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.637364 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.638801 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.644958 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.661137 4811 scope.go:117] "RemoveContainer" containerID="1acf2d1b1ace65c9ae049c114455199e1f10221371de344fd552a4aeabf05033" Jan 22 09:24:35 crc kubenswrapper[4811]: E0122 09:24:35.661961 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1acf2d1b1ace65c9ae049c114455199e1f10221371de344fd552a4aeabf05033\": container with ID starting with 1acf2d1b1ace65c9ae049c114455199e1f10221371de344fd552a4aeabf05033 not found: ID does not exist" containerID="1acf2d1b1ace65c9ae049c114455199e1f10221371de344fd552a4aeabf05033" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.662003 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1acf2d1b1ace65c9ae049c114455199e1f10221371de344fd552a4aeabf05033"} err="failed to get container status \"1acf2d1b1ace65c9ae049c114455199e1f10221371de344fd552a4aeabf05033\": rpc error: code = NotFound desc = could not find container \"1acf2d1b1ace65c9ae049c114455199e1f10221371de344fd552a4aeabf05033\": container with ID starting with 1acf2d1b1ace65c9ae049c114455199e1f10221371de344fd552a4aeabf05033 not found: ID does not exist" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.662074 4811 scope.go:117] "RemoveContainer" containerID="1e0f56dd8bb31f9c39584ee804cbf93abc9a85a0b2b1fa91f78ccc97ea695f74" Jan 22 09:24:35 crc kubenswrapper[4811]: E0122 09:24:35.662422 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e0f56dd8bb31f9c39584ee804cbf93abc9a85a0b2b1fa91f78ccc97ea695f74\": container with ID starting with 1e0f56dd8bb31f9c39584ee804cbf93abc9a85a0b2b1fa91f78ccc97ea695f74 not found: ID does not exist" containerID="1e0f56dd8bb31f9c39584ee804cbf93abc9a85a0b2b1fa91f78ccc97ea695f74" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.662452 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e0f56dd8bb31f9c39584ee804cbf93abc9a85a0b2b1fa91f78ccc97ea695f74"} err="failed to get container status 
\"1e0f56dd8bb31f9c39584ee804cbf93abc9a85a0b2b1fa91f78ccc97ea695f74\": rpc error: code = NotFound desc = could not find container \"1e0f56dd8bb31f9c39584ee804cbf93abc9a85a0b2b1fa91f78ccc97ea695f74\": container with ID starting with 1e0f56dd8bb31f9c39584ee804cbf93abc9a85a0b2b1fa91f78ccc97ea695f74 not found: ID does not exist" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.756696 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ztdk\" (UniqueName: \"kubernetes.io/projected/f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8-kube-api-access-5ztdk\") pod \"rabbitmq-server-0\" (UID: \"f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8\") " pod="openstack/rabbitmq-server-0" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.756999 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8\") " pod="openstack/rabbitmq-server-0" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.757037 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8\") " pod="openstack/rabbitmq-server-0" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.757094 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8\") " pod="openstack/rabbitmq-server-0" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.757112 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8\") " pod="openstack/rabbitmq-server-0" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.757170 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8\") " pod="openstack/rabbitmq-server-0" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.757205 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8\") " pod="openstack/rabbitmq-server-0" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.757295 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8\") " pod="openstack/rabbitmq-server-0" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.757335 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8-config-data\") pod \"rabbitmq-server-0\" (UID: \"f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8\") " pod="openstack/rabbitmq-server-0" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.757559 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8\") " pod="openstack/rabbitmq-server-0" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.757655 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8\") " pod="openstack/rabbitmq-server-0" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.859455 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8\") " pod="openstack/rabbitmq-server-0" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.859514 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8-config-data\") pod \"rabbitmq-server-0\" (UID: \"f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8\") " pod="openstack/rabbitmq-server-0" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.859601 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8\") " pod="openstack/rabbitmq-server-0" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.859656 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8\") " pod="openstack/rabbitmq-server-0" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.859715 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ztdk\" (UniqueName: \"kubernetes.io/projected/f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8-kube-api-access-5ztdk\") pod \"rabbitmq-server-0\" (UID: \"f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8\") " pod="openstack/rabbitmq-server-0" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.859750 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8\") " pod="openstack/rabbitmq-server-0" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.859775 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: 
\"f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8\") " pod="openstack/rabbitmq-server-0" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.859815 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8\") " pod="openstack/rabbitmq-server-0" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.859832 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8\") " pod="openstack/rabbitmq-server-0" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.859868 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8\") " pod="openstack/rabbitmq-server-0" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.859895 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8\") " pod="openstack/rabbitmq-server-0" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.860310 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8\") " pod="openstack/rabbitmq-server-0" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.860374 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8\") " pod="openstack/rabbitmq-server-0" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.860688 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8-config-data\") pod \"rabbitmq-server-0\" (UID: \"f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8\") " pod="openstack/rabbitmq-server-0" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.860866 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.861497 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8\") " pod="openstack/rabbitmq-server-0" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.864119 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8\") " pod="openstack/rabbitmq-server-0" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.864580 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8\") " pod="openstack/rabbitmq-server-0" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.864855 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8\") " pod="openstack/rabbitmq-server-0" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.865876 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8\") " pod="openstack/rabbitmq-server-0" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.874178 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8\") " pod="openstack/rabbitmq-server-0" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.878159 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ztdk\" (UniqueName: \"kubernetes.io/projected/f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8-kube-api-access-5ztdk\") pod \"rabbitmq-server-0\" (UID: \"f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8\") " pod="openstack/rabbitmq-server-0" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.889950 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8\") " pod="openstack/rabbitmq-server-0" Jan 22 09:24:35 crc kubenswrapper[4811]: I0122 09:24:35.958405 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 22 09:24:36 crc kubenswrapper[4811]: I0122 09:24:36.003563 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e61948b-1761-46b8-9ab3-e776224f335a" path="/var/lib/kubelet/pods/9e61948b-1761-46b8-9ab3-e776224f335a/volumes" Jan 22 09:24:36 crc kubenswrapper[4811]: I0122 09:24:36.390825 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 09:24:36 crc kubenswrapper[4811]: W0122 09:24:36.395723 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7aa6fa4_4d1f_4243_bd9c_8b9caa013bb8.slice/crio-e10c4498a0dc35a58247de91deb1eb69683e90bf1d8e29a688d56453502d79eb WatchSource:0}: Error finding container e10c4498a0dc35a58247de91deb1eb69683e90bf1d8e29a688d56453502d79eb: Status 404 returned error can't find the container with id e10c4498a0dc35a58247de91deb1eb69683e90bf1d8e29a688d56453502d79eb Jan 22 09:24:36 crc kubenswrapper[4811]: I0122 09:24:36.598985 4811 generic.go:334] "Generic (PLEG): container finished" podID="0241fc5c-fa26-44a1-9db3-006b438b9123" containerID="5a60a75c8f726053514360ec57baed7b88c720089fe931ad5add6184105ab8ff" exitCode=0 Jan 22 09:24:36 crc kubenswrapper[4811]: I0122 09:24:36.599060 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0241fc5c-fa26-44a1-9db3-006b438b9123","Type":"ContainerDied","Data":"5a60a75c8f726053514360ec57baed7b88c720089fe931ad5add6184105ab8ff"} Jan 22 09:24:36 crc kubenswrapper[4811]: I0122 09:24:36.601658 4811 generic.go:334] "Generic (PLEG): container finished" podID="84068a6b-e189-419b-87f5-f31428f6eafe" containerID="48a246c7a0e2d8e856bc2e774a41e4c4a571a73e6dcfe43b9eda16ad78191748" exitCode=0 Jan 22 09:24:36 crc kubenswrapper[4811]: I0122 09:24:36.601707 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" event={"ID":"84068a6b-e189-419b-87f5-f31428f6eafe","Type":"ContainerDied","Data":"48a246c7a0e2d8e856bc2e774a41e4c4a571a73e6dcfe43b9eda16ad78191748"} Jan 22 09:24:36 crc kubenswrapper[4811]: I0122 09:24:36.601734 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" event={"ID":"84068a6b-e189-419b-87f5-f31428f6eafe","Type":"ContainerStarted","Data":"299e4eba2c58f82289fee4b97dbc4b550594fde4ef2f24c1751afd92694d35d2"} Jan 22 09:24:36 crc kubenswrapper[4811]: I0122 09:24:36.601751 4811 scope.go:117] "RemoveContainer" containerID="db7394b1a0d63dd1b71a9c2dafe49c27f24a0ba7e76972ad14dd0e5bca8208b9" Jan 22 09:24:36 crc kubenswrapper[4811]: I0122 09:24:36.604279 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8","Type":"ContainerStarted","Data":"e10c4498a0dc35a58247de91deb1eb69683e90bf1d8e29a688d56453502d79eb"} Jan 22 09:24:36 crc kubenswrapper[4811]: I0122 09:24:36.692835 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:24:36 crc kubenswrapper[4811]: I0122 09:24:36.880992 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0241fc5c-fa26-44a1-9db3-006b438b9123-pod-info\") pod \"0241fc5c-fa26-44a1-9db3-006b438b9123\" (UID: \"0241fc5c-fa26-44a1-9db3-006b438b9123\") " Jan 22 09:24:36 crc kubenswrapper[4811]: I0122 09:24:36.881053 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0241fc5c-fa26-44a1-9db3-006b438b9123-rabbitmq-confd\") pod \"0241fc5c-fa26-44a1-9db3-006b438b9123\" (UID: \"0241fc5c-fa26-44a1-9db3-006b438b9123\") " Jan 22 09:24:36 crc kubenswrapper[4811]: I0122 09:24:36.881134 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8q7tr\" (UniqueName: \"kubernetes.io/projected/0241fc5c-fa26-44a1-9db3-006b438b9123-kube-api-access-8q7tr\") pod \"0241fc5c-fa26-44a1-9db3-006b438b9123\" (UID: \"0241fc5c-fa26-44a1-9db3-006b438b9123\") " Jan 22 09:24:36 crc kubenswrapper[4811]: I0122 09:24:36.881153 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0241fc5c-fa26-44a1-9db3-006b438b9123-config-data\") pod \"0241fc5c-fa26-44a1-9db3-006b438b9123\" (UID: \"0241fc5c-fa26-44a1-9db3-006b438b9123\") " Jan 22 09:24:36 crc kubenswrapper[4811]: I0122 09:24:36.881208 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0241fc5c-fa26-44a1-9db3-006b438b9123-plugins-conf\") pod \"0241fc5c-fa26-44a1-9db3-006b438b9123\" (UID: \"0241fc5c-fa26-44a1-9db3-006b438b9123\") " Jan 22 09:24:36 crc kubenswrapper[4811]: I0122 09:24:36.881239 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0241fc5c-fa26-44a1-9db3-006b438b9123-rabbitmq-plugins\") pod \"0241fc5c-fa26-44a1-9db3-006b438b9123\" (UID: \"0241fc5c-fa26-44a1-9db3-006b438b9123\") " Jan 22 09:24:36 crc kubenswrapper[4811]: I0122 09:24:36.881307 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0241fc5c-fa26-44a1-9db3-006b438b9123-rabbitmq-tls\") pod \"0241fc5c-fa26-44a1-9db3-006b438b9123\" (UID: \"0241fc5c-fa26-44a1-9db3-006b438b9123\") " Jan 22 09:24:36 crc kubenswrapper[4811]: I0122 09:24:36.881355 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0241fc5c-fa26-44a1-9db3-006b438b9123-erlang-cookie-secret\") pod \"0241fc5c-fa26-44a1-9db3-006b438b9123\" (UID: \"0241fc5c-fa26-44a1-9db3-006b438b9123\") " Jan 22 09:24:36 crc kubenswrapper[4811]: I0122 09:24:36.881443 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"0241fc5c-fa26-44a1-9db3-006b438b9123\" (UID: \"0241fc5c-fa26-44a1-9db3-006b438b9123\") " Jan 22 09:24:36 crc kubenswrapper[4811]: I0122 09:24:36.881466 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0241fc5c-fa26-44a1-9db3-006b438b9123-rabbitmq-erlang-cookie\") pod \"0241fc5c-fa26-44a1-9db3-006b438b9123\" (UID: 
\"0241fc5c-fa26-44a1-9db3-006b438b9123\") " Jan 22 09:24:36 crc kubenswrapper[4811]: I0122 09:24:36.881478 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0241fc5c-fa26-44a1-9db3-006b438b9123-server-conf\") pod \"0241fc5c-fa26-44a1-9db3-006b438b9123\" (UID: \"0241fc5c-fa26-44a1-9db3-006b438b9123\") " Jan 22 09:24:36 crc kubenswrapper[4811]: I0122 09:24:36.881973 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0241fc5c-fa26-44a1-9db3-006b438b9123-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "0241fc5c-fa26-44a1-9db3-006b438b9123" (UID: "0241fc5c-fa26-44a1-9db3-006b438b9123"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:24:36 crc kubenswrapper[4811]: I0122 09:24:36.883191 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0241fc5c-fa26-44a1-9db3-006b438b9123-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "0241fc5c-fa26-44a1-9db3-006b438b9123" (UID: "0241fc5c-fa26-44a1-9db3-006b438b9123"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:24:36 crc kubenswrapper[4811]: I0122 09:24:36.888110 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/0241fc5c-fa26-44a1-9db3-006b438b9123-pod-info" (OuterVolumeSpecName: "pod-info") pod "0241fc5c-fa26-44a1-9db3-006b438b9123" (UID: "0241fc5c-fa26-44a1-9db3-006b438b9123"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 22 09:24:36 crc kubenswrapper[4811]: I0122 09:24:36.889189 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0241fc5c-fa26-44a1-9db3-006b438b9123-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "0241fc5c-fa26-44a1-9db3-006b438b9123" (UID: "0241fc5c-fa26-44a1-9db3-006b438b9123"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:24:36 crc kubenswrapper[4811]: I0122 09:24:36.890655 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0241fc5c-fa26-44a1-9db3-006b438b9123-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "0241fc5c-fa26-44a1-9db3-006b438b9123" (UID: "0241fc5c-fa26-44a1-9db3-006b438b9123"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:24:36 crc kubenswrapper[4811]: I0122 09:24:36.890641 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0241fc5c-fa26-44a1-9db3-006b438b9123-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "0241fc5c-fa26-44a1-9db3-006b438b9123" (UID: "0241fc5c-fa26-44a1-9db3-006b438b9123"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:24:36 crc kubenswrapper[4811]: I0122 09:24:36.892764 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0241fc5c-fa26-44a1-9db3-006b438b9123-kube-api-access-8q7tr" (OuterVolumeSpecName: "kube-api-access-8q7tr") pod "0241fc5c-fa26-44a1-9db3-006b438b9123" (UID: "0241fc5c-fa26-44a1-9db3-006b438b9123"). InnerVolumeSpecName "kube-api-access-8q7tr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:24:36 crc kubenswrapper[4811]: I0122 09:24:36.898310 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "persistence") pod "0241fc5c-fa26-44a1-9db3-006b438b9123" (UID: "0241fc5c-fa26-44a1-9db3-006b438b9123"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 22 09:24:36 crc kubenswrapper[4811]: I0122 09:24:36.905784 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0241fc5c-fa26-44a1-9db3-006b438b9123-config-data" (OuterVolumeSpecName: "config-data") pod "0241fc5c-fa26-44a1-9db3-006b438b9123" (UID: "0241fc5c-fa26-44a1-9db3-006b438b9123"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:24:36 crc kubenswrapper[4811]: I0122 09:24:36.928185 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0241fc5c-fa26-44a1-9db3-006b438b9123-server-conf" (OuterVolumeSpecName: "server-conf") pod "0241fc5c-fa26-44a1-9db3-006b438b9123" (UID: "0241fc5c-fa26-44a1-9db3-006b438b9123"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:24:36 crc kubenswrapper[4811]: I0122 09:24:36.983034 4811 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0241fc5c-fa26-44a1-9db3-006b438b9123-pod-info\") on node \"crc\" DevicePath \"\"" Jan 22 09:24:36 crc kubenswrapper[4811]: I0122 09:24:36.983067 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8q7tr\" (UniqueName: \"kubernetes.io/projected/0241fc5c-fa26-44a1-9db3-006b438b9123-kube-api-access-8q7tr\") on node \"crc\" DevicePath \"\"" Jan 22 09:24:36 crc kubenswrapper[4811]: I0122 09:24:36.983078 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0241fc5c-fa26-44a1-9db3-006b438b9123-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:24:36 crc kubenswrapper[4811]: I0122 09:24:36.983087 4811 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0241fc5c-fa26-44a1-9db3-006b438b9123-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 22 09:24:36 crc kubenswrapper[4811]: I0122 09:24:36.983098 4811 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0241fc5c-fa26-44a1-9db3-006b438b9123-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 22 09:24:36 crc kubenswrapper[4811]: I0122 09:24:36.983107 4811 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0241fc5c-fa26-44a1-9db3-006b438b9123-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 22 09:24:36 crc kubenswrapper[4811]: I0122 09:24:36.983114 4811 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0241fc5c-fa26-44a1-9db3-006b438b9123-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 22 09:24:36 crc kubenswrapper[4811]: I0122 09:24:36.983142 4811 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 22 09:24:36 crc kubenswrapper[4811]: I0122 09:24:36.983151 4811 
reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0241fc5c-fa26-44a1-9db3-006b438b9123-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 22 09:24:36 crc kubenswrapper[4811]: I0122 09:24:36.983158 4811 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0241fc5c-fa26-44a1-9db3-006b438b9123-server-conf\") on node \"crc\" DevicePath \"\"" Jan 22 09:24:36 crc kubenswrapper[4811]: I0122 09:24:36.997358 4811 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 22 09:24:37 crc kubenswrapper[4811]: I0122 09:24:37.065884 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0241fc5c-fa26-44a1-9db3-006b438b9123-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "0241fc5c-fa26-44a1-9db3-006b438b9123" (UID: "0241fc5c-fa26-44a1-9db3-006b438b9123"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:24:37 crc kubenswrapper[4811]: I0122 09:24:37.084742 4811 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 22 09:24:37 crc kubenswrapper[4811]: I0122 09:24:37.084768 4811 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0241fc5c-fa26-44a1-9db3-006b438b9123-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 22 09:24:37 crc kubenswrapper[4811]: I0122 09:24:37.613078 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8","Type":"ContainerStarted","Data":"2c34d29977810866db29da86f3b282cc130f556f35ba30c9df40aa7c03f84233"} Jan 22 09:24:37 crc kubenswrapper[4811]: I0122 09:24:37.615410 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0241fc5c-fa26-44a1-9db3-006b438b9123","Type":"ContainerDied","Data":"2796e813866443f7a51b4bb8a2cf7c91cb1efd4134ad227f9111830fdedf185c"} Jan 22 09:24:37 crc kubenswrapper[4811]: I0122 09:24:37.615479 4811 scope.go:117] "RemoveContainer" containerID="5a60a75c8f726053514360ec57baed7b88c720089fe931ad5add6184105ab8ff" Jan 22 09:24:37 crc kubenswrapper[4811]: I0122 09:24:37.615666 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:24:37 crc kubenswrapper[4811]: I0122 09:24:37.651566 4811 scope.go:117] "RemoveContainer" containerID="7a0909ca18cf916fd8bc5743542561fad1746c91028910b1139cffb297ae2099" Jan 22 09:24:37 crc kubenswrapper[4811]: I0122 09:24:37.686976 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 09:24:37 crc kubenswrapper[4811]: I0122 09:24:37.696800 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 09:24:37 crc kubenswrapper[4811]: I0122 09:24:37.708342 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 09:24:37 crc kubenswrapper[4811]: E0122 09:24:37.708740 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0241fc5c-fa26-44a1-9db3-006b438b9123" containerName="setup-container" Jan 22 09:24:37 crc kubenswrapper[4811]: I0122 09:24:37.708764 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="0241fc5c-fa26-44a1-9db3-006b438b9123" containerName="setup-container" Jan 22 09:24:37 crc kubenswrapper[4811]: E0122 09:24:37.708781 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0241fc5c-fa26-44a1-9db3-006b438b9123" containerName="rabbitmq" Jan 22 09:24:37 crc kubenswrapper[4811]: I0122 09:24:37.708789 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="0241fc5c-fa26-44a1-9db3-006b438b9123" containerName="rabbitmq" Jan 22 09:24:37 crc kubenswrapper[4811]: I0122 09:24:37.709043 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="0241fc5c-fa26-44a1-9db3-006b438b9123" containerName="rabbitmq" Jan 22 09:24:37 crc kubenswrapper[4811]: I0122 09:24:37.710194 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:24:37 crc kubenswrapper[4811]: I0122 09:24:37.722410 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 22 09:24:37 crc kubenswrapper[4811]: I0122 09:24:37.722482 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 22 09:24:37 crc kubenswrapper[4811]: I0122 09:24:37.722705 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 22 09:24:37 crc kubenswrapper[4811]: I0122 09:24:37.722769 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 22 09:24:37 crc kubenswrapper[4811]: I0122 09:24:37.722885 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 22 09:24:37 crc kubenswrapper[4811]: I0122 09:24:37.722979 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 22 09:24:37 crc kubenswrapper[4811]: I0122 09:24:37.723446 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-rsthp" Jan 22 09:24:37 crc kubenswrapper[4811]: I0122 09:24:37.725448 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 09:24:37 crc kubenswrapper[4811]: I0122 09:24:37.904575 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a2ce439e-8652-40cb-9d5d-90913d18bea1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2ce439e-8652-40cb-9d5d-90913d18bea1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:24:37 crc kubenswrapper[4811]: I0122 09:24:37.904940 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2ce439e-8652-40cb-9d5d-90913d18bea1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:24:37 crc kubenswrapper[4811]: I0122 09:24:37.905015 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a2ce439e-8652-40cb-9d5d-90913d18bea1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2ce439e-8652-40cb-9d5d-90913d18bea1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:24:37 crc kubenswrapper[4811]: I0122 09:24:37.905078 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a2ce439e-8652-40cb-9d5d-90913d18bea1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2ce439e-8652-40cb-9d5d-90913d18bea1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:24:37 crc kubenswrapper[4811]: I0122 09:24:37.905096 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbsfd\" (UniqueName: \"kubernetes.io/projected/a2ce439e-8652-40cb-9d5d-90913d18bea1-kube-api-access-cbsfd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2ce439e-8652-40cb-9d5d-90913d18bea1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:24:37 crc kubenswrapper[4811]: I0122 09:24:37.905160 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a2ce439e-8652-40cb-9d5d-90913d18bea1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2ce439e-8652-40cb-9d5d-90913d18bea1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:24:37 crc kubenswrapper[4811]: I0122 09:24:37.905278 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a2ce439e-8652-40cb-9d5d-90913d18bea1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2ce439e-8652-40cb-9d5d-90913d18bea1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:24:37 crc kubenswrapper[4811]: I0122 09:24:37.905323 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a2ce439e-8652-40cb-9d5d-90913d18bea1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2ce439e-8652-40cb-9d5d-90913d18bea1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:24:37 crc kubenswrapper[4811]: I0122 09:24:37.905348 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a2ce439e-8652-40cb-9d5d-90913d18bea1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2ce439e-8652-40cb-9d5d-90913d18bea1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:24:37 crc kubenswrapper[4811]: I0122 09:24:37.905365 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a2ce439e-8652-40cb-9d5d-90913d18bea1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2ce439e-8652-40cb-9d5d-90913d18bea1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:24:37 crc kubenswrapper[4811]: I0122 09:24:37.905384 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a2ce439e-8652-40cb-9d5d-90913d18bea1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2ce439e-8652-40cb-9d5d-90913d18bea1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.000611 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0241fc5c-fa26-44a1-9db3-006b438b9123" path="/var/lib/kubelet/pods/0241fc5c-fa26-44a1-9db3-006b438b9123/volumes" Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.006914 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a2ce439e-8652-40cb-9d5d-90913d18bea1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2ce439e-8652-40cb-9d5d-90913d18bea1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.006952 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbsfd\" (UniqueName: \"kubernetes.io/projected/a2ce439e-8652-40cb-9d5d-90913d18bea1-kube-api-access-cbsfd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2ce439e-8652-40cb-9d5d-90913d18bea1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.007010 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/a2ce439e-8652-40cb-9d5d-90913d18bea1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2ce439e-8652-40cb-9d5d-90913d18bea1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.007066 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a2ce439e-8652-40cb-9d5d-90913d18bea1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2ce439e-8652-40cb-9d5d-90913d18bea1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.007087 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a2ce439e-8652-40cb-9d5d-90913d18bea1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2ce439e-8652-40cb-9d5d-90913d18bea1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.007111 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a2ce439e-8652-40cb-9d5d-90913d18bea1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2ce439e-8652-40cb-9d5d-90913d18bea1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.007131 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a2ce439e-8652-40cb-9d5d-90913d18bea1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2ce439e-8652-40cb-9d5d-90913d18bea1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.007147 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a2ce439e-8652-40cb-9d5d-90913d18bea1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2ce439e-8652-40cb-9d5d-90913d18bea1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.007166 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a2ce439e-8652-40cb-9d5d-90913d18bea1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2ce439e-8652-40cb-9d5d-90913d18bea1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.007184 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2ce439e-8652-40cb-9d5d-90913d18bea1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.007233 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a2ce439e-8652-40cb-9d5d-90913d18bea1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2ce439e-8652-40cb-9d5d-90913d18bea1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.008156 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a2ce439e-8652-40cb-9d5d-90913d18bea1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2ce439e-8652-40cb-9d5d-90913d18bea1\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.008374 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a2ce439e-8652-40cb-9d5d-90913d18bea1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2ce439e-8652-40cb-9d5d-90913d18bea1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.008485 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a2ce439e-8652-40cb-9d5d-90913d18bea1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2ce439e-8652-40cb-9d5d-90913d18bea1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.008670 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2ce439e-8652-40cb-9d5d-90913d18bea1\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.008780 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a2ce439e-8652-40cb-9d5d-90913d18bea1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2ce439e-8652-40cb-9d5d-90913d18bea1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.009165 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a2ce439e-8652-40cb-9d5d-90913d18bea1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2ce439e-8652-40cb-9d5d-90913d18bea1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.013022 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a2ce439e-8652-40cb-9d5d-90913d18bea1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2ce439e-8652-40cb-9d5d-90913d18bea1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.013738 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a2ce439e-8652-40cb-9d5d-90913d18bea1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2ce439e-8652-40cb-9d5d-90913d18bea1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.019724 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a2ce439e-8652-40cb-9d5d-90913d18bea1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2ce439e-8652-40cb-9d5d-90913d18bea1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.021928 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbsfd\" (UniqueName: \"kubernetes.io/projected/a2ce439e-8652-40cb-9d5d-90913d18bea1-kube-api-access-cbsfd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2ce439e-8652-40cb-9d5d-90913d18bea1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.021994 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a2ce439e-8652-40cb-9d5d-90913d18bea1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2ce439e-8652-40cb-9d5d-90913d18bea1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.031034 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a2ce439e-8652-40cb-9d5d-90913d18bea1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.037305 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.240745 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74d67684c-wf22p"] Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.243078 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74d67684c-wf22p" Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.249843 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.260363 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74d67684c-wf22p"] Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.419208 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4562dbc0-d12e-4a25-ac30-fefa38448372-ovsdbserver-nb\") pod \"dnsmasq-dns-74d67684c-wf22p\" (UID: \"4562dbc0-d12e-4a25-ac30-fefa38448372\") " pod="openstack/dnsmasq-dns-74d67684c-wf22p" Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.419306 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4562dbc0-d12e-4a25-ac30-fefa38448372-ovsdbserver-sb\") pod \"dnsmasq-dns-74d67684c-wf22p\" (UID: \"4562dbc0-d12e-4a25-ac30-fefa38448372\") " pod="openstack/dnsmasq-dns-74d67684c-wf22p" Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.419541 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq8lq\" (UniqueName: \"kubernetes.io/projected/4562dbc0-d12e-4a25-ac30-fefa38448372-kube-api-access-bq8lq\") pod \"dnsmasq-dns-74d67684c-wf22p\" (UID: \"4562dbc0-d12e-4a25-ac30-fefa38448372\") " pod="openstack/dnsmasq-dns-74d67684c-wf22p" Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.419732 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4562dbc0-d12e-4a25-ac30-fefa38448372-config\") pod \"dnsmasq-dns-74d67684c-wf22p\" (UID: \"4562dbc0-d12e-4a25-ac30-fefa38448372\") " pod="openstack/dnsmasq-dns-74d67684c-wf22p" Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.419948 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4562dbc0-d12e-4a25-ac30-fefa38448372-openstack-edpm-ipam\") pod \"dnsmasq-dns-74d67684c-wf22p\" (UID: \"4562dbc0-d12e-4a25-ac30-fefa38448372\") " pod="openstack/dnsmasq-dns-74d67684c-wf22p" Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.420112 4811 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4562dbc0-d12e-4a25-ac30-fefa38448372-dns-svc\") pod \"dnsmasq-dns-74d67684c-wf22p\" (UID: \"4562dbc0-d12e-4a25-ac30-fefa38448372\") " pod="openstack/dnsmasq-dns-74d67684c-wf22p" Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.442642 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.522562 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4562dbc0-d12e-4a25-ac30-fefa38448372-ovsdbserver-sb\") pod \"dnsmasq-dns-74d67684c-wf22p\" (UID: \"4562dbc0-d12e-4a25-ac30-fefa38448372\") " pod="openstack/dnsmasq-dns-74d67684c-wf22p" Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.522820 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq8lq\" (UniqueName: \"kubernetes.io/projected/4562dbc0-d12e-4a25-ac30-fefa38448372-kube-api-access-bq8lq\") pod \"dnsmasq-dns-74d67684c-wf22p\" (UID: \"4562dbc0-d12e-4a25-ac30-fefa38448372\") " pod="openstack/dnsmasq-dns-74d67684c-wf22p" Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.522962 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4562dbc0-d12e-4a25-ac30-fefa38448372-config\") pod \"dnsmasq-dns-74d67684c-wf22p\" (UID: \"4562dbc0-d12e-4a25-ac30-fefa38448372\") " pod="openstack/dnsmasq-dns-74d67684c-wf22p" Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.523058 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4562dbc0-d12e-4a25-ac30-fefa38448372-openstack-edpm-ipam\") pod \"dnsmasq-dns-74d67684c-wf22p\" (UID: \"4562dbc0-d12e-4a25-ac30-fefa38448372\") " pod="openstack/dnsmasq-dns-74d67684c-wf22p" Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.523118 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4562dbc0-d12e-4a25-ac30-fefa38448372-dns-svc\") pod \"dnsmasq-dns-74d67684c-wf22p\" (UID: \"4562dbc0-d12e-4a25-ac30-fefa38448372\") " pod="openstack/dnsmasq-dns-74d67684c-wf22p" Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.523318 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4562dbc0-d12e-4a25-ac30-fefa38448372-ovsdbserver-nb\") pod \"dnsmasq-dns-74d67684c-wf22p\" (UID: \"4562dbc0-d12e-4a25-ac30-fefa38448372\") " pod="openstack/dnsmasq-dns-74d67684c-wf22p" Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.523511 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4562dbc0-d12e-4a25-ac30-fefa38448372-ovsdbserver-sb\") pod \"dnsmasq-dns-74d67684c-wf22p\" (UID: \"4562dbc0-d12e-4a25-ac30-fefa38448372\") " pod="openstack/dnsmasq-dns-74d67684c-wf22p" Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.524226 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4562dbc0-d12e-4a25-ac30-fefa38448372-config\") pod \"dnsmasq-dns-74d67684c-wf22p\" (UID: \"4562dbc0-d12e-4a25-ac30-fefa38448372\") " pod="openstack/dnsmasq-dns-74d67684c-wf22p" Jan 22 
09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.524500 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4562dbc0-d12e-4a25-ac30-fefa38448372-openstack-edpm-ipam\") pod \"dnsmasq-dns-74d67684c-wf22p\" (UID: \"4562dbc0-d12e-4a25-ac30-fefa38448372\") " pod="openstack/dnsmasq-dns-74d67684c-wf22p" Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.524698 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4562dbc0-d12e-4a25-ac30-fefa38448372-ovsdbserver-nb\") pod \"dnsmasq-dns-74d67684c-wf22p\" (UID: \"4562dbc0-d12e-4a25-ac30-fefa38448372\") " pod="openstack/dnsmasq-dns-74d67684c-wf22p" Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.524769 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4562dbc0-d12e-4a25-ac30-fefa38448372-dns-svc\") pod \"dnsmasq-dns-74d67684c-wf22p\" (UID: \"4562dbc0-d12e-4a25-ac30-fefa38448372\") " pod="openstack/dnsmasq-dns-74d67684c-wf22p" Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.539976 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq8lq\" (UniqueName: \"kubernetes.io/projected/4562dbc0-d12e-4a25-ac30-fefa38448372-kube-api-access-bq8lq\") pod \"dnsmasq-dns-74d67684c-wf22p\" (UID: \"4562dbc0-d12e-4a25-ac30-fefa38448372\") " pod="openstack/dnsmasq-dns-74d67684c-wf22p" Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.570710 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74d67684c-wf22p" Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.648508 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a2ce439e-8652-40cb-9d5d-90913d18bea1","Type":"ContainerStarted","Data":"393898bcd7b18ece11eccfb590eb7c3cfbfc7c5ff279066a3f098a753ea6b813"} Jan 22 09:24:38 crc kubenswrapper[4811]: I0122 09:24:38.988693 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74d67684c-wf22p"] Jan 22 09:24:39 crc kubenswrapper[4811]: I0122 09:24:39.658407 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a2ce439e-8652-40cb-9d5d-90913d18bea1","Type":"ContainerStarted","Data":"df31abefc92fec7d012c11b3fa03932723865448339449630d6499d581d7e8e4"} Jan 22 09:24:39 crc kubenswrapper[4811]: I0122 09:24:39.660854 4811 generic.go:334] "Generic (PLEG): container finished" podID="4562dbc0-d12e-4a25-ac30-fefa38448372" containerID="ab5d5b952d0c5c06edd8d5e01e21bfc07ef3d68ed726c6199d4fa746fee867cf" exitCode=0 Jan 22 09:24:39 crc kubenswrapper[4811]: I0122 09:24:39.660940 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74d67684c-wf22p" event={"ID":"4562dbc0-d12e-4a25-ac30-fefa38448372","Type":"ContainerDied","Data":"ab5d5b952d0c5c06edd8d5e01e21bfc07ef3d68ed726c6199d4fa746fee867cf"} Jan 22 09:24:39 crc kubenswrapper[4811]: I0122 09:24:39.661035 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74d67684c-wf22p" event={"ID":"4562dbc0-d12e-4a25-ac30-fefa38448372","Type":"ContainerStarted","Data":"4ce4083581d8608ed5f4102e6b1b5e080c76274b4b6e413677ca43b1ebf777a1"} Jan 22 09:24:40 crc kubenswrapper[4811]: I0122 09:24:40.672675 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74d67684c-wf22p" 
event={"ID":"4562dbc0-d12e-4a25-ac30-fefa38448372","Type":"ContainerStarted","Data":"f68389b5c8894c6aa681e615ff285d760729768f3624828feb9b4b5d28a0fce4"} Jan 22 09:24:40 crc kubenswrapper[4811]: I0122 09:24:40.691101 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74d67684c-wf22p" podStartSLOduration=2.691081364 podStartE2EDuration="2.691081364s" podCreationTimestamp="2026-01-22 09:24:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:24:40.690780446 +0000 UTC m=+1125.012967569" watchObservedRunningTime="2026-01-22 09:24:40.691081364 +0000 UTC m=+1125.013268487" Jan 22 09:24:41 crc kubenswrapper[4811]: I0122 09:24:41.680117 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74d67684c-wf22p" Jan 22 09:24:48 crc kubenswrapper[4811]: I0122 09:24:48.571849 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74d67684c-wf22p" Jan 22 09:24:48 crc kubenswrapper[4811]: I0122 09:24:48.627209 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78c596d7cf-qxrcp"] Jan 22 09:24:48 crc kubenswrapper[4811]: I0122 09:24:48.627475 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-78c596d7cf-qxrcp" podUID="341dafad-c519-4c1d-a8ea-8de3df06709e" containerName="dnsmasq-dns" containerID="cri-o://860fb2da7bda7a58eac3af72d1c3d585ebfac063fc94ac5a5259caff333dfe9a" gracePeriod=10 Jan 22 09:24:48 crc kubenswrapper[4811]: I0122 09:24:48.824874 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-567fc67579-wl9zs"] Jan 22 09:24:48 crc kubenswrapper[4811]: I0122 09:24:48.826259 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-567fc67579-wl9zs" Jan 22 09:24:48 crc kubenswrapper[4811]: I0122 09:24:48.838838 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-567fc67579-wl9zs"] Jan 22 09:24:48 crc kubenswrapper[4811]: I0122 09:24:48.916176 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlm6n\" (UniqueName: \"kubernetes.io/projected/f8f61363-c6ef-4f43-86f2-4fe8068d6894-kube-api-access-rlm6n\") pod \"dnsmasq-dns-567fc67579-wl9zs\" (UID: \"f8f61363-c6ef-4f43-86f2-4fe8068d6894\") " pod="openstack/dnsmasq-dns-567fc67579-wl9zs" Jan 22 09:24:48 crc kubenswrapper[4811]: I0122 09:24:48.916235 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8f61363-c6ef-4f43-86f2-4fe8068d6894-ovsdbserver-nb\") pod \"dnsmasq-dns-567fc67579-wl9zs\" (UID: \"f8f61363-c6ef-4f43-86f2-4fe8068d6894\") " pod="openstack/dnsmasq-dns-567fc67579-wl9zs" Jan 22 09:24:48 crc kubenswrapper[4811]: I0122 09:24:48.916261 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8f61363-c6ef-4f43-86f2-4fe8068d6894-dns-svc\") pod \"dnsmasq-dns-567fc67579-wl9zs\" (UID: \"f8f61363-c6ef-4f43-86f2-4fe8068d6894\") " pod="openstack/dnsmasq-dns-567fc67579-wl9zs" Jan 22 09:24:48 crc kubenswrapper[4811]: I0122 09:24:48.916319 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f8f61363-c6ef-4f43-86f2-4fe8068d6894-ovsdbserver-sb\") pod \"dnsmasq-dns-567fc67579-wl9zs\" (UID: \"f8f61363-c6ef-4f43-86f2-4fe8068d6894\") " pod="openstack/dnsmasq-dns-567fc67579-wl9zs" Jan 22 09:24:48 crc kubenswrapper[4811]: I0122 09:24:48.916364 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f8f61363-c6ef-4f43-86f2-4fe8068d6894-openstack-edpm-ipam\") pod \"dnsmasq-dns-567fc67579-wl9zs\" (UID: \"f8f61363-c6ef-4f43-86f2-4fe8068d6894\") " pod="openstack/dnsmasq-dns-567fc67579-wl9zs" Jan 22 09:24:48 crc kubenswrapper[4811]: I0122 09:24:48.916400 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8f61363-c6ef-4f43-86f2-4fe8068d6894-config\") pod \"dnsmasq-dns-567fc67579-wl9zs\" (UID: \"f8f61363-c6ef-4f43-86f2-4fe8068d6894\") " pod="openstack/dnsmasq-dns-567fc67579-wl9zs" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.017767 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlm6n\" (UniqueName: \"kubernetes.io/projected/f8f61363-c6ef-4f43-86f2-4fe8068d6894-kube-api-access-rlm6n\") pod \"dnsmasq-dns-567fc67579-wl9zs\" (UID: \"f8f61363-c6ef-4f43-86f2-4fe8068d6894\") " pod="openstack/dnsmasq-dns-567fc67579-wl9zs" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.018072 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8f61363-c6ef-4f43-86f2-4fe8068d6894-ovsdbserver-nb\") pod \"dnsmasq-dns-567fc67579-wl9zs\" (UID: \"f8f61363-c6ef-4f43-86f2-4fe8068d6894\") " pod="openstack/dnsmasq-dns-567fc67579-wl9zs" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.018094 4811 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8f61363-c6ef-4f43-86f2-4fe8068d6894-dns-svc\") pod \"dnsmasq-dns-567fc67579-wl9zs\" (UID: \"f8f61363-c6ef-4f43-86f2-4fe8068d6894\") " pod="openstack/dnsmasq-dns-567fc67579-wl9zs" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.018131 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f8f61363-c6ef-4f43-86f2-4fe8068d6894-ovsdbserver-sb\") pod \"dnsmasq-dns-567fc67579-wl9zs\" (UID: \"f8f61363-c6ef-4f43-86f2-4fe8068d6894\") " pod="openstack/dnsmasq-dns-567fc67579-wl9zs" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.018160 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f8f61363-c6ef-4f43-86f2-4fe8068d6894-openstack-edpm-ipam\") pod \"dnsmasq-dns-567fc67579-wl9zs\" (UID: \"f8f61363-c6ef-4f43-86f2-4fe8068d6894\") " pod="openstack/dnsmasq-dns-567fc67579-wl9zs" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.018184 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8f61363-c6ef-4f43-86f2-4fe8068d6894-config\") pod \"dnsmasq-dns-567fc67579-wl9zs\" (UID: \"f8f61363-c6ef-4f43-86f2-4fe8068d6894\") " pod="openstack/dnsmasq-dns-567fc67579-wl9zs" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.018984 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f8f61363-c6ef-4f43-86f2-4fe8068d6894-ovsdbserver-sb\") pod \"dnsmasq-dns-567fc67579-wl9zs\" (UID: \"f8f61363-c6ef-4f43-86f2-4fe8068d6894\") " pod="openstack/dnsmasq-dns-567fc67579-wl9zs" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.018994 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8f61363-c6ef-4f43-86f2-4fe8068d6894-dns-svc\") pod \"dnsmasq-dns-567fc67579-wl9zs\" (UID: \"f8f61363-c6ef-4f43-86f2-4fe8068d6894\") " pod="openstack/dnsmasq-dns-567fc67579-wl9zs" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.019521 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8f61363-c6ef-4f43-86f2-4fe8068d6894-config\") pod \"dnsmasq-dns-567fc67579-wl9zs\" (UID: \"f8f61363-c6ef-4f43-86f2-4fe8068d6894\") " pod="openstack/dnsmasq-dns-567fc67579-wl9zs" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.019760 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8f61363-c6ef-4f43-86f2-4fe8068d6894-ovsdbserver-nb\") pod \"dnsmasq-dns-567fc67579-wl9zs\" (UID: \"f8f61363-c6ef-4f43-86f2-4fe8068d6894\") " pod="openstack/dnsmasq-dns-567fc67579-wl9zs" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.020141 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f8f61363-c6ef-4f43-86f2-4fe8068d6894-openstack-edpm-ipam\") pod \"dnsmasq-dns-567fc67579-wl9zs\" (UID: \"f8f61363-c6ef-4f43-86f2-4fe8068d6894\") " pod="openstack/dnsmasq-dns-567fc67579-wl9zs" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.047354 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlm6n\" (UniqueName: 
\"kubernetes.io/projected/f8f61363-c6ef-4f43-86f2-4fe8068d6894-kube-api-access-rlm6n\") pod \"dnsmasq-dns-567fc67579-wl9zs\" (UID: \"f8f61363-c6ef-4f43-86f2-4fe8068d6894\") " pod="openstack/dnsmasq-dns-567fc67579-wl9zs" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.155259 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-567fc67579-wl9zs" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.250157 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78c596d7cf-qxrcp" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.426277 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j27s2\" (UniqueName: \"kubernetes.io/projected/341dafad-c519-4c1d-a8ea-8de3df06709e-kube-api-access-j27s2\") pod \"341dafad-c519-4c1d-a8ea-8de3df06709e\" (UID: \"341dafad-c519-4c1d-a8ea-8de3df06709e\") " Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.426534 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/341dafad-c519-4c1d-a8ea-8de3df06709e-dns-svc\") pod \"341dafad-c519-4c1d-a8ea-8de3df06709e\" (UID: \"341dafad-c519-4c1d-a8ea-8de3df06709e\") " Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.426590 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/341dafad-c519-4c1d-a8ea-8de3df06709e-config\") pod \"341dafad-c519-4c1d-a8ea-8de3df06709e\" (UID: \"341dafad-c519-4c1d-a8ea-8de3df06709e\") " Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.426650 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/341dafad-c519-4c1d-a8ea-8de3df06709e-ovsdbserver-nb\") pod \"341dafad-c519-4c1d-a8ea-8de3df06709e\" (UID: \"341dafad-c519-4c1d-a8ea-8de3df06709e\") " Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.426739 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/341dafad-c519-4c1d-a8ea-8de3df06709e-ovsdbserver-sb\") pod \"341dafad-c519-4c1d-a8ea-8de3df06709e\" (UID: \"341dafad-c519-4c1d-a8ea-8de3df06709e\") " Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.433370 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/341dafad-c519-4c1d-a8ea-8de3df06709e-kube-api-access-j27s2" (OuterVolumeSpecName: "kube-api-access-j27s2") pod "341dafad-c519-4c1d-a8ea-8de3df06709e" (UID: "341dafad-c519-4c1d-a8ea-8de3df06709e"). InnerVolumeSpecName "kube-api-access-j27s2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.465167 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/341dafad-c519-4c1d-a8ea-8de3df06709e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "341dafad-c519-4c1d-a8ea-8de3df06709e" (UID: "341dafad-c519-4c1d-a8ea-8de3df06709e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.473477 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/341dafad-c519-4c1d-a8ea-8de3df06709e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "341dafad-c519-4c1d-a8ea-8de3df06709e" (UID: "341dafad-c519-4c1d-a8ea-8de3df06709e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.474643 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/341dafad-c519-4c1d-a8ea-8de3df06709e-config" (OuterVolumeSpecName: "config") pod "341dafad-c519-4c1d-a8ea-8de3df06709e" (UID: "341dafad-c519-4c1d-a8ea-8de3df06709e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.475473 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/341dafad-c519-4c1d-a8ea-8de3df06709e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "341dafad-c519-4c1d-a8ea-8de3df06709e" (UID: "341dafad-c519-4c1d-a8ea-8de3df06709e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.529033 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/341dafad-c519-4c1d-a8ea-8de3df06709e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.529139 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j27s2\" (UniqueName: \"kubernetes.io/projected/341dafad-c519-4c1d-a8ea-8de3df06709e-kube-api-access-j27s2\") on node \"crc\" DevicePath \"\"" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.529200 4811 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/341dafad-c519-4c1d-a8ea-8de3df06709e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.529257 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/341dafad-c519-4c1d-a8ea-8de3df06709e-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.529319 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/341dafad-c519-4c1d-a8ea-8de3df06709e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.566113 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-567fc67579-wl9zs"] Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.734484 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fhg2v"] Jan 22 09:24:49 crc kubenswrapper[4811]: E0122 09:24:49.734809 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="341dafad-c519-4c1d-a8ea-8de3df06709e" containerName="init" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.734823 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="341dafad-c519-4c1d-a8ea-8de3df06709e" containerName="init" Jan 22 09:24:49 crc kubenswrapper[4811]: E0122 09:24:49.734852 4811 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="341dafad-c519-4c1d-a8ea-8de3df06709e" containerName="dnsmasq-dns" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.734858 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="341dafad-c519-4c1d-a8ea-8de3df06709e" containerName="dnsmasq-dns" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.735001 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="341dafad-c519-4c1d-a8ea-8de3df06709e" containerName="dnsmasq-dns" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.735468 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fhg2v" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.744146 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.744408 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.744432 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7cbs6" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.745037 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.745990 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fhg2v"] Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.765488 4811 generic.go:334] "Generic (PLEG): container finished" podID="341dafad-c519-4c1d-a8ea-8de3df06709e" containerID="860fb2da7bda7a58eac3af72d1c3d585ebfac063fc94ac5a5259caff333dfe9a" exitCode=0 Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.765567 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c596d7cf-qxrcp" event={"ID":"341dafad-c519-4c1d-a8ea-8de3df06709e","Type":"ContainerDied","Data":"860fb2da7bda7a58eac3af72d1c3d585ebfac063fc94ac5a5259caff333dfe9a"} Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.765595 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c596d7cf-qxrcp" event={"ID":"341dafad-c519-4c1d-a8ea-8de3df06709e","Type":"ContainerDied","Data":"344d165ce621404c7b5cd025b503bce62c4b14556ea10f347da344e69054f69e"} Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.765607 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78c596d7cf-qxrcp" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.765612 4811 scope.go:117] "RemoveContainer" containerID="860fb2da7bda7a58eac3af72d1c3d585ebfac063fc94ac5a5259caff333dfe9a" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.767890 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-567fc67579-wl9zs" event={"ID":"f8f61363-c6ef-4f43-86f2-4fe8068d6894","Type":"ContainerStarted","Data":"ab4ea92138bb16ad219a34e3802998408118da9121d1dbb3b4f97226861b5858"} Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.767944 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-567fc67579-wl9zs" event={"ID":"f8f61363-c6ef-4f43-86f2-4fe8068d6894","Type":"ContainerStarted","Data":"e87c69c7e0715c1106a6fb5db32bfbe1af924b1eb96881c8fc6b8a9eb3cae52d"} Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.798779 4811 scope.go:117] "RemoveContainer" containerID="84a850421e27d37426b2930d4de0238c1c954499a80e21e7d1aa56891fbc4681" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.833613 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c178cc43-e3bb-452e-8ed3-7a4b6a9b9006-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fhg2v\" (UID: \"c178cc43-e3bb-452e-8ed3-7a4b6a9b9006\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fhg2v" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.833812 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c178cc43-e3bb-452e-8ed3-7a4b6a9b9006-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fhg2v\" (UID: \"c178cc43-e3bb-452e-8ed3-7a4b6a9b9006\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fhg2v" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.833882 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c178cc43-e3bb-452e-8ed3-7a4b6a9b9006-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fhg2v\" (UID: \"c178cc43-e3bb-452e-8ed3-7a4b6a9b9006\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fhg2v" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.833910 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6hc4\" (UniqueName: \"kubernetes.io/projected/c178cc43-e3bb-452e-8ed3-7a4b6a9b9006-kube-api-access-b6hc4\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fhg2v\" (UID: \"c178cc43-e3bb-452e-8ed3-7a4b6a9b9006\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fhg2v" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.851555 4811 scope.go:117] "RemoveContainer" containerID="860fb2da7bda7a58eac3af72d1c3d585ebfac063fc94ac5a5259caff333dfe9a" Jan 22 09:24:49 crc kubenswrapper[4811]: E0122 09:24:49.852033 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"860fb2da7bda7a58eac3af72d1c3d585ebfac063fc94ac5a5259caff333dfe9a\": container with ID starting with 860fb2da7bda7a58eac3af72d1c3d585ebfac063fc94ac5a5259caff333dfe9a not found: ID does not exist" 
containerID="860fb2da7bda7a58eac3af72d1c3d585ebfac063fc94ac5a5259caff333dfe9a" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.852075 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"860fb2da7bda7a58eac3af72d1c3d585ebfac063fc94ac5a5259caff333dfe9a"} err="failed to get container status \"860fb2da7bda7a58eac3af72d1c3d585ebfac063fc94ac5a5259caff333dfe9a\": rpc error: code = NotFound desc = could not find container \"860fb2da7bda7a58eac3af72d1c3d585ebfac063fc94ac5a5259caff333dfe9a\": container with ID starting with 860fb2da7bda7a58eac3af72d1c3d585ebfac063fc94ac5a5259caff333dfe9a not found: ID does not exist" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.852101 4811 scope.go:117] "RemoveContainer" containerID="84a850421e27d37426b2930d4de0238c1c954499a80e21e7d1aa56891fbc4681" Jan 22 09:24:49 crc kubenswrapper[4811]: E0122 09:24:49.852442 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84a850421e27d37426b2930d4de0238c1c954499a80e21e7d1aa56891fbc4681\": container with ID starting with 84a850421e27d37426b2930d4de0238c1c954499a80e21e7d1aa56891fbc4681 not found: ID does not exist" containerID="84a850421e27d37426b2930d4de0238c1c954499a80e21e7d1aa56891fbc4681" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.852473 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84a850421e27d37426b2930d4de0238c1c954499a80e21e7d1aa56891fbc4681"} err="failed to get container status \"84a850421e27d37426b2930d4de0238c1c954499a80e21e7d1aa56891fbc4681\": rpc error: code = NotFound desc = could not find container \"84a850421e27d37426b2930d4de0238c1c954499a80e21e7d1aa56891fbc4681\": container with ID starting with 84a850421e27d37426b2930d4de0238c1c954499a80e21e7d1aa56891fbc4681 not found: ID does not exist" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.857603 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78c596d7cf-qxrcp"] Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.863332 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78c596d7cf-qxrcp"] Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.935398 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c178cc43-e3bb-452e-8ed3-7a4b6a9b9006-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fhg2v\" (UID: \"c178cc43-e3bb-452e-8ed3-7a4b6a9b9006\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fhg2v" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.936614 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c178cc43-e3bb-452e-8ed3-7a4b6a9b9006-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fhg2v\" (UID: \"c178cc43-e3bb-452e-8ed3-7a4b6a9b9006\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fhg2v" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.936777 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c178cc43-e3bb-452e-8ed3-7a4b6a9b9006-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fhg2v\" (UID: \"c178cc43-e3bb-452e-8ed3-7a4b6a9b9006\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fhg2v" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.936851 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6hc4\" (UniqueName: \"kubernetes.io/projected/c178cc43-e3bb-452e-8ed3-7a4b6a9b9006-kube-api-access-b6hc4\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fhg2v\" (UID: \"c178cc43-e3bb-452e-8ed3-7a4b6a9b9006\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fhg2v" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.939347 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c178cc43-e3bb-452e-8ed3-7a4b6a9b9006-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fhg2v\" (UID: \"c178cc43-e3bb-452e-8ed3-7a4b6a9b9006\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fhg2v" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.939653 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c178cc43-e3bb-452e-8ed3-7a4b6a9b9006-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fhg2v\" (UID: \"c178cc43-e3bb-452e-8ed3-7a4b6a9b9006\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fhg2v" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.944496 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c178cc43-e3bb-452e-8ed3-7a4b6a9b9006-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fhg2v\" (UID: \"c178cc43-e3bb-452e-8ed3-7a4b6a9b9006\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fhg2v" Jan 22 09:24:49 crc kubenswrapper[4811]: I0122 09:24:49.950797 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6hc4\" (UniqueName: \"kubernetes.io/projected/c178cc43-e3bb-452e-8ed3-7a4b6a9b9006-kube-api-access-b6hc4\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fhg2v\" (UID: \"c178cc43-e3bb-452e-8ed3-7a4b6a9b9006\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fhg2v" Jan 22 09:24:50 crc kubenswrapper[4811]: I0122 09:24:50.001807 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="341dafad-c519-4c1d-a8ea-8de3df06709e" path="/var/lib/kubelet/pods/341dafad-c519-4c1d-a8ea-8de3df06709e/volumes" Jan 22 09:24:50 crc kubenswrapper[4811]: I0122 09:24:50.171568 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fhg2v" Jan 22 09:24:50 crc kubenswrapper[4811]: I0122 09:24:50.638775 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fhg2v"] Jan 22 09:24:50 crc kubenswrapper[4811]: I0122 09:24:50.648364 4811 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 09:24:50 crc kubenswrapper[4811]: I0122 09:24:50.776826 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fhg2v" event={"ID":"c178cc43-e3bb-452e-8ed3-7a4b6a9b9006","Type":"ContainerStarted","Data":"78a98f92e3762ea198b04a72acada01a4847b85e32853f5c5db8b03146605e34"} Jan 22 09:24:50 crc kubenswrapper[4811]: I0122 09:24:50.797753 4811 generic.go:334] "Generic (PLEG): container finished" podID="f8f61363-c6ef-4f43-86f2-4fe8068d6894" containerID="ab4ea92138bb16ad219a34e3802998408118da9121d1dbb3b4f97226861b5858" exitCode=0 Jan 22 09:24:50 crc kubenswrapper[4811]: I0122 09:24:50.797919 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-567fc67579-wl9zs" event={"ID":"f8f61363-c6ef-4f43-86f2-4fe8068d6894","Type":"ContainerDied","Data":"ab4ea92138bb16ad219a34e3802998408118da9121d1dbb3b4f97226861b5858"} Jan 22 09:24:51 crc kubenswrapper[4811]: I0122 09:24:51.808533 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-567fc67579-wl9zs" event={"ID":"f8f61363-c6ef-4f43-86f2-4fe8068d6894","Type":"ContainerStarted","Data":"fa6523d23ce8f89d599e37f791aceab652c82188a35ffd67527888dec76ccd9a"} Jan 22 09:24:51 crc kubenswrapper[4811]: I0122 09:24:51.808814 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-567fc67579-wl9zs" Jan 22 09:24:51 crc kubenswrapper[4811]: I0122 09:24:51.825237 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-567fc67579-wl9zs" podStartSLOduration=3.825221543 podStartE2EDuration="3.825221543s" podCreationTimestamp="2026-01-22 09:24:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:24:51.820755712 +0000 UTC m=+1136.142942835" watchObservedRunningTime="2026-01-22 09:24:51.825221543 +0000 UTC m=+1136.147408667" Jan 22 09:24:58 crc kubenswrapper[4811]: I0122 09:24:58.382063 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 09:24:58 crc kubenswrapper[4811]: I0122 09:24:58.870481 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fhg2v" event={"ID":"c178cc43-e3bb-452e-8ed3-7a4b6a9b9006","Type":"ContainerStarted","Data":"94765f38a5f88220d82b00f33e2f296d153150a8d1f4c3fd98693ec5b266e414"} Jan 22 09:24:58 crc kubenswrapper[4811]: I0122 09:24:58.889653 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fhg2v" podStartSLOduration=2.157836855 podStartE2EDuration="9.889640795s" podCreationTimestamp="2026-01-22 09:24:49 +0000 UTC" firstStartedPulling="2026-01-22 09:24:50.648140791 +0000 UTC m=+1134.970327915" lastFinishedPulling="2026-01-22 09:24:58.379944732 +0000 UTC m=+1142.702131855" observedRunningTime="2026-01-22 09:24:58.88373044 +0000 UTC m=+1143.205917562" watchObservedRunningTime="2026-01-22 09:24:58.889640795 +0000 
UTC m=+1143.211827919" Jan 22 09:24:59 crc kubenswrapper[4811]: I0122 09:24:59.156763 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-567fc67579-wl9zs" Jan 22 09:24:59 crc kubenswrapper[4811]: I0122 09:24:59.203487 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74d67684c-wf22p"] Jan 22 09:24:59 crc kubenswrapper[4811]: I0122 09:24:59.203715 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74d67684c-wf22p" podUID="4562dbc0-d12e-4a25-ac30-fefa38448372" containerName="dnsmasq-dns" containerID="cri-o://f68389b5c8894c6aa681e615ff285d760729768f3624828feb9b4b5d28a0fce4" gracePeriod=10 Jan 22 09:24:59 crc kubenswrapper[4811]: I0122 09:24:59.652139 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74d67684c-wf22p" Jan 22 09:24:59 crc kubenswrapper[4811]: I0122 09:24:59.833762 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4562dbc0-d12e-4a25-ac30-fefa38448372-ovsdbserver-nb\") pod \"4562dbc0-d12e-4a25-ac30-fefa38448372\" (UID: \"4562dbc0-d12e-4a25-ac30-fefa38448372\") " Jan 22 09:24:59 crc kubenswrapper[4811]: I0122 09:24:59.834008 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4562dbc0-d12e-4a25-ac30-fefa38448372-ovsdbserver-sb\") pod \"4562dbc0-d12e-4a25-ac30-fefa38448372\" (UID: \"4562dbc0-d12e-4a25-ac30-fefa38448372\") " Jan 22 09:24:59 crc kubenswrapper[4811]: I0122 09:24:59.834035 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4562dbc0-d12e-4a25-ac30-fefa38448372-openstack-edpm-ipam\") pod \"4562dbc0-d12e-4a25-ac30-fefa38448372\" (UID: \"4562dbc0-d12e-4a25-ac30-fefa38448372\") " Jan 22 09:24:59 crc kubenswrapper[4811]: I0122 09:24:59.834068 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4562dbc0-d12e-4a25-ac30-fefa38448372-dns-svc\") pod \"4562dbc0-d12e-4a25-ac30-fefa38448372\" (UID: \"4562dbc0-d12e-4a25-ac30-fefa38448372\") " Jan 22 09:24:59 crc kubenswrapper[4811]: I0122 09:24:59.834177 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bq8lq\" (UniqueName: \"kubernetes.io/projected/4562dbc0-d12e-4a25-ac30-fefa38448372-kube-api-access-bq8lq\") pod \"4562dbc0-d12e-4a25-ac30-fefa38448372\" (UID: \"4562dbc0-d12e-4a25-ac30-fefa38448372\") " Jan 22 09:24:59 crc kubenswrapper[4811]: I0122 09:24:59.834271 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4562dbc0-d12e-4a25-ac30-fefa38448372-config\") pod \"4562dbc0-d12e-4a25-ac30-fefa38448372\" (UID: \"4562dbc0-d12e-4a25-ac30-fefa38448372\") " Jan 22 09:24:59 crc kubenswrapper[4811]: I0122 09:24:59.851190 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4562dbc0-d12e-4a25-ac30-fefa38448372-kube-api-access-bq8lq" (OuterVolumeSpecName: "kube-api-access-bq8lq") pod "4562dbc0-d12e-4a25-ac30-fefa38448372" (UID: "4562dbc0-d12e-4a25-ac30-fefa38448372"). InnerVolumeSpecName "kube-api-access-bq8lq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:24:59 crc kubenswrapper[4811]: I0122 09:24:59.884369 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4562dbc0-d12e-4a25-ac30-fefa38448372-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "4562dbc0-d12e-4a25-ac30-fefa38448372" (UID: "4562dbc0-d12e-4a25-ac30-fefa38448372"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:24:59 crc kubenswrapper[4811]: I0122 09:24:59.894714 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4562dbc0-d12e-4a25-ac30-fefa38448372-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4562dbc0-d12e-4a25-ac30-fefa38448372" (UID: "4562dbc0-d12e-4a25-ac30-fefa38448372"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:24:59 crc kubenswrapper[4811]: I0122 09:24:59.897882 4811 generic.go:334] "Generic (PLEG): container finished" podID="4562dbc0-d12e-4a25-ac30-fefa38448372" containerID="f68389b5c8894c6aa681e615ff285d760729768f3624828feb9b4b5d28a0fce4" exitCode=0 Jan 22 09:24:59 crc kubenswrapper[4811]: I0122 09:24:59.897939 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74d67684c-wf22p" Jan 22 09:24:59 crc kubenswrapper[4811]: I0122 09:24:59.897978 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74d67684c-wf22p" event={"ID":"4562dbc0-d12e-4a25-ac30-fefa38448372","Type":"ContainerDied","Data":"f68389b5c8894c6aa681e615ff285d760729768f3624828feb9b4b5d28a0fce4"} Jan 22 09:24:59 crc kubenswrapper[4811]: I0122 09:24:59.898006 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74d67684c-wf22p" event={"ID":"4562dbc0-d12e-4a25-ac30-fefa38448372","Type":"ContainerDied","Data":"4ce4083581d8608ed5f4102e6b1b5e080c76274b4b6e413677ca43b1ebf777a1"} Jan 22 09:24:59 crc kubenswrapper[4811]: I0122 09:24:59.898023 4811 scope.go:117] "RemoveContainer" containerID="f68389b5c8894c6aa681e615ff285d760729768f3624828feb9b4b5d28a0fce4" Jan 22 09:24:59 crc kubenswrapper[4811]: I0122 09:24:59.905125 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4562dbc0-d12e-4a25-ac30-fefa38448372-config" (OuterVolumeSpecName: "config") pod "4562dbc0-d12e-4a25-ac30-fefa38448372" (UID: "4562dbc0-d12e-4a25-ac30-fefa38448372"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:24:59 crc kubenswrapper[4811]: I0122 09:24:59.911984 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4562dbc0-d12e-4a25-ac30-fefa38448372-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4562dbc0-d12e-4a25-ac30-fefa38448372" (UID: "4562dbc0-d12e-4a25-ac30-fefa38448372"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:24:59 crc kubenswrapper[4811]: I0122 09:24:59.926027 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4562dbc0-d12e-4a25-ac30-fefa38448372-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4562dbc0-d12e-4a25-ac30-fefa38448372" (UID: "4562dbc0-d12e-4a25-ac30-fefa38448372"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:24:59 crc kubenswrapper[4811]: I0122 09:24:59.936089 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4562dbc0-d12e-4a25-ac30-fefa38448372-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:24:59 crc kubenswrapper[4811]: I0122 09:24:59.936244 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4562dbc0-d12e-4a25-ac30-fefa38448372-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 09:24:59 crc kubenswrapper[4811]: I0122 09:24:59.936327 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4562dbc0-d12e-4a25-ac30-fefa38448372-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 09:24:59 crc kubenswrapper[4811]: I0122 09:24:59.936386 4811 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4562dbc0-d12e-4a25-ac30-fefa38448372-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 09:24:59 crc kubenswrapper[4811]: I0122 09:24:59.936438 4811 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4562dbc0-d12e-4a25-ac30-fefa38448372-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 09:24:59 crc kubenswrapper[4811]: I0122 09:24:59.936495 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bq8lq\" (UniqueName: \"kubernetes.io/projected/4562dbc0-d12e-4a25-ac30-fefa38448372-kube-api-access-bq8lq\") on node \"crc\" DevicePath \"\"" Jan 22 09:24:59 crc kubenswrapper[4811]: I0122 09:24:59.963689 4811 scope.go:117] "RemoveContainer" containerID="ab5d5b952d0c5c06edd8d5e01e21bfc07ef3d68ed726c6199d4fa746fee867cf" Jan 22 09:24:59 crc kubenswrapper[4811]: I0122 09:24:59.981366 4811 scope.go:117] "RemoveContainer" containerID="f68389b5c8894c6aa681e615ff285d760729768f3624828feb9b4b5d28a0fce4" Jan 22 09:24:59 crc kubenswrapper[4811]: E0122 09:24:59.981745 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f68389b5c8894c6aa681e615ff285d760729768f3624828feb9b4b5d28a0fce4\": container with ID starting with f68389b5c8894c6aa681e615ff285d760729768f3624828feb9b4b5d28a0fce4 not found: ID does not exist" containerID="f68389b5c8894c6aa681e615ff285d760729768f3624828feb9b4b5d28a0fce4" Jan 22 09:24:59 crc kubenswrapper[4811]: I0122 09:24:59.981778 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f68389b5c8894c6aa681e615ff285d760729768f3624828feb9b4b5d28a0fce4"} err="failed to get container status \"f68389b5c8894c6aa681e615ff285d760729768f3624828feb9b4b5d28a0fce4\": rpc error: code = NotFound desc = could not find container \"f68389b5c8894c6aa681e615ff285d760729768f3624828feb9b4b5d28a0fce4\": container with ID starting with f68389b5c8894c6aa681e615ff285d760729768f3624828feb9b4b5d28a0fce4 not found: ID does not exist" Jan 22 09:24:59 crc kubenswrapper[4811]: I0122 09:24:59.981798 4811 scope.go:117] "RemoveContainer" containerID="ab5d5b952d0c5c06edd8d5e01e21bfc07ef3d68ed726c6199d4fa746fee867cf" Jan 22 09:24:59 crc kubenswrapper[4811]: E0122 09:24:59.982132 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab5d5b952d0c5c06edd8d5e01e21bfc07ef3d68ed726c6199d4fa746fee867cf\": container with ID starting with 
ab5d5b952d0c5c06edd8d5e01e21bfc07ef3d68ed726c6199d4fa746fee867cf not found: ID does not exist" containerID="ab5d5b952d0c5c06edd8d5e01e21bfc07ef3d68ed726c6199d4fa746fee867cf" Jan 22 09:24:59 crc kubenswrapper[4811]: I0122 09:24:59.982167 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab5d5b952d0c5c06edd8d5e01e21bfc07ef3d68ed726c6199d4fa746fee867cf"} err="failed to get container status \"ab5d5b952d0c5c06edd8d5e01e21bfc07ef3d68ed726c6199d4fa746fee867cf\": rpc error: code = NotFound desc = could not find container \"ab5d5b952d0c5c06edd8d5e01e21bfc07ef3d68ed726c6199d4fa746fee867cf\": container with ID starting with ab5d5b952d0c5c06edd8d5e01e21bfc07ef3d68ed726c6199d4fa746fee867cf not found: ID does not exist" Jan 22 09:25:00 crc kubenswrapper[4811]: I0122 09:25:00.214614 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74d67684c-wf22p"] Jan 22 09:25:00 crc kubenswrapper[4811]: I0122 09:25:00.220776 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74d67684c-wf22p"] Jan 22 09:25:02 crc kubenswrapper[4811]: I0122 09:25:02.002494 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4562dbc0-d12e-4a25-ac30-fefa38448372" path="/var/lib/kubelet/pods/4562dbc0-d12e-4a25-ac30-fefa38448372/volumes" Jan 22 09:25:09 crc kubenswrapper[4811]: I0122 09:25:09.970425 4811 generic.go:334] "Generic (PLEG): container finished" podID="c178cc43-e3bb-452e-8ed3-7a4b6a9b9006" containerID="94765f38a5f88220d82b00f33e2f296d153150a8d1f4c3fd98693ec5b266e414" exitCode=0 Jan 22 09:25:09 crc kubenswrapper[4811]: I0122 09:25:09.970508 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fhg2v" event={"ID":"c178cc43-e3bb-452e-8ed3-7a4b6a9b9006","Type":"ContainerDied","Data":"94765f38a5f88220d82b00f33e2f296d153150a8d1f4c3fd98693ec5b266e414"} Jan 22 09:25:09 crc kubenswrapper[4811]: I0122 09:25:09.972759 4811 generic.go:334] "Generic (PLEG): container finished" podID="f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8" containerID="2c34d29977810866db29da86f3b282cc130f556f35ba30c9df40aa7c03f84233" exitCode=0 Jan 22 09:25:09 crc kubenswrapper[4811]: I0122 09:25:09.972793 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8","Type":"ContainerDied","Data":"2c34d29977810866db29da86f3b282cc130f556f35ba30c9df40aa7c03f84233"} Jan 22 09:25:10 crc kubenswrapper[4811]: I0122 09:25:10.982052 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8","Type":"ContainerStarted","Data":"2d3ee347cfcfccb3d28db1152acdc1ff12555fb6fb460dbe88ebd8cdd677bcb9"} Jan 22 09:25:10 crc kubenswrapper[4811]: I0122 09:25:10.982476 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 22 09:25:10 crc kubenswrapper[4811]: I0122 09:25:10.985021 4811 generic.go:334] "Generic (PLEG): container finished" podID="a2ce439e-8652-40cb-9d5d-90913d18bea1" containerID="df31abefc92fec7d012c11b3fa03932723865448339449630d6499d581d7e8e4" exitCode=0 Jan 22 09:25:10 crc kubenswrapper[4811]: I0122 09:25:10.985091 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a2ce439e-8652-40cb-9d5d-90913d18bea1","Type":"ContainerDied","Data":"df31abefc92fec7d012c11b3fa03932723865448339449630d6499d581d7e8e4"} Jan 22 09:25:11 crc 
kubenswrapper[4811]: I0122 09:25:11.044773 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.044756079 podStartE2EDuration="36.044756079s" podCreationTimestamp="2026-01-22 09:24:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:25:11.021170871 +0000 UTC m=+1155.343357993" watchObservedRunningTime="2026-01-22 09:25:11.044756079 +0000 UTC m=+1155.366943202" Jan 22 09:25:11 crc kubenswrapper[4811]: I0122 09:25:11.357178 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fhg2v" Jan 22 09:25:11 crc kubenswrapper[4811]: I0122 09:25:11.410451 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c178cc43-e3bb-452e-8ed3-7a4b6a9b9006-repo-setup-combined-ca-bundle\") pod \"c178cc43-e3bb-452e-8ed3-7a4b6a9b9006\" (UID: \"c178cc43-e3bb-452e-8ed3-7a4b6a9b9006\") " Jan 22 09:25:11 crc kubenswrapper[4811]: I0122 09:25:11.410611 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c178cc43-e3bb-452e-8ed3-7a4b6a9b9006-inventory\") pod \"c178cc43-e3bb-452e-8ed3-7a4b6a9b9006\" (UID: \"c178cc43-e3bb-452e-8ed3-7a4b6a9b9006\") " Jan 22 09:25:11 crc kubenswrapper[4811]: I0122 09:25:11.410704 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6hc4\" (UniqueName: \"kubernetes.io/projected/c178cc43-e3bb-452e-8ed3-7a4b6a9b9006-kube-api-access-b6hc4\") pod \"c178cc43-e3bb-452e-8ed3-7a4b6a9b9006\" (UID: \"c178cc43-e3bb-452e-8ed3-7a4b6a9b9006\") " Jan 22 09:25:11 crc kubenswrapper[4811]: I0122 09:25:11.410878 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c178cc43-e3bb-452e-8ed3-7a4b6a9b9006-ssh-key-openstack-edpm-ipam\") pod \"c178cc43-e3bb-452e-8ed3-7a4b6a9b9006\" (UID: \"c178cc43-e3bb-452e-8ed3-7a4b6a9b9006\") " Jan 22 09:25:11 crc kubenswrapper[4811]: I0122 09:25:11.415302 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c178cc43-e3bb-452e-8ed3-7a4b6a9b9006-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "c178cc43-e3bb-452e-8ed3-7a4b6a9b9006" (UID: "c178cc43-e3bb-452e-8ed3-7a4b6a9b9006"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:25:11 crc kubenswrapper[4811]: I0122 09:25:11.415332 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c178cc43-e3bb-452e-8ed3-7a4b6a9b9006-kube-api-access-b6hc4" (OuterVolumeSpecName: "kube-api-access-b6hc4") pod "c178cc43-e3bb-452e-8ed3-7a4b6a9b9006" (UID: "c178cc43-e3bb-452e-8ed3-7a4b6a9b9006"). InnerVolumeSpecName "kube-api-access-b6hc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:25:11 crc kubenswrapper[4811]: I0122 09:25:11.431961 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c178cc43-e3bb-452e-8ed3-7a4b6a9b9006-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c178cc43-e3bb-452e-8ed3-7a4b6a9b9006" (UID: "c178cc43-e3bb-452e-8ed3-7a4b6a9b9006"). 
InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:25:11 crc kubenswrapper[4811]: I0122 09:25:11.432370 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c178cc43-e3bb-452e-8ed3-7a4b6a9b9006-inventory" (OuterVolumeSpecName: "inventory") pod "c178cc43-e3bb-452e-8ed3-7a4b6a9b9006" (UID: "c178cc43-e3bb-452e-8ed3-7a4b6a9b9006"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:25:11 crc kubenswrapper[4811]: I0122 09:25:11.512831 4811 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c178cc43-e3bb-452e-8ed3-7a4b6a9b9006-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 09:25:11 crc kubenswrapper[4811]: I0122 09:25:11.512860 4811 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c178cc43-e3bb-452e-8ed3-7a4b6a9b9006-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:25:11 crc kubenswrapper[4811]: I0122 09:25:11.512871 4811 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c178cc43-e3bb-452e-8ed3-7a4b6a9b9006-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 09:25:11 crc kubenswrapper[4811]: I0122 09:25:11.512882 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6hc4\" (UniqueName: \"kubernetes.io/projected/c178cc43-e3bb-452e-8ed3-7a4b6a9b9006-kube-api-access-b6hc4\") on node \"crc\" DevicePath \"\"" Jan 22 09:25:12 crc kubenswrapper[4811]: I0122 09:25:12.000492 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fhg2v" Jan 22 09:25:12 crc kubenswrapper[4811]: I0122 09:25:12.003773 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a2ce439e-8652-40cb-9d5d-90913d18bea1","Type":"ContainerStarted","Data":"d09012f5f9b12f5688a36fd72a99c89c4a7fb26fc4fde4224e9c774fae838f64"} Jan 22 09:25:12 crc kubenswrapper[4811]: I0122 09:25:12.007553 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fhg2v" event={"ID":"c178cc43-e3bb-452e-8ed3-7a4b6a9b9006","Type":"ContainerDied","Data":"78a98f92e3762ea198b04a72acada01a4847b85e32853f5c5db8b03146605e34"} Jan 22 09:25:12 crc kubenswrapper[4811]: I0122 09:25:12.007666 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78a98f92e3762ea198b04a72acada01a4847b85e32853f5c5db8b03146605e34" Jan 22 09:25:12 crc kubenswrapper[4811]: I0122 09:25:12.008112 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:25:12 crc kubenswrapper[4811]: I0122 09:25:12.048545 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=35.048533498 podStartE2EDuration="35.048533498s" podCreationTimestamp="2026-01-22 09:24:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:25:12.042677656 +0000 UTC m=+1156.364864779" watchObservedRunningTime="2026-01-22 09:25:12.048533498 +0000 UTC m=+1156.370720612" Jan 22 09:25:12 crc kubenswrapper[4811]: I0122 09:25:12.073878 4811 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rmqsn"] Jan 22 09:25:12 crc kubenswrapper[4811]: E0122 09:25:12.074193 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c178cc43-e3bb-452e-8ed3-7a4b6a9b9006" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 22 09:25:12 crc kubenswrapper[4811]: I0122 09:25:12.074211 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="c178cc43-e3bb-452e-8ed3-7a4b6a9b9006" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 22 09:25:12 crc kubenswrapper[4811]: E0122 09:25:12.074230 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4562dbc0-d12e-4a25-ac30-fefa38448372" containerName="dnsmasq-dns" Jan 22 09:25:12 crc kubenswrapper[4811]: I0122 09:25:12.074237 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="4562dbc0-d12e-4a25-ac30-fefa38448372" containerName="dnsmasq-dns" Jan 22 09:25:12 crc kubenswrapper[4811]: E0122 09:25:12.074249 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4562dbc0-d12e-4a25-ac30-fefa38448372" containerName="init" Jan 22 09:25:12 crc kubenswrapper[4811]: I0122 09:25:12.074254 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="4562dbc0-d12e-4a25-ac30-fefa38448372" containerName="init" Jan 22 09:25:12 crc kubenswrapper[4811]: I0122 09:25:12.074455 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="4562dbc0-d12e-4a25-ac30-fefa38448372" containerName="dnsmasq-dns" Jan 22 09:25:12 crc kubenswrapper[4811]: I0122 09:25:12.074474 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="c178cc43-e3bb-452e-8ed3-7a4b6a9b9006" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 22 09:25:12 crc kubenswrapper[4811]: I0122 09:25:12.074955 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rmqsn" Jan 22 09:25:12 crc kubenswrapper[4811]: I0122 09:25:12.081295 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 09:25:12 crc kubenswrapper[4811]: I0122 09:25:12.084108 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 09:25:12 crc kubenswrapper[4811]: I0122 09:25:12.084685 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rmqsn"] Jan 22 09:25:12 crc kubenswrapper[4811]: I0122 09:25:12.089107 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 09:25:12 crc kubenswrapper[4811]: I0122 09:25:12.089377 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7cbs6" Jan 22 09:25:12 crc kubenswrapper[4811]: I0122 09:25:12.120354 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5234afb-1665-4465-9df5-e9c30fff6820-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rmqsn\" (UID: \"d5234afb-1665-4465-9df5-e9c30fff6820\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rmqsn" Jan 22 09:25:12 crc kubenswrapper[4811]: I0122 09:25:12.120500 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d5234afb-1665-4465-9df5-e9c30fff6820-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rmqsn\" (UID: \"d5234afb-1665-4465-9df5-e9c30fff6820\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rmqsn" Jan 22 09:25:12 crc kubenswrapper[4811]: I0122 09:25:12.120550 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5234afb-1665-4465-9df5-e9c30fff6820-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rmqsn\" (UID: \"d5234afb-1665-4465-9df5-e9c30fff6820\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rmqsn" Jan 22 09:25:12 crc kubenswrapper[4811]: I0122 09:25:12.120642 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lmxq\" (UniqueName: \"kubernetes.io/projected/d5234afb-1665-4465-9df5-e9c30fff6820-kube-api-access-7lmxq\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rmqsn\" (UID: \"d5234afb-1665-4465-9df5-e9c30fff6820\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rmqsn" Jan 22 09:25:12 crc kubenswrapper[4811]: I0122 09:25:12.221962 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5234afb-1665-4465-9df5-e9c30fff6820-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rmqsn\" (UID: \"d5234afb-1665-4465-9df5-e9c30fff6820\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rmqsn" Jan 22 09:25:12 crc kubenswrapper[4811]: I0122 09:25:12.222073 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lmxq\" (UniqueName: 
\"kubernetes.io/projected/d5234afb-1665-4465-9df5-e9c30fff6820-kube-api-access-7lmxq\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rmqsn\" (UID: \"d5234afb-1665-4465-9df5-e9c30fff6820\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rmqsn" Jan 22 09:25:12 crc kubenswrapper[4811]: I0122 09:25:12.222160 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5234afb-1665-4465-9df5-e9c30fff6820-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rmqsn\" (UID: \"d5234afb-1665-4465-9df5-e9c30fff6820\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rmqsn" Jan 22 09:25:12 crc kubenswrapper[4811]: I0122 09:25:12.222255 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d5234afb-1665-4465-9df5-e9c30fff6820-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rmqsn\" (UID: \"d5234afb-1665-4465-9df5-e9c30fff6820\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rmqsn" Jan 22 09:25:12 crc kubenswrapper[4811]: I0122 09:25:12.226250 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5234afb-1665-4465-9df5-e9c30fff6820-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rmqsn\" (UID: \"d5234afb-1665-4465-9df5-e9c30fff6820\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rmqsn" Jan 22 09:25:12 crc kubenswrapper[4811]: I0122 09:25:12.227162 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5234afb-1665-4465-9df5-e9c30fff6820-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rmqsn\" (UID: \"d5234afb-1665-4465-9df5-e9c30fff6820\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rmqsn" Jan 22 09:25:12 crc kubenswrapper[4811]: I0122 09:25:12.234059 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d5234afb-1665-4465-9df5-e9c30fff6820-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rmqsn\" (UID: \"d5234afb-1665-4465-9df5-e9c30fff6820\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rmqsn" Jan 22 09:25:12 crc kubenswrapper[4811]: I0122 09:25:12.236687 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lmxq\" (UniqueName: \"kubernetes.io/projected/d5234afb-1665-4465-9df5-e9c30fff6820-kube-api-access-7lmxq\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rmqsn\" (UID: \"d5234afb-1665-4465-9df5-e9c30fff6820\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rmqsn" Jan 22 09:25:12 crc kubenswrapper[4811]: I0122 09:25:12.390886 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rmqsn" Jan 22 09:25:12 crc kubenswrapper[4811]: I0122 09:25:12.847139 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rmqsn"] Jan 22 09:25:12 crc kubenswrapper[4811]: W0122 09:25:12.848946 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5234afb_1665_4465_9df5_e9c30fff6820.slice/crio-26eede9452c51403d1cfd65a0bbdb439a42e26fd54f0df981a3e35024392335a WatchSource:0}: Error finding container 26eede9452c51403d1cfd65a0bbdb439a42e26fd54f0df981a3e35024392335a: Status 404 returned error can't find the container with id 26eede9452c51403d1cfd65a0bbdb439a42e26fd54f0df981a3e35024392335a Jan 22 09:25:13 crc kubenswrapper[4811]: I0122 09:25:13.010156 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rmqsn" event={"ID":"d5234afb-1665-4465-9df5-e9c30fff6820","Type":"ContainerStarted","Data":"26eede9452c51403d1cfd65a0bbdb439a42e26fd54f0df981a3e35024392335a"} Jan 22 09:25:14 crc kubenswrapper[4811]: I0122 09:25:14.018582 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rmqsn" event={"ID":"d5234afb-1665-4465-9df5-e9c30fff6820","Type":"ContainerStarted","Data":"66d7c54ecbc6718f0a55b15994900a0c8d44dd2eead9d2f185c581dfdaec9485"} Jan 22 09:25:25 crc kubenswrapper[4811]: I0122 09:25:25.963089 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 22 09:25:25 crc kubenswrapper[4811]: I0122 09:25:25.987061 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rmqsn" podStartSLOduration=13.486412135 podStartE2EDuration="13.987044836s" podCreationTimestamp="2026-01-22 09:25:12 +0000 UTC" firstStartedPulling="2026-01-22 09:25:12.860335645 +0000 UTC m=+1157.182522769" lastFinishedPulling="2026-01-22 09:25:13.360968347 +0000 UTC m=+1157.683155470" observedRunningTime="2026-01-22 09:25:14.041399558 +0000 UTC m=+1158.363586682" watchObservedRunningTime="2026-01-22 09:25:25.987044836 +0000 UTC m=+1170.309231959" Jan 22 09:25:28 crc kubenswrapper[4811]: I0122 09:25:28.040458 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 22 09:26:35 crc kubenswrapper[4811]: I0122 09:26:35.501164 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:26:35 crc kubenswrapper[4811]: I0122 09:26:35.501616 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:26:58 crc kubenswrapper[4811]: I0122 09:26:58.423472 4811 scope.go:117] "RemoveContainer" containerID="57dd7a3eed668b10480c157954b8aebb31ce6fa4182520fa00d1f992f9ba1086" Jan 22 09:26:58 crc kubenswrapper[4811]: I0122 09:26:58.447677 4811 scope.go:117] "RemoveContainer" 
containerID="b8b4ae49469256e4e15f97123889ff58959f51a1cb7dafd5a390c822d6f9e387" Jan 22 09:26:58 crc kubenswrapper[4811]: I0122 09:26:58.477067 4811 scope.go:117] "RemoveContainer" containerID="7b01bf0b71a90d7ca81248cb5e6e3c974fce8e8551f1e29e9ada4c211659001d" Jan 22 09:27:05 crc kubenswrapper[4811]: I0122 09:27:05.501424 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:27:05 crc kubenswrapper[4811]: I0122 09:27:05.501998 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:27:35 crc kubenswrapper[4811]: I0122 09:27:35.501973 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:27:35 crc kubenswrapper[4811]: I0122 09:27:35.502519 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:27:35 crc kubenswrapper[4811]: I0122 09:27:35.502574 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" Jan 22 09:27:35 crc kubenswrapper[4811]: I0122 09:27:35.503336 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"299e4eba2c58f82289fee4b97dbc4b550594fde4ef2f24c1751afd92694d35d2"} pod="openshift-machine-config-operator/machine-config-daemon-txvcq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 09:27:35 crc kubenswrapper[4811]: I0122 09:27:35.503385 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" containerID="cri-o://299e4eba2c58f82289fee4b97dbc4b550594fde4ef2f24c1751afd92694d35d2" gracePeriod=600 Jan 22 09:27:36 crc kubenswrapper[4811]: I0122 09:27:36.095586 4811 generic.go:334] "Generic (PLEG): container finished" podID="84068a6b-e189-419b-87f5-f31428f6eafe" containerID="299e4eba2c58f82289fee4b97dbc4b550594fde4ef2f24c1751afd92694d35d2" exitCode=0 Jan 22 09:27:36 crc kubenswrapper[4811]: I0122 09:27:36.095674 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" event={"ID":"84068a6b-e189-419b-87f5-f31428f6eafe","Type":"ContainerDied","Data":"299e4eba2c58f82289fee4b97dbc4b550594fde4ef2f24c1751afd92694d35d2"} Jan 22 09:27:36 crc kubenswrapper[4811]: I0122 09:27:36.096129 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-txvcq" event={"ID":"84068a6b-e189-419b-87f5-f31428f6eafe","Type":"ContainerStarted","Data":"a7045e9e75753498e22be1f49f2ac6b7e1f1858c2287359ab0b186d3e061fbd1"} Jan 22 09:27:36 crc kubenswrapper[4811]: I0122 09:27:36.096319 4811 scope.go:117] "RemoveContainer" containerID="48a246c7a0e2d8e856bc2e774a41e4c4a571a73e6dcfe43b9eda16ad78191748" Jan 22 09:27:58 crc kubenswrapper[4811]: I0122 09:27:58.553213 4811 scope.go:117] "RemoveContainer" containerID="ae3bf1fa80785b6d4df74303472c557a6ac0718284d5eb541b8118e80750e927" Jan 22 09:28:18 crc kubenswrapper[4811]: I0122 09:28:18.462096 4811 generic.go:334] "Generic (PLEG): container finished" podID="d5234afb-1665-4465-9df5-e9c30fff6820" containerID="66d7c54ecbc6718f0a55b15994900a0c8d44dd2eead9d2f185c581dfdaec9485" exitCode=0 Jan 22 09:28:18 crc kubenswrapper[4811]: I0122 09:28:18.462182 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rmqsn" event={"ID":"d5234afb-1665-4465-9df5-e9c30fff6820","Type":"ContainerDied","Data":"66d7c54ecbc6718f0a55b15994900a0c8d44dd2eead9d2f185c581dfdaec9485"} Jan 22 09:28:19 crc kubenswrapper[4811]: I0122 09:28:19.791969 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rmqsn" Jan 22 09:28:19 crc kubenswrapper[4811]: I0122 09:28:19.894050 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lmxq\" (UniqueName: \"kubernetes.io/projected/d5234afb-1665-4465-9df5-e9c30fff6820-kube-api-access-7lmxq\") pod \"d5234afb-1665-4465-9df5-e9c30fff6820\" (UID: \"d5234afb-1665-4465-9df5-e9c30fff6820\") " Jan 22 09:28:19 crc kubenswrapper[4811]: I0122 09:28:19.894152 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5234afb-1665-4465-9df5-e9c30fff6820-bootstrap-combined-ca-bundle\") pod \"d5234afb-1665-4465-9df5-e9c30fff6820\" (UID: \"d5234afb-1665-4465-9df5-e9c30fff6820\") " Jan 22 09:28:19 crc kubenswrapper[4811]: I0122 09:28:19.894182 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d5234afb-1665-4465-9df5-e9c30fff6820-ssh-key-openstack-edpm-ipam\") pod \"d5234afb-1665-4465-9df5-e9c30fff6820\" (UID: \"d5234afb-1665-4465-9df5-e9c30fff6820\") " Jan 22 09:28:19 crc kubenswrapper[4811]: I0122 09:28:19.894230 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5234afb-1665-4465-9df5-e9c30fff6820-inventory\") pod \"d5234afb-1665-4465-9df5-e9c30fff6820\" (UID: \"d5234afb-1665-4465-9df5-e9c30fff6820\") " Jan 22 09:28:19 crc kubenswrapper[4811]: I0122 09:28:19.900160 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5234afb-1665-4465-9df5-e9c30fff6820-kube-api-access-7lmxq" (OuterVolumeSpecName: "kube-api-access-7lmxq") pod "d5234afb-1665-4465-9df5-e9c30fff6820" (UID: "d5234afb-1665-4465-9df5-e9c30fff6820"). InnerVolumeSpecName "kube-api-access-7lmxq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:28:19 crc kubenswrapper[4811]: I0122 09:28:19.900595 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5234afb-1665-4465-9df5-e9c30fff6820-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "d5234afb-1665-4465-9df5-e9c30fff6820" (UID: "d5234afb-1665-4465-9df5-e9c30fff6820"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:28:19 crc kubenswrapper[4811]: I0122 09:28:19.915602 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5234afb-1665-4465-9df5-e9c30fff6820-inventory" (OuterVolumeSpecName: "inventory") pod "d5234afb-1665-4465-9df5-e9c30fff6820" (UID: "d5234afb-1665-4465-9df5-e9c30fff6820"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:28:19 crc kubenswrapper[4811]: I0122 09:28:19.919069 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5234afb-1665-4465-9df5-e9c30fff6820-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d5234afb-1665-4465-9df5-e9c30fff6820" (UID: "d5234afb-1665-4465-9df5-e9c30fff6820"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:28:19 crc kubenswrapper[4811]: I0122 09:28:19.995354 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lmxq\" (UniqueName: \"kubernetes.io/projected/d5234afb-1665-4465-9df5-e9c30fff6820-kube-api-access-7lmxq\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:19 crc kubenswrapper[4811]: I0122 09:28:19.995377 4811 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5234afb-1665-4465-9df5-e9c30fff6820-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:19 crc kubenswrapper[4811]: I0122 09:28:19.995386 4811 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d5234afb-1665-4465-9df5-e9c30fff6820-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:19 crc kubenswrapper[4811]: I0122 09:28:19.995394 4811 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5234afb-1665-4465-9df5-e9c30fff6820-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 09:28:20 crc kubenswrapper[4811]: I0122 09:28:20.477956 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rmqsn" event={"ID":"d5234afb-1665-4465-9df5-e9c30fff6820","Type":"ContainerDied","Data":"26eede9452c51403d1cfd65a0bbdb439a42e26fd54f0df981a3e35024392335a"} Jan 22 09:28:20 crc kubenswrapper[4811]: I0122 09:28:20.478004 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26eede9452c51403d1cfd65a0bbdb439a42e26fd54f0df981a3e35024392335a" Jan 22 09:28:20 crc kubenswrapper[4811]: I0122 09:28:20.478195 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rmqsn" Jan 22 09:28:20 crc kubenswrapper[4811]: I0122 09:28:20.533703 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5s52q"] Jan 22 09:28:20 crc kubenswrapper[4811]: E0122 09:28:20.534044 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5234afb-1665-4465-9df5-e9c30fff6820" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 22 09:28:20 crc kubenswrapper[4811]: I0122 09:28:20.534061 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5234afb-1665-4465-9df5-e9c30fff6820" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 22 09:28:20 crc kubenswrapper[4811]: I0122 09:28:20.534230 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5234afb-1665-4465-9df5-e9c30fff6820" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 22 09:28:20 crc kubenswrapper[4811]: I0122 09:28:20.534751 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5s52q" Jan 22 09:28:20 crc kubenswrapper[4811]: I0122 09:28:20.537714 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7cbs6" Jan 22 09:28:20 crc kubenswrapper[4811]: I0122 09:28:20.537924 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 09:28:20 crc kubenswrapper[4811]: I0122 09:28:20.538367 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 09:28:20 crc kubenswrapper[4811]: I0122 09:28:20.538599 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 09:28:20 crc kubenswrapper[4811]: I0122 09:28:20.542993 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5s52q"] Jan 22 09:28:20 crc kubenswrapper[4811]: I0122 09:28:20.704208 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b652af36-650f-4442-9555-59e25d3414d7-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5s52q\" (UID: \"b652af36-650f-4442-9555-59e25d3414d7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5s52q" Jan 22 09:28:20 crc kubenswrapper[4811]: I0122 09:28:20.704425 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b652af36-650f-4442-9555-59e25d3414d7-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5s52q\" (UID: \"b652af36-650f-4442-9555-59e25d3414d7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5s52q" Jan 22 09:28:20 crc kubenswrapper[4811]: I0122 09:28:20.704592 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbzqj\" (UniqueName: \"kubernetes.io/projected/b652af36-650f-4442-9555-59e25d3414d7-kube-api-access-zbzqj\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5s52q\" (UID: \"b652af36-650f-4442-9555-59e25d3414d7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5s52q" Jan 22 09:28:20 
crc kubenswrapper[4811]: I0122 09:28:20.806591 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b652af36-650f-4442-9555-59e25d3414d7-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5s52q\" (UID: \"b652af36-650f-4442-9555-59e25d3414d7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5s52q" Jan 22 09:28:20 crc kubenswrapper[4811]: I0122 09:28:20.807077 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b652af36-650f-4442-9555-59e25d3414d7-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5s52q\" (UID: \"b652af36-650f-4442-9555-59e25d3414d7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5s52q" Jan 22 09:28:20 crc kubenswrapper[4811]: I0122 09:28:20.807282 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbzqj\" (UniqueName: \"kubernetes.io/projected/b652af36-650f-4442-9555-59e25d3414d7-kube-api-access-zbzqj\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5s52q\" (UID: \"b652af36-650f-4442-9555-59e25d3414d7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5s52q" Jan 22 09:28:20 crc kubenswrapper[4811]: I0122 09:28:20.809963 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b652af36-650f-4442-9555-59e25d3414d7-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5s52q\" (UID: \"b652af36-650f-4442-9555-59e25d3414d7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5s52q" Jan 22 09:28:20 crc kubenswrapper[4811]: I0122 09:28:20.810459 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b652af36-650f-4442-9555-59e25d3414d7-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5s52q\" (UID: \"b652af36-650f-4442-9555-59e25d3414d7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5s52q" Jan 22 09:28:20 crc kubenswrapper[4811]: I0122 09:28:20.820222 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbzqj\" (UniqueName: \"kubernetes.io/projected/b652af36-650f-4442-9555-59e25d3414d7-kube-api-access-zbzqj\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5s52q\" (UID: \"b652af36-650f-4442-9555-59e25d3414d7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5s52q" Jan 22 09:28:20 crc kubenswrapper[4811]: I0122 09:28:20.846964 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5s52q" Jan 22 09:28:21 crc kubenswrapper[4811]: I0122 09:28:21.209668 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5s52q"] Jan 22 09:28:21 crc kubenswrapper[4811]: W0122 09:28:21.211324 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb652af36_650f_4442_9555_59e25d3414d7.slice/crio-df40ec6b4e0738a5e3f85c64ac0667acc59d663820f701bdd42a76d3ac34eb73 WatchSource:0}: Error finding container df40ec6b4e0738a5e3f85c64ac0667acc59d663820f701bdd42a76d3ac34eb73: Status 404 returned error can't find the container with id df40ec6b4e0738a5e3f85c64ac0667acc59d663820f701bdd42a76d3ac34eb73 Jan 22 09:28:21 crc kubenswrapper[4811]: I0122 09:28:21.486901 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5s52q" event={"ID":"b652af36-650f-4442-9555-59e25d3414d7","Type":"ContainerStarted","Data":"df40ec6b4e0738a5e3f85c64ac0667acc59d663820f701bdd42a76d3ac34eb73"} Jan 22 09:28:22 crc kubenswrapper[4811]: I0122 09:28:22.495205 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5s52q" event={"ID":"b652af36-650f-4442-9555-59e25d3414d7","Type":"ContainerStarted","Data":"bff3018cd2ef0956f5d8160be8aa7d20d7f7e15ae8c379208567bc1b3bf0ba06"} Jan 22 09:28:22 crc kubenswrapper[4811]: I0122 09:28:22.529105 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5s52q" podStartSLOduration=1.8794096759999999 podStartE2EDuration="2.529088115s" podCreationTimestamp="2026-01-22 09:28:20 +0000 UTC" firstStartedPulling="2026-01-22 09:28:21.214251832 +0000 UTC m=+1345.536438955" lastFinishedPulling="2026-01-22 09:28:21.863930271 +0000 UTC m=+1346.186117394" observedRunningTime="2026-01-22 09:28:22.523976157 +0000 UTC m=+1346.846163280" watchObservedRunningTime="2026-01-22 09:28:22.529088115 +0000 UTC m=+1346.851275238" Jan 22 09:29:28 crc kubenswrapper[4811]: I0122 09:29:28.944950 4811 generic.go:334] "Generic (PLEG): container finished" podID="b652af36-650f-4442-9555-59e25d3414d7" containerID="bff3018cd2ef0956f5d8160be8aa7d20d7f7e15ae8c379208567bc1b3bf0ba06" exitCode=0 Jan 22 09:29:28 crc kubenswrapper[4811]: I0122 09:29:28.945026 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5s52q" event={"ID":"b652af36-650f-4442-9555-59e25d3414d7","Type":"ContainerDied","Data":"bff3018cd2ef0956f5d8160be8aa7d20d7f7e15ae8c379208567bc1b3bf0ba06"} Jan 22 09:29:30 crc kubenswrapper[4811]: I0122 09:29:30.251335 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5s52q" Jan 22 09:29:30 crc kubenswrapper[4811]: I0122 09:29:30.390932 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b652af36-650f-4442-9555-59e25d3414d7-inventory\") pod \"b652af36-650f-4442-9555-59e25d3414d7\" (UID: \"b652af36-650f-4442-9555-59e25d3414d7\") " Jan 22 09:29:30 crc kubenswrapper[4811]: I0122 09:29:30.391026 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbzqj\" (UniqueName: \"kubernetes.io/projected/b652af36-650f-4442-9555-59e25d3414d7-kube-api-access-zbzqj\") pod \"b652af36-650f-4442-9555-59e25d3414d7\" (UID: \"b652af36-650f-4442-9555-59e25d3414d7\") " Jan 22 09:29:30 crc kubenswrapper[4811]: I0122 09:29:30.391090 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b652af36-650f-4442-9555-59e25d3414d7-ssh-key-openstack-edpm-ipam\") pod \"b652af36-650f-4442-9555-59e25d3414d7\" (UID: \"b652af36-650f-4442-9555-59e25d3414d7\") " Jan 22 09:29:30 crc kubenswrapper[4811]: I0122 09:29:30.398442 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b652af36-650f-4442-9555-59e25d3414d7-kube-api-access-zbzqj" (OuterVolumeSpecName: "kube-api-access-zbzqj") pod "b652af36-650f-4442-9555-59e25d3414d7" (UID: "b652af36-650f-4442-9555-59e25d3414d7"). InnerVolumeSpecName "kube-api-access-zbzqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:29:30 crc kubenswrapper[4811]: I0122 09:29:30.411245 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b652af36-650f-4442-9555-59e25d3414d7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b652af36-650f-4442-9555-59e25d3414d7" (UID: "b652af36-650f-4442-9555-59e25d3414d7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:29:30 crc kubenswrapper[4811]: I0122 09:29:30.416332 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b652af36-650f-4442-9555-59e25d3414d7-inventory" (OuterVolumeSpecName: "inventory") pod "b652af36-650f-4442-9555-59e25d3414d7" (UID: "b652af36-650f-4442-9555-59e25d3414d7"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:29:30 crc kubenswrapper[4811]: I0122 09:29:30.493973 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbzqj\" (UniqueName: \"kubernetes.io/projected/b652af36-650f-4442-9555-59e25d3414d7-kube-api-access-zbzqj\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:30 crc kubenswrapper[4811]: I0122 09:29:30.493995 4811 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b652af36-650f-4442-9555-59e25d3414d7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:30 crc kubenswrapper[4811]: I0122 09:29:30.494005 4811 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b652af36-650f-4442-9555-59e25d3414d7-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:30 crc kubenswrapper[4811]: I0122 09:29:30.958700 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5s52q" event={"ID":"b652af36-650f-4442-9555-59e25d3414d7","Type":"ContainerDied","Data":"df40ec6b4e0738a5e3f85c64ac0667acc59d663820f701bdd42a76d3ac34eb73"} Jan 22 09:29:30 crc kubenswrapper[4811]: I0122 09:29:30.958922 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df40ec6b4e0738a5e3f85c64ac0667acc59d663820f701bdd42a76d3ac34eb73" Jan 22 09:29:30 crc kubenswrapper[4811]: I0122 09:29:30.958745 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5s52q" Jan 22 09:29:31 crc kubenswrapper[4811]: I0122 09:29:31.019680 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fc64z"] Jan 22 09:29:31 crc kubenswrapper[4811]: E0122 09:29:31.020039 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b652af36-650f-4442-9555-59e25d3414d7" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 22 09:29:31 crc kubenswrapper[4811]: I0122 09:29:31.020058 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="b652af36-650f-4442-9555-59e25d3414d7" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 22 09:29:31 crc kubenswrapper[4811]: I0122 09:29:31.020215 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="b652af36-650f-4442-9555-59e25d3414d7" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 22 09:29:31 crc kubenswrapper[4811]: I0122 09:29:31.020746 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fc64z" Jan 22 09:29:31 crc kubenswrapper[4811]: I0122 09:29:31.022330 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 09:29:31 crc kubenswrapper[4811]: I0122 09:29:31.022534 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 09:29:31 crc kubenswrapper[4811]: I0122 09:29:31.022670 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7cbs6" Jan 22 09:29:31 crc kubenswrapper[4811]: I0122 09:29:31.022795 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 09:29:31 crc kubenswrapper[4811]: I0122 09:29:31.030580 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fc64z"] Jan 22 09:29:31 crc kubenswrapper[4811]: I0122 09:29:31.204618 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9afcfc8c-7328-400a-9e14-4b5e8500ffde-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fc64z\" (UID: \"9afcfc8c-7328-400a-9e14-4b5e8500ffde\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fc64z" Jan 22 09:29:31 crc kubenswrapper[4811]: I0122 09:29:31.204781 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9afcfc8c-7328-400a-9e14-4b5e8500ffde-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fc64z\" (UID: \"9afcfc8c-7328-400a-9e14-4b5e8500ffde\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fc64z" Jan 22 09:29:31 crc kubenswrapper[4811]: I0122 09:29:31.204848 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkh82\" (UniqueName: \"kubernetes.io/projected/9afcfc8c-7328-400a-9e14-4b5e8500ffde-kube-api-access-lkh82\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fc64z\" (UID: \"9afcfc8c-7328-400a-9e14-4b5e8500ffde\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fc64z" Jan 22 09:29:31 crc kubenswrapper[4811]: I0122 09:29:31.306686 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9afcfc8c-7328-400a-9e14-4b5e8500ffde-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fc64z\" (UID: \"9afcfc8c-7328-400a-9e14-4b5e8500ffde\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fc64z" Jan 22 09:29:31 crc kubenswrapper[4811]: I0122 09:29:31.306771 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkh82\" (UniqueName: \"kubernetes.io/projected/9afcfc8c-7328-400a-9e14-4b5e8500ffde-kube-api-access-lkh82\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fc64z\" (UID: \"9afcfc8c-7328-400a-9e14-4b5e8500ffde\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fc64z" Jan 22 09:29:31 crc kubenswrapper[4811]: I0122 09:29:31.306848 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/9afcfc8c-7328-400a-9e14-4b5e8500ffde-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fc64z\" (UID: \"9afcfc8c-7328-400a-9e14-4b5e8500ffde\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fc64z" Jan 22 09:29:31 crc kubenswrapper[4811]: I0122 09:29:31.310749 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9afcfc8c-7328-400a-9e14-4b5e8500ffde-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fc64z\" (UID: \"9afcfc8c-7328-400a-9e14-4b5e8500ffde\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fc64z" Jan 22 09:29:31 crc kubenswrapper[4811]: I0122 09:29:31.311701 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9afcfc8c-7328-400a-9e14-4b5e8500ffde-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fc64z\" (UID: \"9afcfc8c-7328-400a-9e14-4b5e8500ffde\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fc64z" Jan 22 09:29:31 crc kubenswrapper[4811]: I0122 09:29:31.319581 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkh82\" (UniqueName: \"kubernetes.io/projected/9afcfc8c-7328-400a-9e14-4b5e8500ffde-kube-api-access-lkh82\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fc64z\" (UID: \"9afcfc8c-7328-400a-9e14-4b5e8500ffde\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fc64z" Jan 22 09:29:31 crc kubenswrapper[4811]: I0122 09:29:31.334584 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fc64z" Jan 22 09:29:31 crc kubenswrapper[4811]: I0122 09:29:31.750492 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fc64z"] Jan 22 09:29:31 crc kubenswrapper[4811]: I0122 09:29:31.965820 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fc64z" event={"ID":"9afcfc8c-7328-400a-9e14-4b5e8500ffde","Type":"ContainerStarted","Data":"28f214f313c0bed9ed46af5313f5ea2cf654a5d91ae6671c8aa02f5f15ef64d8"} Jan 22 09:29:32 crc kubenswrapper[4811]: I0122 09:29:32.973466 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fc64z" event={"ID":"9afcfc8c-7328-400a-9e14-4b5e8500ffde","Type":"ContainerStarted","Data":"391bf78aca927b06006de4778580b1c6f4ec773790117b2021e5d0a365cf6447"} Jan 22 09:29:35 crc kubenswrapper[4811]: I0122 09:29:35.501274 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:29:35 crc kubenswrapper[4811]: I0122 09:29:35.501663 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:29:37 crc kubenswrapper[4811]: I0122 09:29:37.000137 4811 
generic.go:334] "Generic (PLEG): container finished" podID="9afcfc8c-7328-400a-9e14-4b5e8500ffde" containerID="391bf78aca927b06006de4778580b1c6f4ec773790117b2021e5d0a365cf6447" exitCode=0 Jan 22 09:29:37 crc kubenswrapper[4811]: I0122 09:29:37.000195 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fc64z" event={"ID":"9afcfc8c-7328-400a-9e14-4b5e8500ffde","Type":"ContainerDied","Data":"391bf78aca927b06006de4778580b1c6f4ec773790117b2021e5d0a365cf6447"} Jan 22 09:29:38 crc kubenswrapper[4811]: I0122 09:29:38.283616 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fc64z" Jan 22 09:29:38 crc kubenswrapper[4811]: I0122 09:29:38.313714 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkh82\" (UniqueName: \"kubernetes.io/projected/9afcfc8c-7328-400a-9e14-4b5e8500ffde-kube-api-access-lkh82\") pod \"9afcfc8c-7328-400a-9e14-4b5e8500ffde\" (UID: \"9afcfc8c-7328-400a-9e14-4b5e8500ffde\") " Jan 22 09:29:38 crc kubenswrapper[4811]: I0122 09:29:38.313800 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9afcfc8c-7328-400a-9e14-4b5e8500ffde-ssh-key-openstack-edpm-ipam\") pod \"9afcfc8c-7328-400a-9e14-4b5e8500ffde\" (UID: \"9afcfc8c-7328-400a-9e14-4b5e8500ffde\") " Jan 22 09:29:38 crc kubenswrapper[4811]: I0122 09:29:38.313845 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9afcfc8c-7328-400a-9e14-4b5e8500ffde-inventory\") pod \"9afcfc8c-7328-400a-9e14-4b5e8500ffde\" (UID: \"9afcfc8c-7328-400a-9e14-4b5e8500ffde\") " Jan 22 09:29:38 crc kubenswrapper[4811]: I0122 09:29:38.319854 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9afcfc8c-7328-400a-9e14-4b5e8500ffde-kube-api-access-lkh82" (OuterVolumeSpecName: "kube-api-access-lkh82") pod "9afcfc8c-7328-400a-9e14-4b5e8500ffde" (UID: "9afcfc8c-7328-400a-9e14-4b5e8500ffde"). InnerVolumeSpecName "kube-api-access-lkh82". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:29:38 crc kubenswrapper[4811]: I0122 09:29:38.333964 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9afcfc8c-7328-400a-9e14-4b5e8500ffde-inventory" (OuterVolumeSpecName: "inventory") pod "9afcfc8c-7328-400a-9e14-4b5e8500ffde" (UID: "9afcfc8c-7328-400a-9e14-4b5e8500ffde"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:29:38 crc kubenswrapper[4811]: I0122 09:29:38.335857 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9afcfc8c-7328-400a-9e14-4b5e8500ffde-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9afcfc8c-7328-400a-9e14-4b5e8500ffde" (UID: "9afcfc8c-7328-400a-9e14-4b5e8500ffde"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:29:38 crc kubenswrapper[4811]: I0122 09:29:38.415538 4811 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9afcfc8c-7328-400a-9e14-4b5e8500ffde-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:38 crc kubenswrapper[4811]: I0122 09:29:38.415562 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkh82\" (UniqueName: \"kubernetes.io/projected/9afcfc8c-7328-400a-9e14-4b5e8500ffde-kube-api-access-lkh82\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:38 crc kubenswrapper[4811]: I0122 09:29:38.415573 4811 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9afcfc8c-7328-400a-9e14-4b5e8500ffde-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 09:29:39 crc kubenswrapper[4811]: I0122 09:29:39.013942 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fc64z" event={"ID":"9afcfc8c-7328-400a-9e14-4b5e8500ffde","Type":"ContainerDied","Data":"28f214f313c0bed9ed46af5313f5ea2cf654a5d91ae6671c8aa02f5f15ef64d8"} Jan 22 09:29:39 crc kubenswrapper[4811]: I0122 09:29:39.014158 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28f214f313c0bed9ed46af5313f5ea2cf654a5d91ae6671c8aa02f5f15ef64d8" Jan 22 09:29:39 crc kubenswrapper[4811]: I0122 09:29:39.014209 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fc64z" Jan 22 09:29:39 crc kubenswrapper[4811]: I0122 09:29:39.063638 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-cb5fm"] Jan 22 09:29:39 crc kubenswrapper[4811]: E0122 09:29:39.063990 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9afcfc8c-7328-400a-9e14-4b5e8500ffde" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 22 09:29:39 crc kubenswrapper[4811]: I0122 09:29:39.064007 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="9afcfc8c-7328-400a-9e14-4b5e8500ffde" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 22 09:29:39 crc kubenswrapper[4811]: I0122 09:29:39.064175 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="9afcfc8c-7328-400a-9e14-4b5e8500ffde" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 22 09:29:39 crc kubenswrapper[4811]: I0122 09:29:39.064668 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cb5fm" Jan 22 09:29:39 crc kubenswrapper[4811]: I0122 09:29:39.066200 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 09:29:39 crc kubenswrapper[4811]: I0122 09:29:39.066334 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 09:29:39 crc kubenswrapper[4811]: I0122 09:29:39.069804 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7cbs6" Jan 22 09:29:39 crc kubenswrapper[4811]: I0122 09:29:39.070016 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 09:29:39 crc kubenswrapper[4811]: I0122 09:29:39.084234 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-cb5fm"] Jan 22 09:29:39 crc kubenswrapper[4811]: I0122 09:29:39.228293 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e08656c-3cb7-4ab3-b7db-2a54fe305f6f-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cb5fm\" (UID: \"3e08656c-3cb7-4ab3-b7db-2a54fe305f6f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cb5fm" Jan 22 09:29:39 crc kubenswrapper[4811]: I0122 09:29:39.228926 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3e08656c-3cb7-4ab3-b7db-2a54fe305f6f-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cb5fm\" (UID: \"3e08656c-3cb7-4ab3-b7db-2a54fe305f6f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cb5fm" Jan 22 09:29:39 crc kubenswrapper[4811]: I0122 09:29:39.228981 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbr72\" (UniqueName: \"kubernetes.io/projected/3e08656c-3cb7-4ab3-b7db-2a54fe305f6f-kube-api-access-sbr72\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cb5fm\" (UID: \"3e08656c-3cb7-4ab3-b7db-2a54fe305f6f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cb5fm" Jan 22 09:29:39 crc kubenswrapper[4811]: I0122 09:29:39.330909 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e08656c-3cb7-4ab3-b7db-2a54fe305f6f-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cb5fm\" (UID: \"3e08656c-3cb7-4ab3-b7db-2a54fe305f6f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cb5fm" Jan 22 09:29:39 crc kubenswrapper[4811]: I0122 09:29:39.330979 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3e08656c-3cb7-4ab3-b7db-2a54fe305f6f-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cb5fm\" (UID: \"3e08656c-3cb7-4ab3-b7db-2a54fe305f6f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cb5fm" Jan 22 09:29:39 crc kubenswrapper[4811]: I0122 09:29:39.331014 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbr72\" (UniqueName: \"kubernetes.io/projected/3e08656c-3cb7-4ab3-b7db-2a54fe305f6f-kube-api-access-sbr72\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-cb5fm\" (UID: \"3e08656c-3cb7-4ab3-b7db-2a54fe305f6f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cb5fm" Jan 22 09:29:39 crc kubenswrapper[4811]: I0122 09:29:39.334573 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3e08656c-3cb7-4ab3-b7db-2a54fe305f6f-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cb5fm\" (UID: \"3e08656c-3cb7-4ab3-b7db-2a54fe305f6f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cb5fm" Jan 22 09:29:39 crc kubenswrapper[4811]: I0122 09:29:39.335041 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e08656c-3cb7-4ab3-b7db-2a54fe305f6f-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cb5fm\" (UID: \"3e08656c-3cb7-4ab3-b7db-2a54fe305f6f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cb5fm" Jan 22 09:29:39 crc kubenswrapper[4811]: I0122 09:29:39.345695 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbr72\" (UniqueName: \"kubernetes.io/projected/3e08656c-3cb7-4ab3-b7db-2a54fe305f6f-kube-api-access-sbr72\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cb5fm\" (UID: \"3e08656c-3cb7-4ab3-b7db-2a54fe305f6f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cb5fm" Jan 22 09:29:39 crc kubenswrapper[4811]: I0122 09:29:39.380640 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cb5fm" Jan 22 09:29:39 crc kubenswrapper[4811]: I0122 09:29:39.798930 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-cb5fm"] Jan 22 09:29:40 crc kubenswrapper[4811]: I0122 09:29:40.020269 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cb5fm" event={"ID":"3e08656c-3cb7-4ab3-b7db-2a54fe305f6f","Type":"ContainerStarted","Data":"d563728dd5152c47ca66e197da8651bb4f2e7c85b82128e0f7bd787b704b5923"} Jan 22 09:29:41 crc kubenswrapper[4811]: I0122 09:29:41.026247 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cb5fm" event={"ID":"3e08656c-3cb7-4ab3-b7db-2a54fe305f6f","Type":"ContainerStarted","Data":"07272b41ecad0370dba50d248542820556fbea0683b7f363b6948fae55987a0d"} Jan 22 09:29:41 crc kubenswrapper[4811]: I0122 09:29:41.048762 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cb5fm" podStartSLOduration=1.514333733 podStartE2EDuration="2.048735226s" podCreationTimestamp="2026-01-22 09:29:39 +0000 UTC" firstStartedPulling="2026-01-22 09:29:39.799822251 +0000 UTC m=+1424.122009374" lastFinishedPulling="2026-01-22 09:29:40.334223744 +0000 UTC m=+1424.656410867" observedRunningTime="2026-01-22 09:29:41.045040047 +0000 UTC m=+1425.367227170" watchObservedRunningTime="2026-01-22 09:29:41.048735226 +0000 UTC m=+1425.370922348" Jan 22 09:30:00 crc kubenswrapper[4811]: I0122 09:30:00.133955 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484570-hfr5f"] Jan 22 09:30:00 crc kubenswrapper[4811]: I0122 09:30:00.135252 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484570-hfr5f" Jan 22 09:30:00 crc kubenswrapper[4811]: I0122 09:30:00.137133 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 22 09:30:00 crc kubenswrapper[4811]: I0122 09:30:00.137299 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 22 09:30:00 crc kubenswrapper[4811]: I0122 09:30:00.142794 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484570-hfr5f"] Jan 22 09:30:00 crc kubenswrapper[4811]: I0122 09:30:00.187301 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f56432ca-1c38-4f53-884b-77f93f904cc5-config-volume\") pod \"collect-profiles-29484570-hfr5f\" (UID: \"f56432ca-1c38-4f53-884b-77f93f904cc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484570-hfr5f" Jan 22 09:30:00 crc kubenswrapper[4811]: I0122 09:30:00.187674 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f56432ca-1c38-4f53-884b-77f93f904cc5-secret-volume\") pod \"collect-profiles-29484570-hfr5f\" (UID: \"f56432ca-1c38-4f53-884b-77f93f904cc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484570-hfr5f" Jan 22 09:30:00 crc kubenswrapper[4811]: I0122 09:30:00.187760 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z24cq\" (UniqueName: \"kubernetes.io/projected/f56432ca-1c38-4f53-884b-77f93f904cc5-kube-api-access-z24cq\") pod \"collect-profiles-29484570-hfr5f\" (UID: \"f56432ca-1c38-4f53-884b-77f93f904cc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484570-hfr5f" Jan 22 09:30:00 crc kubenswrapper[4811]: I0122 09:30:00.288922 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z24cq\" (UniqueName: \"kubernetes.io/projected/f56432ca-1c38-4f53-884b-77f93f904cc5-kube-api-access-z24cq\") pod \"collect-profiles-29484570-hfr5f\" (UID: \"f56432ca-1c38-4f53-884b-77f93f904cc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484570-hfr5f" Jan 22 09:30:00 crc kubenswrapper[4811]: I0122 09:30:00.288968 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f56432ca-1c38-4f53-884b-77f93f904cc5-config-volume\") pod \"collect-profiles-29484570-hfr5f\" (UID: \"f56432ca-1c38-4f53-884b-77f93f904cc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484570-hfr5f" Jan 22 09:30:00 crc kubenswrapper[4811]: I0122 09:30:00.289055 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f56432ca-1c38-4f53-884b-77f93f904cc5-secret-volume\") pod \"collect-profiles-29484570-hfr5f\" (UID: \"f56432ca-1c38-4f53-884b-77f93f904cc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484570-hfr5f" Jan 22 09:30:00 crc kubenswrapper[4811]: I0122 09:30:00.289927 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f56432ca-1c38-4f53-884b-77f93f904cc5-config-volume\") pod 
\"collect-profiles-29484570-hfr5f\" (UID: \"f56432ca-1c38-4f53-884b-77f93f904cc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484570-hfr5f" Jan 22 09:30:00 crc kubenswrapper[4811]: I0122 09:30:00.293254 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f56432ca-1c38-4f53-884b-77f93f904cc5-secret-volume\") pod \"collect-profiles-29484570-hfr5f\" (UID: \"f56432ca-1c38-4f53-884b-77f93f904cc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484570-hfr5f" Jan 22 09:30:00 crc kubenswrapper[4811]: I0122 09:30:00.303266 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z24cq\" (UniqueName: \"kubernetes.io/projected/f56432ca-1c38-4f53-884b-77f93f904cc5-kube-api-access-z24cq\") pod \"collect-profiles-29484570-hfr5f\" (UID: \"f56432ca-1c38-4f53-884b-77f93f904cc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484570-hfr5f" Jan 22 09:30:00 crc kubenswrapper[4811]: I0122 09:30:00.460409 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484570-hfr5f" Jan 22 09:30:00 crc kubenswrapper[4811]: I0122 09:30:00.844869 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484570-hfr5f"] Jan 22 09:30:00 crc kubenswrapper[4811]: W0122 09:30:00.845897 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf56432ca_1c38_4f53_884b_77f93f904cc5.slice/crio-23a30d9d81fee36dfaa140c5b671f3bf7ba14b474ba9d655f567ba2243685142 WatchSource:0}: Error finding container 23a30d9d81fee36dfaa140c5b671f3bf7ba14b474ba9d655f567ba2243685142: Status 404 returned error can't find the container with id 23a30d9d81fee36dfaa140c5b671f3bf7ba14b474ba9d655f567ba2243685142 Jan 22 09:30:01 crc kubenswrapper[4811]: I0122 09:30:01.174515 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484570-hfr5f" event={"ID":"f56432ca-1c38-4f53-884b-77f93f904cc5","Type":"ContainerStarted","Data":"90d2ccce569a0ca7e62f8c145184f711438116f9ee4f27afec59b76a25667d2c"} Jan 22 09:30:01 crc kubenswrapper[4811]: I0122 09:30:01.174557 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484570-hfr5f" event={"ID":"f56432ca-1c38-4f53-884b-77f93f904cc5","Type":"ContainerStarted","Data":"23a30d9d81fee36dfaa140c5b671f3bf7ba14b474ba9d655f567ba2243685142"} Jan 22 09:30:01 crc kubenswrapper[4811]: I0122 09:30:01.193251 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29484570-hfr5f" podStartSLOduration=1.193239968 podStartE2EDuration="1.193239968s" podCreationTimestamp="2026-01-22 09:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:30:01.189759594 +0000 UTC m=+1445.511946717" watchObservedRunningTime="2026-01-22 09:30:01.193239968 +0000 UTC m=+1445.515427091" Jan 22 09:30:02 crc kubenswrapper[4811]: I0122 09:30:02.187594 4811 generic.go:334] "Generic (PLEG): container finished" podID="f56432ca-1c38-4f53-884b-77f93f904cc5" containerID="90d2ccce569a0ca7e62f8c145184f711438116f9ee4f27afec59b76a25667d2c" exitCode=0 Jan 22 09:30:02 crc kubenswrapper[4811]: I0122 09:30:02.187664 
4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484570-hfr5f" event={"ID":"f56432ca-1c38-4f53-884b-77f93f904cc5","Type":"ContainerDied","Data":"90d2ccce569a0ca7e62f8c145184f711438116f9ee4f27afec59b76a25667d2c"} Jan 22 09:30:03 crc kubenswrapper[4811]: I0122 09:30:03.427020 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484570-hfr5f" Jan 22 09:30:03 crc kubenswrapper[4811]: I0122 09:30:03.439096 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f56432ca-1c38-4f53-884b-77f93f904cc5-config-volume\") pod \"f56432ca-1c38-4f53-884b-77f93f904cc5\" (UID: \"f56432ca-1c38-4f53-884b-77f93f904cc5\") " Jan 22 09:30:03 crc kubenswrapper[4811]: I0122 09:30:03.439278 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f56432ca-1c38-4f53-884b-77f93f904cc5-secret-volume\") pod \"f56432ca-1c38-4f53-884b-77f93f904cc5\" (UID: \"f56432ca-1c38-4f53-884b-77f93f904cc5\") " Jan 22 09:30:03 crc kubenswrapper[4811]: I0122 09:30:03.439303 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z24cq\" (UniqueName: \"kubernetes.io/projected/f56432ca-1c38-4f53-884b-77f93f904cc5-kube-api-access-z24cq\") pod \"f56432ca-1c38-4f53-884b-77f93f904cc5\" (UID: \"f56432ca-1c38-4f53-884b-77f93f904cc5\") " Jan 22 09:30:03 crc kubenswrapper[4811]: I0122 09:30:03.439701 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f56432ca-1c38-4f53-884b-77f93f904cc5-config-volume" (OuterVolumeSpecName: "config-volume") pod "f56432ca-1c38-4f53-884b-77f93f904cc5" (UID: "f56432ca-1c38-4f53-884b-77f93f904cc5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:30:03 crc kubenswrapper[4811]: I0122 09:30:03.440054 4811 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f56432ca-1c38-4f53-884b-77f93f904cc5-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 09:30:03 crc kubenswrapper[4811]: I0122 09:30:03.451982 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f56432ca-1c38-4f53-884b-77f93f904cc5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f56432ca-1c38-4f53-884b-77f93f904cc5" (UID: "f56432ca-1c38-4f53-884b-77f93f904cc5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:30:03 crc kubenswrapper[4811]: I0122 09:30:03.453738 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f56432ca-1c38-4f53-884b-77f93f904cc5-kube-api-access-z24cq" (OuterVolumeSpecName: "kube-api-access-z24cq") pod "f56432ca-1c38-4f53-884b-77f93f904cc5" (UID: "f56432ca-1c38-4f53-884b-77f93f904cc5"). InnerVolumeSpecName "kube-api-access-z24cq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:30:03 crc kubenswrapper[4811]: I0122 09:30:03.541141 4811 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f56432ca-1c38-4f53-884b-77f93f904cc5-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 09:30:03 crc kubenswrapper[4811]: I0122 09:30:03.541167 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z24cq\" (UniqueName: \"kubernetes.io/projected/f56432ca-1c38-4f53-884b-77f93f904cc5-kube-api-access-z24cq\") on node \"crc\" DevicePath \"\"" Jan 22 09:30:04 crc kubenswrapper[4811]: I0122 09:30:04.201782 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484570-hfr5f" event={"ID":"f56432ca-1c38-4f53-884b-77f93f904cc5","Type":"ContainerDied","Data":"23a30d9d81fee36dfaa140c5b671f3bf7ba14b474ba9d655f567ba2243685142"} Jan 22 09:30:04 crc kubenswrapper[4811]: I0122 09:30:04.201826 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23a30d9d81fee36dfaa140c5b671f3bf7ba14b474ba9d655f567ba2243685142" Jan 22 09:30:04 crc kubenswrapper[4811]: I0122 09:30:04.201994 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484570-hfr5f" Jan 22 09:30:05 crc kubenswrapper[4811]: I0122 09:30:05.501910 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:30:05 crc kubenswrapper[4811]: I0122 09:30:05.502737 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:30:10 crc kubenswrapper[4811]: I0122 09:30:10.239332 4811 generic.go:334] "Generic (PLEG): container finished" podID="3e08656c-3cb7-4ab3-b7db-2a54fe305f6f" containerID="07272b41ecad0370dba50d248542820556fbea0683b7f363b6948fae55987a0d" exitCode=0 Jan 22 09:30:10 crc kubenswrapper[4811]: I0122 09:30:10.239411 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cb5fm" event={"ID":"3e08656c-3cb7-4ab3-b7db-2a54fe305f6f","Type":"ContainerDied","Data":"07272b41ecad0370dba50d248542820556fbea0683b7f363b6948fae55987a0d"} Jan 22 09:30:11 crc kubenswrapper[4811]: I0122 09:30:11.601129 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cb5fm" Jan 22 09:30:11 crc kubenswrapper[4811]: I0122 09:30:11.692566 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e08656c-3cb7-4ab3-b7db-2a54fe305f6f-inventory\") pod \"3e08656c-3cb7-4ab3-b7db-2a54fe305f6f\" (UID: \"3e08656c-3cb7-4ab3-b7db-2a54fe305f6f\") " Jan 22 09:30:11 crc kubenswrapper[4811]: I0122 09:30:11.692860 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3e08656c-3cb7-4ab3-b7db-2a54fe305f6f-ssh-key-openstack-edpm-ipam\") pod \"3e08656c-3cb7-4ab3-b7db-2a54fe305f6f\" (UID: \"3e08656c-3cb7-4ab3-b7db-2a54fe305f6f\") " Jan 22 09:30:11 crc kubenswrapper[4811]: I0122 09:30:11.692889 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbr72\" (UniqueName: \"kubernetes.io/projected/3e08656c-3cb7-4ab3-b7db-2a54fe305f6f-kube-api-access-sbr72\") pod \"3e08656c-3cb7-4ab3-b7db-2a54fe305f6f\" (UID: \"3e08656c-3cb7-4ab3-b7db-2a54fe305f6f\") " Jan 22 09:30:11 crc kubenswrapper[4811]: I0122 09:30:11.697177 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e08656c-3cb7-4ab3-b7db-2a54fe305f6f-kube-api-access-sbr72" (OuterVolumeSpecName: "kube-api-access-sbr72") pod "3e08656c-3cb7-4ab3-b7db-2a54fe305f6f" (UID: "3e08656c-3cb7-4ab3-b7db-2a54fe305f6f"). InnerVolumeSpecName "kube-api-access-sbr72". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:30:11 crc kubenswrapper[4811]: I0122 09:30:11.713743 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e08656c-3cb7-4ab3-b7db-2a54fe305f6f-inventory" (OuterVolumeSpecName: "inventory") pod "3e08656c-3cb7-4ab3-b7db-2a54fe305f6f" (UID: "3e08656c-3cb7-4ab3-b7db-2a54fe305f6f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:30:11 crc kubenswrapper[4811]: I0122 09:30:11.713852 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e08656c-3cb7-4ab3-b7db-2a54fe305f6f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3e08656c-3cb7-4ab3-b7db-2a54fe305f6f" (UID: "3e08656c-3cb7-4ab3-b7db-2a54fe305f6f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:30:11 crc kubenswrapper[4811]: I0122 09:30:11.795486 4811 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e08656c-3cb7-4ab3-b7db-2a54fe305f6f-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 09:30:11 crc kubenswrapper[4811]: I0122 09:30:11.795527 4811 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3e08656c-3cb7-4ab3-b7db-2a54fe305f6f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 09:30:11 crc kubenswrapper[4811]: I0122 09:30:11.795538 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbr72\" (UniqueName: \"kubernetes.io/projected/3e08656c-3cb7-4ab3-b7db-2a54fe305f6f-kube-api-access-sbr72\") on node \"crc\" DevicePath \"\"" Jan 22 09:30:12 crc kubenswrapper[4811]: I0122 09:30:12.253615 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cb5fm" event={"ID":"3e08656c-3cb7-4ab3-b7db-2a54fe305f6f","Type":"ContainerDied","Data":"d563728dd5152c47ca66e197da8651bb4f2e7c85b82128e0f7bd787b704b5923"} Jan 22 09:30:12 crc kubenswrapper[4811]: I0122 09:30:12.253663 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d563728dd5152c47ca66e197da8651bb4f2e7c85b82128e0f7bd787b704b5923" Jan 22 09:30:12 crc kubenswrapper[4811]: I0122 09:30:12.253666 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cb5fm" Jan 22 09:30:12 crc kubenswrapper[4811]: I0122 09:30:12.307493 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kln4c"] Jan 22 09:30:12 crc kubenswrapper[4811]: E0122 09:30:12.307812 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e08656c-3cb7-4ab3-b7db-2a54fe305f6f" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 22 09:30:12 crc kubenswrapper[4811]: I0122 09:30:12.307828 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e08656c-3cb7-4ab3-b7db-2a54fe305f6f" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 22 09:30:12 crc kubenswrapper[4811]: E0122 09:30:12.307852 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f56432ca-1c38-4f53-884b-77f93f904cc5" containerName="collect-profiles" Jan 22 09:30:12 crc kubenswrapper[4811]: I0122 09:30:12.307859 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f56432ca-1c38-4f53-884b-77f93f904cc5" containerName="collect-profiles" Jan 22 09:30:12 crc kubenswrapper[4811]: I0122 09:30:12.308018 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="f56432ca-1c38-4f53-884b-77f93f904cc5" containerName="collect-profiles" Jan 22 09:30:12 crc kubenswrapper[4811]: I0122 09:30:12.308041 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e08656c-3cb7-4ab3-b7db-2a54fe305f6f" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 22 09:30:12 crc kubenswrapper[4811]: I0122 09:30:12.308493 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kln4c" Jan 22 09:30:12 crc kubenswrapper[4811]: I0122 09:30:12.310744 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 09:30:12 crc kubenswrapper[4811]: I0122 09:30:12.310793 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 09:30:12 crc kubenswrapper[4811]: I0122 09:30:12.310854 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7cbs6" Jan 22 09:30:12 crc kubenswrapper[4811]: I0122 09:30:12.311924 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 09:30:12 crc kubenswrapper[4811]: I0122 09:30:12.319187 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kln4c"] Jan 22 09:30:12 crc kubenswrapper[4811]: I0122 09:30:12.406265 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52khq\" (UniqueName: \"kubernetes.io/projected/4ca30846-0331-42bc-af30-c7f211657f21-kube-api-access-52khq\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kln4c\" (UID: \"4ca30846-0331-42bc-af30-c7f211657f21\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kln4c" Jan 22 09:30:12 crc kubenswrapper[4811]: I0122 09:30:12.406348 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ca30846-0331-42bc-af30-c7f211657f21-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kln4c\" (UID: \"4ca30846-0331-42bc-af30-c7f211657f21\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kln4c" Jan 22 09:30:12 crc kubenswrapper[4811]: I0122 09:30:12.406504 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ca30846-0331-42bc-af30-c7f211657f21-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kln4c\" (UID: \"4ca30846-0331-42bc-af30-c7f211657f21\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kln4c" Jan 22 09:30:12 crc kubenswrapper[4811]: I0122 09:30:12.508325 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52khq\" (UniqueName: \"kubernetes.io/projected/4ca30846-0331-42bc-af30-c7f211657f21-kube-api-access-52khq\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kln4c\" (UID: \"4ca30846-0331-42bc-af30-c7f211657f21\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kln4c" Jan 22 09:30:12 crc kubenswrapper[4811]: I0122 09:30:12.508390 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ca30846-0331-42bc-af30-c7f211657f21-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kln4c\" (UID: \"4ca30846-0331-42bc-af30-c7f211657f21\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kln4c" Jan 22 09:30:12 crc kubenswrapper[4811]: I0122 09:30:12.508439 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/4ca30846-0331-42bc-af30-c7f211657f21-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kln4c\" (UID: \"4ca30846-0331-42bc-af30-c7f211657f21\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kln4c" Jan 22 09:30:12 crc kubenswrapper[4811]: I0122 09:30:12.511727 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ca30846-0331-42bc-af30-c7f211657f21-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kln4c\" (UID: \"4ca30846-0331-42bc-af30-c7f211657f21\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kln4c" Jan 22 09:30:12 crc kubenswrapper[4811]: I0122 09:30:12.511765 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ca30846-0331-42bc-af30-c7f211657f21-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kln4c\" (UID: \"4ca30846-0331-42bc-af30-c7f211657f21\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kln4c" Jan 22 09:30:12 crc kubenswrapper[4811]: I0122 09:30:12.521965 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52khq\" (UniqueName: \"kubernetes.io/projected/4ca30846-0331-42bc-af30-c7f211657f21-kube-api-access-52khq\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kln4c\" (UID: \"4ca30846-0331-42bc-af30-c7f211657f21\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kln4c" Jan 22 09:30:12 crc kubenswrapper[4811]: I0122 09:30:12.621037 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kln4c" Jan 22 09:30:13 crc kubenswrapper[4811]: I0122 09:30:13.073164 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kln4c"] Jan 22 09:30:13 crc kubenswrapper[4811]: I0122 09:30:13.082715 4811 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 09:30:13 crc kubenswrapper[4811]: I0122 09:30:13.269323 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kln4c" event={"ID":"4ca30846-0331-42bc-af30-c7f211657f21","Type":"ContainerStarted","Data":"0611c618d66ebb56cc5e2973ae92623b152e6615ca318d4226a507e42c10b214"} Jan 22 09:30:14 crc kubenswrapper[4811]: I0122 09:30:14.276945 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kln4c" event={"ID":"4ca30846-0331-42bc-af30-c7f211657f21","Type":"ContainerStarted","Data":"60774f7af279d21f593fcd206a039685e7f60cb6d9b6416fcdb026a066c65e12"} Jan 22 09:30:14 crc kubenswrapper[4811]: I0122 09:30:14.293521 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kln4c" podStartSLOduration=1.767066948 podStartE2EDuration="2.29350809s" podCreationTimestamp="2026-01-22 09:30:12 +0000 UTC" firstStartedPulling="2026-01-22 09:30:13.082462176 +0000 UTC m=+1457.404649299" lastFinishedPulling="2026-01-22 09:30:13.608903327 +0000 UTC m=+1457.931090441" observedRunningTime="2026-01-22 09:30:14.289189495 +0000 UTC m=+1458.611376618" watchObservedRunningTime="2026-01-22 09:30:14.29350809 +0000 UTC m=+1458.615695213" Jan 22 09:30:16 crc kubenswrapper[4811]: I0122 
09:30:16.038678 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-bmklk"] Jan 22 09:30:16 crc kubenswrapper[4811]: I0122 09:30:16.045933 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-9e0d-account-create-update-8vx5t"] Jan 22 09:30:16 crc kubenswrapper[4811]: I0122 09:30:16.053097 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-69hvx"] Jan 22 09:30:16 crc kubenswrapper[4811]: I0122 09:30:16.059749 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-bmklk"] Jan 22 09:30:16 crc kubenswrapper[4811]: I0122 09:30:16.066438 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-9e0d-account-create-update-8vx5t"] Jan 22 09:30:16 crc kubenswrapper[4811]: I0122 09:30:16.072462 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-69hvx"] Jan 22 09:30:17 crc kubenswrapper[4811]: I0122 09:30:17.021956 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-9b2c-account-create-update-sgwxp"] Jan 22 09:30:17 crc kubenswrapper[4811]: I0122 09:30:17.028252 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-9b2c-account-create-update-sgwxp"] Jan 22 09:30:17 crc kubenswrapper[4811]: I0122 09:30:17.296724 4811 generic.go:334] "Generic (PLEG): container finished" podID="4ca30846-0331-42bc-af30-c7f211657f21" containerID="60774f7af279d21f593fcd206a039685e7f60cb6d9b6416fcdb026a066c65e12" exitCode=0 Jan 22 09:30:17 crc kubenswrapper[4811]: I0122 09:30:17.296761 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kln4c" event={"ID":"4ca30846-0331-42bc-af30-c7f211657f21","Type":"ContainerDied","Data":"60774f7af279d21f593fcd206a039685e7f60cb6d9b6416fcdb026a066c65e12"} Jan 22 09:30:17 crc kubenswrapper[4811]: I0122 09:30:17.999114 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00f9806b-10cf-424e-bf6e-9e2e3e66833a" path="/var/lib/kubelet/pods/00f9806b-10cf-424e-bf6e-9e2e3e66833a/volumes" Jan 22 09:30:17 crc kubenswrapper[4811]: I0122 09:30:17.999689 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d4fef27-55fc-4467-91f6-a89ecbae6198" path="/var/lib/kubelet/pods/3d4fef27-55fc-4467-91f6-a89ecbae6198/volumes" Jan 22 09:30:18 crc kubenswrapper[4811]: I0122 09:30:18.000194 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b58a74ba-a371-4f9b-a3e2-ca7a946895af" path="/var/lib/kubelet/pods/b58a74ba-a371-4f9b-a3e2-ca7a946895af/volumes" Jan 22 09:30:18 crc kubenswrapper[4811]: I0122 09:30:18.000666 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5af3fb6-92bb-4f1d-9b7c-d3e4dc44ab35" path="/var/lib/kubelet/pods/d5af3fb6-92bb-4f1d-9b7c-d3e4dc44ab35/volumes" Jan 22 09:30:18 crc kubenswrapper[4811]: I0122 09:30:18.595615 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kln4c" Jan 22 09:30:18 crc kubenswrapper[4811]: I0122 09:30:18.711267 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ca30846-0331-42bc-af30-c7f211657f21-ssh-key-openstack-edpm-ipam\") pod \"4ca30846-0331-42bc-af30-c7f211657f21\" (UID: \"4ca30846-0331-42bc-af30-c7f211657f21\") " Jan 22 09:30:18 crc kubenswrapper[4811]: I0122 09:30:18.711329 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ca30846-0331-42bc-af30-c7f211657f21-inventory\") pod \"4ca30846-0331-42bc-af30-c7f211657f21\" (UID: \"4ca30846-0331-42bc-af30-c7f211657f21\") " Jan 22 09:30:18 crc kubenswrapper[4811]: I0122 09:30:18.711517 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52khq\" (UniqueName: \"kubernetes.io/projected/4ca30846-0331-42bc-af30-c7f211657f21-kube-api-access-52khq\") pod \"4ca30846-0331-42bc-af30-c7f211657f21\" (UID: \"4ca30846-0331-42bc-af30-c7f211657f21\") " Jan 22 09:30:18 crc kubenswrapper[4811]: I0122 09:30:18.716193 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ca30846-0331-42bc-af30-c7f211657f21-kube-api-access-52khq" (OuterVolumeSpecName: "kube-api-access-52khq") pod "4ca30846-0331-42bc-af30-c7f211657f21" (UID: "4ca30846-0331-42bc-af30-c7f211657f21"). InnerVolumeSpecName "kube-api-access-52khq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:30:18 crc kubenswrapper[4811]: I0122 09:30:18.731080 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ca30846-0331-42bc-af30-c7f211657f21-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4ca30846-0331-42bc-af30-c7f211657f21" (UID: "4ca30846-0331-42bc-af30-c7f211657f21"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:30:18 crc kubenswrapper[4811]: I0122 09:30:18.731352 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ca30846-0331-42bc-af30-c7f211657f21-inventory" (OuterVolumeSpecName: "inventory") pod "4ca30846-0331-42bc-af30-c7f211657f21" (UID: "4ca30846-0331-42bc-af30-c7f211657f21"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:30:18 crc kubenswrapper[4811]: I0122 09:30:18.813655 4811 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ca30846-0331-42bc-af30-c7f211657f21-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 09:30:18 crc kubenswrapper[4811]: I0122 09:30:18.813683 4811 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ca30846-0331-42bc-af30-c7f211657f21-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 09:30:18 crc kubenswrapper[4811]: I0122 09:30:18.813695 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52khq\" (UniqueName: \"kubernetes.io/projected/4ca30846-0331-42bc-af30-c7f211657f21-kube-api-access-52khq\") on node \"crc\" DevicePath \"\"" Jan 22 09:30:19 crc kubenswrapper[4811]: I0122 09:30:19.309824 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kln4c" event={"ID":"4ca30846-0331-42bc-af30-c7f211657f21","Type":"ContainerDied","Data":"0611c618d66ebb56cc5e2973ae92623b152e6615ca318d4226a507e42c10b214"} Jan 22 09:30:19 crc kubenswrapper[4811]: I0122 09:30:19.310017 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0611c618d66ebb56cc5e2973ae92623b152e6615ca318d4226a507e42c10b214" Jan 22 09:30:19 crc kubenswrapper[4811]: I0122 09:30:19.309881 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kln4c" Jan 22 09:30:19 crc kubenswrapper[4811]: I0122 09:30:19.373072 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k8gsd"] Jan 22 09:30:19 crc kubenswrapper[4811]: E0122 09:30:19.373371 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ca30846-0331-42bc-af30-c7f211657f21" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 22 09:30:19 crc kubenswrapper[4811]: I0122 09:30:19.373389 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ca30846-0331-42bc-af30-c7f211657f21" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 22 09:30:19 crc kubenswrapper[4811]: I0122 09:30:19.373561 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ca30846-0331-42bc-af30-c7f211657f21" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 22 09:30:19 crc kubenswrapper[4811]: I0122 09:30:19.374092 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k8gsd" Jan 22 09:30:19 crc kubenswrapper[4811]: I0122 09:30:19.375904 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 09:30:19 crc kubenswrapper[4811]: I0122 09:30:19.376349 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7cbs6" Jan 22 09:30:19 crc kubenswrapper[4811]: I0122 09:30:19.376490 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 09:30:19 crc kubenswrapper[4811]: I0122 09:30:19.376643 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 09:30:19 crc kubenswrapper[4811]: I0122 09:30:19.387199 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k8gsd"] Jan 22 09:30:19 crc kubenswrapper[4811]: I0122 09:30:19.422416 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11821ee8-72c0-4167-b2fe-f1698439f08c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-k8gsd\" (UID: \"11821ee8-72c0-4167-b2fe-f1698439f08c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k8gsd" Jan 22 09:30:19 crc kubenswrapper[4811]: I0122 09:30:19.422496 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11821ee8-72c0-4167-b2fe-f1698439f08c-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-k8gsd\" (UID: \"11821ee8-72c0-4167-b2fe-f1698439f08c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k8gsd" Jan 22 09:30:19 crc kubenswrapper[4811]: I0122 09:30:19.422676 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlbw6\" (UniqueName: \"kubernetes.io/projected/11821ee8-72c0-4167-b2fe-f1698439f08c-kube-api-access-dlbw6\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-k8gsd\" (UID: \"11821ee8-72c0-4167-b2fe-f1698439f08c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k8gsd" Jan 22 09:30:19 crc kubenswrapper[4811]: I0122 09:30:19.524077 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlbw6\" (UniqueName: \"kubernetes.io/projected/11821ee8-72c0-4167-b2fe-f1698439f08c-kube-api-access-dlbw6\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-k8gsd\" (UID: \"11821ee8-72c0-4167-b2fe-f1698439f08c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k8gsd" Jan 22 09:30:19 crc kubenswrapper[4811]: I0122 09:30:19.524137 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11821ee8-72c0-4167-b2fe-f1698439f08c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-k8gsd\" (UID: \"11821ee8-72c0-4167-b2fe-f1698439f08c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k8gsd" Jan 22 09:30:19 crc kubenswrapper[4811]: I0122 09:30:19.524188 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/11821ee8-72c0-4167-b2fe-f1698439f08c-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-k8gsd\" (UID: \"11821ee8-72c0-4167-b2fe-f1698439f08c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k8gsd" Jan 22 09:30:19 crc kubenswrapper[4811]: I0122 09:30:19.527830 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11821ee8-72c0-4167-b2fe-f1698439f08c-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-k8gsd\" (UID: \"11821ee8-72c0-4167-b2fe-f1698439f08c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k8gsd" Jan 22 09:30:19 crc kubenswrapper[4811]: I0122 09:30:19.528999 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11821ee8-72c0-4167-b2fe-f1698439f08c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-k8gsd\" (UID: \"11821ee8-72c0-4167-b2fe-f1698439f08c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k8gsd" Jan 22 09:30:19 crc kubenswrapper[4811]: I0122 09:30:19.539199 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlbw6\" (UniqueName: \"kubernetes.io/projected/11821ee8-72c0-4167-b2fe-f1698439f08c-kube-api-access-dlbw6\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-k8gsd\" (UID: \"11821ee8-72c0-4167-b2fe-f1698439f08c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k8gsd" Jan 22 09:30:19 crc kubenswrapper[4811]: I0122 09:30:19.686934 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k8gsd" Jan 22 09:30:20 crc kubenswrapper[4811]: I0122 09:30:20.109741 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k8gsd"] Jan 22 09:30:20 crc kubenswrapper[4811]: I0122 09:30:20.317402 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k8gsd" event={"ID":"11821ee8-72c0-4167-b2fe-f1698439f08c","Type":"ContainerStarted","Data":"1fa6b82720c173ea7f65181e02b57ba5c65f44c4411f1e80571411f466b15f5f"} Jan 22 09:30:21 crc kubenswrapper[4811]: I0122 09:30:21.027923 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-lnx59"] Jan 22 09:30:21 crc kubenswrapper[4811]: I0122 09:30:21.034760 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-2e4a-account-create-update-j6gzm"] Jan 22 09:30:21 crc kubenswrapper[4811]: I0122 09:30:21.042219 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-lnx59"] Jan 22 09:30:21 crc kubenswrapper[4811]: I0122 09:30:21.054798 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-2e4a-account-create-update-j6gzm"] Jan 22 09:30:21 crc kubenswrapper[4811]: I0122 09:30:21.325771 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k8gsd" event={"ID":"11821ee8-72c0-4167-b2fe-f1698439f08c","Type":"ContainerStarted","Data":"4dba93963e4469ccb86bebe8aedc506c0151a4344968f24119a271b9240ddafe"} Jan 22 09:30:21 crc kubenswrapper[4811]: I0122 09:30:21.346099 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k8gsd" 
podStartSLOduration=1.8535317660000001 podStartE2EDuration="2.346086523s" podCreationTimestamp="2026-01-22 09:30:19 +0000 UTC" firstStartedPulling="2026-01-22 09:30:20.117703925 +0000 UTC m=+1464.439891048" lastFinishedPulling="2026-01-22 09:30:20.610258682 +0000 UTC m=+1464.932445805" observedRunningTime="2026-01-22 09:30:21.342070459 +0000 UTC m=+1465.664257582" watchObservedRunningTime="2026-01-22 09:30:21.346086523 +0000 UTC m=+1465.668273646" Jan 22 09:30:22 crc kubenswrapper[4811]: I0122 09:30:22.000211 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1240e634-6792-476a-b85b-f4034920913e" path="/var/lib/kubelet/pods/1240e634-6792-476a-b85b-f4034920913e/volumes" Jan 22 09:30:22 crc kubenswrapper[4811]: I0122 09:30:22.000750 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba2c649b-b430-4002-bd27-ad8d65dd2137" path="/var/lib/kubelet/pods/ba2c649b-b430-4002-bd27-ad8d65dd2137/volumes" Jan 22 09:30:35 crc kubenswrapper[4811]: I0122 09:30:35.501143 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:30:35 crc kubenswrapper[4811]: I0122 09:30:35.501493 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:30:35 crc kubenswrapper[4811]: I0122 09:30:35.501530 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" Jan 22 09:30:35 crc kubenswrapper[4811]: I0122 09:30:35.502078 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a7045e9e75753498e22be1f49f2ac6b7e1f1858c2287359ab0b186d3e061fbd1"} pod="openshift-machine-config-operator/machine-config-daemon-txvcq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 09:30:35 crc kubenswrapper[4811]: I0122 09:30:35.502129 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" containerID="cri-o://a7045e9e75753498e22be1f49f2ac6b7e1f1858c2287359ab0b186d3e061fbd1" gracePeriod=600 Jan 22 09:30:35 crc kubenswrapper[4811]: E0122 09:30:35.620125 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:30:36 crc kubenswrapper[4811]: I0122 09:30:36.426442 4811 generic.go:334] "Generic (PLEG): container finished" podID="84068a6b-e189-419b-87f5-f31428f6eafe" containerID="a7045e9e75753498e22be1f49f2ac6b7e1f1858c2287359ab0b186d3e061fbd1" exitCode=0 Jan 22 09:30:36 crc kubenswrapper[4811]: I0122 09:30:36.426482 4811 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" event={"ID":"84068a6b-e189-419b-87f5-f31428f6eafe","Type":"ContainerDied","Data":"a7045e9e75753498e22be1f49f2ac6b7e1f1858c2287359ab0b186d3e061fbd1"} Jan 22 09:30:36 crc kubenswrapper[4811]: I0122 09:30:36.426518 4811 scope.go:117] "RemoveContainer" containerID="299e4eba2c58f82289fee4b97dbc4b550594fde4ef2f24c1751afd92694d35d2" Jan 22 09:30:36 crc kubenswrapper[4811]: I0122 09:30:36.427142 4811 scope.go:117] "RemoveContainer" containerID="a7045e9e75753498e22be1f49f2ac6b7e1f1858c2287359ab0b186d3e061fbd1" Jan 22 09:30:36 crc kubenswrapper[4811]: E0122 09:30:36.427474 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:30:38 crc kubenswrapper[4811]: I0122 09:30:38.027605 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-hk9q6"] Jan 22 09:30:38 crc kubenswrapper[4811]: I0122 09:30:38.035457 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-hk9q6"] Jan 22 09:30:40 crc kubenswrapper[4811]: I0122 09:30:40.000457 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76cb01bc-a79f-48c0-af4b-5afe92f49e25" path="/var/lib/kubelet/pods/76cb01bc-a79f-48c0-af4b-5afe92f49e25/volumes" Jan 22 09:30:42 crc kubenswrapper[4811]: I0122 09:30:42.030744 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-fdrpp"] Jan 22 09:30:42 crc kubenswrapper[4811]: I0122 09:30:42.038367 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-fdrpp"] Jan 22 09:30:43 crc kubenswrapper[4811]: I0122 09:30:43.999098 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5f5df03-a29f-4805-b750-8d360d832019" path="/var/lib/kubelet/pods/f5f5df03-a29f-4805-b750-8d360d832019/volumes" Jan 22 09:30:50 crc kubenswrapper[4811]: I0122 09:30:50.991751 4811 scope.go:117] "RemoveContainer" containerID="a7045e9e75753498e22be1f49f2ac6b7e1f1858c2287359ab0b186d3e061fbd1" Jan 22 09:30:50 crc kubenswrapper[4811]: E0122 09:30:50.992376 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:30:53 crc kubenswrapper[4811]: I0122 09:30:53.022272 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-2s8kx"] Jan 22 09:30:53 crc kubenswrapper[4811]: I0122 09:30:53.028289 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-2s8kx"] Jan 22 09:30:53 crc kubenswrapper[4811]: I0122 09:30:53.999441 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1160427d-3b71-44bc-886d-243991fd40e8" path="/var/lib/kubelet/pods/1160427d-3b71-44bc-886d-243991fd40e8/volumes" Jan 22 09:30:54 crc kubenswrapper[4811]: I0122 
09:30:54.025245 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-xj5ql"] Jan 22 09:30:54 crc kubenswrapper[4811]: I0122 09:30:54.031732 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-xj5ql"] Jan 22 09:30:55 crc kubenswrapper[4811]: I0122 09:30:55.025096 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-e374-account-create-update-4m5cv"] Jan 22 09:30:55 crc kubenswrapper[4811]: I0122 09:30:55.033166 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-82d1-account-create-update-jkspk"] Jan 22 09:30:55 crc kubenswrapper[4811]: I0122 09:30:55.039860 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-891d-account-create-update-5vdr7"] Jan 22 09:30:55 crc kubenswrapper[4811]: I0122 09:30:55.047820 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-82d1-account-create-update-jkspk"] Jan 22 09:30:55 crc kubenswrapper[4811]: I0122 09:30:55.053960 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-891d-account-create-update-5vdr7"] Jan 22 09:30:55 crc kubenswrapper[4811]: I0122 09:30:55.059847 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-e374-account-create-update-4m5cv"] Jan 22 09:30:55 crc kubenswrapper[4811]: I0122 09:30:55.065734 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-jxrlt"] Jan 22 09:30:55 crc kubenswrapper[4811]: I0122 09:30:55.070445 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-jxrlt"] Jan 22 09:30:55 crc kubenswrapper[4811]: I0122 09:30:55.564234 4811 generic.go:334] "Generic (PLEG): container finished" podID="11821ee8-72c0-4167-b2fe-f1698439f08c" containerID="4dba93963e4469ccb86bebe8aedc506c0151a4344968f24119a271b9240ddafe" exitCode=0 Jan 22 09:30:55 crc kubenswrapper[4811]: I0122 09:30:55.564304 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k8gsd" event={"ID":"11821ee8-72c0-4167-b2fe-f1698439f08c","Type":"ContainerDied","Data":"4dba93963e4469ccb86bebe8aedc506c0151a4344968f24119a271b9240ddafe"} Jan 22 09:30:55 crc kubenswrapper[4811]: I0122 09:30:55.999064 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19d930c1-23b2-476a-b176-3ca4d8456549" path="/var/lib/kubelet/pods/19d930c1-23b2-476a-b176-3ca4d8456549/volumes" Jan 22 09:30:55 crc kubenswrapper[4811]: I0122 09:30:55.999773 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ea82e95-edcf-4f93-a288-8b4550842a28" path="/var/lib/kubelet/pods/1ea82e95-edcf-4f93-a288-8b4550842a28/volumes" Jan 22 09:30:56 crc kubenswrapper[4811]: I0122 09:30:56.000414 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="866090ed-1206-4cb4-9b12-2450964dc455" path="/var/lib/kubelet/pods/866090ed-1206-4cb4-9b12-2450964dc455/volumes" Jan 22 09:30:56 crc kubenswrapper[4811]: I0122 09:30:56.001385 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9736e7ec-af7c-4413-96bd-60e84d89fc5b" path="/var/lib/kubelet/pods/9736e7ec-af7c-4413-96bd-60e84d89fc5b/volumes" Jan 22 09:30:56 crc kubenswrapper[4811]: I0122 09:30:56.002742 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="986a4686-46f9-4a8c-9c37-0ffadad37084" path="/var/lib/kubelet/pods/986a4686-46f9-4a8c-9c37-0ffadad37084/volumes" Jan 22 09:30:56 crc kubenswrapper[4811]: I0122 09:30:56.859749 
4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k8gsd" Jan 22 09:30:56 crc kubenswrapper[4811]: I0122 09:30:56.929251 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlbw6\" (UniqueName: \"kubernetes.io/projected/11821ee8-72c0-4167-b2fe-f1698439f08c-kube-api-access-dlbw6\") pod \"11821ee8-72c0-4167-b2fe-f1698439f08c\" (UID: \"11821ee8-72c0-4167-b2fe-f1698439f08c\") " Jan 22 09:30:56 crc kubenswrapper[4811]: I0122 09:30:56.929318 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11821ee8-72c0-4167-b2fe-f1698439f08c-ssh-key-openstack-edpm-ipam\") pod \"11821ee8-72c0-4167-b2fe-f1698439f08c\" (UID: \"11821ee8-72c0-4167-b2fe-f1698439f08c\") " Jan 22 09:30:56 crc kubenswrapper[4811]: I0122 09:30:56.929347 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11821ee8-72c0-4167-b2fe-f1698439f08c-inventory\") pod \"11821ee8-72c0-4167-b2fe-f1698439f08c\" (UID: \"11821ee8-72c0-4167-b2fe-f1698439f08c\") " Jan 22 09:30:56 crc kubenswrapper[4811]: I0122 09:30:56.934256 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11821ee8-72c0-4167-b2fe-f1698439f08c-kube-api-access-dlbw6" (OuterVolumeSpecName: "kube-api-access-dlbw6") pod "11821ee8-72c0-4167-b2fe-f1698439f08c" (UID: "11821ee8-72c0-4167-b2fe-f1698439f08c"). InnerVolumeSpecName "kube-api-access-dlbw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:30:56 crc kubenswrapper[4811]: I0122 09:30:56.948329 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11821ee8-72c0-4167-b2fe-f1698439f08c-inventory" (OuterVolumeSpecName: "inventory") pod "11821ee8-72c0-4167-b2fe-f1698439f08c" (UID: "11821ee8-72c0-4167-b2fe-f1698439f08c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:30:56 crc kubenswrapper[4811]: I0122 09:30:56.949799 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11821ee8-72c0-4167-b2fe-f1698439f08c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "11821ee8-72c0-4167-b2fe-f1698439f08c" (UID: "11821ee8-72c0-4167-b2fe-f1698439f08c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:30:57 crc kubenswrapper[4811]: I0122 09:30:57.031758 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlbw6\" (UniqueName: \"kubernetes.io/projected/11821ee8-72c0-4167-b2fe-f1698439f08c-kube-api-access-dlbw6\") on node \"crc\" DevicePath \"\"" Jan 22 09:30:57 crc kubenswrapper[4811]: I0122 09:30:57.031782 4811 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11821ee8-72c0-4167-b2fe-f1698439f08c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 09:30:57 crc kubenswrapper[4811]: I0122 09:30:57.031797 4811 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11821ee8-72c0-4167-b2fe-f1698439f08c-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 09:30:57 crc kubenswrapper[4811]: I0122 09:30:57.578114 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k8gsd" event={"ID":"11821ee8-72c0-4167-b2fe-f1698439f08c","Type":"ContainerDied","Data":"1fa6b82720c173ea7f65181e02b57ba5c65f44c4411f1e80571411f466b15f5f"} Jan 22 09:30:57 crc kubenswrapper[4811]: I0122 09:30:57.578291 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fa6b82720c173ea7f65181e02b57ba5c65f44c4411f1e80571411f466b15f5f" Jan 22 09:30:57 crc kubenswrapper[4811]: I0122 09:30:57.578335 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k8gsd" Jan 22 09:30:57 crc kubenswrapper[4811]: I0122 09:30:57.634619 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-bp45n"] Jan 22 09:30:57 crc kubenswrapper[4811]: E0122 09:30:57.635006 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11821ee8-72c0-4167-b2fe-f1698439f08c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 22 09:30:57 crc kubenswrapper[4811]: I0122 09:30:57.635026 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="11821ee8-72c0-4167-b2fe-f1698439f08c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 22 09:30:57 crc kubenswrapper[4811]: I0122 09:30:57.635208 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="11821ee8-72c0-4167-b2fe-f1698439f08c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 22 09:30:57 crc kubenswrapper[4811]: I0122 09:30:57.635716 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-bp45n" Jan 22 09:30:57 crc kubenswrapper[4811]: I0122 09:30:57.637265 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc4bh\" (UniqueName: \"kubernetes.io/projected/fe7f0005-8be7-481d-8487-b076d9139612-kube-api-access-xc4bh\") pod \"ssh-known-hosts-edpm-deployment-bp45n\" (UID: \"fe7f0005-8be7-481d-8487-b076d9139612\") " pod="openstack/ssh-known-hosts-edpm-deployment-bp45n" Jan 22 09:30:57 crc kubenswrapper[4811]: I0122 09:30:57.637349 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fe7f0005-8be7-481d-8487-b076d9139612-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-bp45n\" (UID: \"fe7f0005-8be7-481d-8487-b076d9139612\") " pod="openstack/ssh-known-hosts-edpm-deployment-bp45n" Jan 22 09:30:57 crc kubenswrapper[4811]: I0122 09:30:57.637439 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/fe7f0005-8be7-481d-8487-b076d9139612-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-bp45n\" (UID: \"fe7f0005-8be7-481d-8487-b076d9139612\") " pod="openstack/ssh-known-hosts-edpm-deployment-bp45n" Jan 22 09:30:57 crc kubenswrapper[4811]: I0122 09:30:57.641156 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 09:30:57 crc kubenswrapper[4811]: I0122 09:30:57.641428 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 09:30:57 crc kubenswrapper[4811]: I0122 09:30:57.641599 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 09:30:57 crc kubenswrapper[4811]: I0122 09:30:57.641750 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7cbs6" Jan 22 09:30:57 crc kubenswrapper[4811]: I0122 09:30:57.651212 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-bp45n"] Jan 22 09:30:57 crc kubenswrapper[4811]: I0122 09:30:57.738746 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xc4bh\" (UniqueName: \"kubernetes.io/projected/fe7f0005-8be7-481d-8487-b076d9139612-kube-api-access-xc4bh\") pod \"ssh-known-hosts-edpm-deployment-bp45n\" (UID: \"fe7f0005-8be7-481d-8487-b076d9139612\") " pod="openstack/ssh-known-hosts-edpm-deployment-bp45n" Jan 22 09:30:57 crc kubenswrapper[4811]: I0122 09:30:57.738813 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fe7f0005-8be7-481d-8487-b076d9139612-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-bp45n\" (UID: \"fe7f0005-8be7-481d-8487-b076d9139612\") " pod="openstack/ssh-known-hosts-edpm-deployment-bp45n" Jan 22 09:30:57 crc kubenswrapper[4811]: I0122 09:30:57.738896 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/fe7f0005-8be7-481d-8487-b076d9139612-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-bp45n\" (UID: \"fe7f0005-8be7-481d-8487-b076d9139612\") " pod="openstack/ssh-known-hosts-edpm-deployment-bp45n" Jan 22 09:30:57 crc 
kubenswrapper[4811]: I0122 09:30:57.742993 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fe7f0005-8be7-481d-8487-b076d9139612-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-bp45n\" (UID: \"fe7f0005-8be7-481d-8487-b076d9139612\") " pod="openstack/ssh-known-hosts-edpm-deployment-bp45n" Jan 22 09:30:57 crc kubenswrapper[4811]: I0122 09:30:57.743616 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/fe7f0005-8be7-481d-8487-b076d9139612-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-bp45n\" (UID: \"fe7f0005-8be7-481d-8487-b076d9139612\") " pod="openstack/ssh-known-hosts-edpm-deployment-bp45n" Jan 22 09:30:57 crc kubenswrapper[4811]: I0122 09:30:57.753554 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xc4bh\" (UniqueName: \"kubernetes.io/projected/fe7f0005-8be7-481d-8487-b076d9139612-kube-api-access-xc4bh\") pod \"ssh-known-hosts-edpm-deployment-bp45n\" (UID: \"fe7f0005-8be7-481d-8487-b076d9139612\") " pod="openstack/ssh-known-hosts-edpm-deployment-bp45n" Jan 22 09:30:57 crc kubenswrapper[4811]: I0122 09:30:57.950157 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-bp45n" Jan 22 09:30:58 crc kubenswrapper[4811]: I0122 09:30:58.372262 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-bp45n"] Jan 22 09:30:58 crc kubenswrapper[4811]: I0122 09:30:58.586280 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-bp45n" event={"ID":"fe7f0005-8be7-481d-8487-b076d9139612","Type":"ContainerStarted","Data":"45429941ee769aa1096062b7de79d228cf547f510e639a4cd89bbf22daaba896"} Jan 22 09:30:58 crc kubenswrapper[4811]: I0122 09:30:58.640520 4811 scope.go:117] "RemoveContainer" containerID="d649498385b63b40e10a1edb943c872649ac07c51b8e6c31a10f533d417c210a" Jan 22 09:30:58 crc kubenswrapper[4811]: I0122 09:30:58.664851 4811 scope.go:117] "RemoveContainer" containerID="fad76c9fdddbce877ce0d9659a8357d8c1a7689680dae10bd9991c8174c30d83" Jan 22 09:30:58 crc kubenswrapper[4811]: I0122 09:30:58.685130 4811 scope.go:117] "RemoveContainer" containerID="b8c1fe97bacc8910b8c8f2a115bec91c76666f47da17c2871ccd063f3c054bf4" Jan 22 09:30:58 crc kubenswrapper[4811]: I0122 09:30:58.700118 4811 scope.go:117] "RemoveContainer" containerID="4be8abd3a99dce7c6c000fba5dc42ad42a131229c1bb98105d1f982c5c6a5b5d" Jan 22 09:30:58 crc kubenswrapper[4811]: I0122 09:30:58.724106 4811 scope.go:117] "RemoveContainer" containerID="343f989092dd1ddef9a87795d6d915e224696c34e8340c52cbae291c97aa7d65" Jan 22 09:30:58 crc kubenswrapper[4811]: I0122 09:30:58.739151 4811 scope.go:117] "RemoveContainer" containerID="0ee0b25a364d73fee0dce4d95835f46da987348ccac3ab643026320ed9fa7bf4" Jan 22 09:30:58 crc kubenswrapper[4811]: I0122 09:30:58.753783 4811 scope.go:117] "RemoveContainer" containerID="2b11417e3d1b04cb3b22462d21334268df956b88143f9954bde83dd399fa8c01" Jan 22 09:30:58 crc kubenswrapper[4811]: I0122 09:30:58.769090 4811 scope.go:117] "RemoveContainer" containerID="ac99e9408b664fb496d607294fa868381e7508096b2a6e4150bb53561788b8f3" Jan 22 09:30:58 crc kubenswrapper[4811]: I0122 09:30:58.783271 4811 scope.go:117] "RemoveContainer" containerID="3223af79e75985e05dd1108c36532623409fc29feadf9cc24b0da9b981084399" Jan 22 09:30:58 crc kubenswrapper[4811]: I0122 
09:30:58.796845 4811 scope.go:117] "RemoveContainer" containerID="3b72e5a01d881c149a6269ce25589b078a76fc12389e3e6c7e79fe4b114e34be" Jan 22 09:30:58 crc kubenswrapper[4811]: I0122 09:30:58.855551 4811 scope.go:117] "RemoveContainer" containerID="41f7ddb8612eced29a432d621cd3db2d536ae9efece2b4f681e720938d77e73a" Jan 22 09:30:58 crc kubenswrapper[4811]: I0122 09:30:58.877427 4811 scope.go:117] "RemoveContainer" containerID="030988d4601fac4c9219bec8ec55b84b6f13957d6e98de4ebc3ecf58ea829e6c" Jan 22 09:30:58 crc kubenswrapper[4811]: I0122 09:30:58.905567 4811 scope.go:117] "RemoveContainer" containerID="bf64d7494082d1084f4ec0c47e05cfb01ebf8bb907a356d093f4bdbadcb0e0d1" Jan 22 09:30:58 crc kubenswrapper[4811]: I0122 09:30:58.948385 4811 scope.go:117] "RemoveContainer" containerID="f689a28902f2e1a8a1e4e1df4b3892d6cd31ceb754785b27ddb58825203040a4" Jan 22 09:30:59 crc kubenswrapper[4811]: I0122 09:30:59.594795 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-bp45n" event={"ID":"fe7f0005-8be7-481d-8487-b076d9139612","Type":"ContainerStarted","Data":"bcae4ad0a2861a2ac5e02d6663d89635e8253d859dc6c15a918e9561e801389c"} Jan 22 09:30:59 crc kubenswrapper[4811]: I0122 09:30:59.614966 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-bp45n" podStartSLOduration=1.979148766 podStartE2EDuration="2.614951857s" podCreationTimestamp="2026-01-22 09:30:57 +0000 UTC" firstStartedPulling="2026-01-22 09:30:58.380448887 +0000 UTC m=+1502.702636010" lastFinishedPulling="2026-01-22 09:30:59.016251978 +0000 UTC m=+1503.338439101" observedRunningTime="2026-01-22 09:30:59.609202665 +0000 UTC m=+1503.931389788" watchObservedRunningTime="2026-01-22 09:30:59.614951857 +0000 UTC m=+1503.937138980" Jan 22 09:31:01 crc kubenswrapper[4811]: I0122 09:31:01.023583 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-2bx9h"] Jan 22 09:31:01 crc kubenswrapper[4811]: I0122 09:31:01.031301 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-2bx9h"] Jan 22 09:31:01 crc kubenswrapper[4811]: I0122 09:31:01.992246 4811 scope.go:117] "RemoveContainer" containerID="a7045e9e75753498e22be1f49f2ac6b7e1f1858c2287359ab0b186d3e061fbd1" Jan 22 09:31:01 crc kubenswrapper[4811]: E0122 09:31:01.992490 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:31:01 crc kubenswrapper[4811]: I0122 09:31:01.999789 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc9be5c8-f749-41fa-9c82-7378a3c84569" path="/var/lib/kubelet/pods/dc9be5c8-f749-41fa-9c82-7378a3c84569/volumes" Jan 22 09:31:04 crc kubenswrapper[4811]: I0122 09:31:04.628246 4811 generic.go:334] "Generic (PLEG): container finished" podID="fe7f0005-8be7-481d-8487-b076d9139612" containerID="bcae4ad0a2861a2ac5e02d6663d89635e8253d859dc6c15a918e9561e801389c" exitCode=0 Jan 22 09:31:04 crc kubenswrapper[4811]: I0122 09:31:04.628316 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-bp45n" 
event={"ID":"fe7f0005-8be7-481d-8487-b076d9139612","Type":"ContainerDied","Data":"bcae4ad0a2861a2ac5e02d6663d89635e8253d859dc6c15a918e9561e801389c"} Jan 22 09:31:05 crc kubenswrapper[4811]: I0122 09:31:05.951900 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-bp45n" Jan 22 09:31:06 crc kubenswrapper[4811]: I0122 09:31:06.090200 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fe7f0005-8be7-481d-8487-b076d9139612-ssh-key-openstack-edpm-ipam\") pod \"fe7f0005-8be7-481d-8487-b076d9139612\" (UID: \"fe7f0005-8be7-481d-8487-b076d9139612\") " Jan 22 09:31:06 crc kubenswrapper[4811]: I0122 09:31:06.090347 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/fe7f0005-8be7-481d-8487-b076d9139612-inventory-0\") pod \"fe7f0005-8be7-481d-8487-b076d9139612\" (UID: \"fe7f0005-8be7-481d-8487-b076d9139612\") " Jan 22 09:31:06 crc kubenswrapper[4811]: I0122 09:31:06.090373 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xc4bh\" (UniqueName: \"kubernetes.io/projected/fe7f0005-8be7-481d-8487-b076d9139612-kube-api-access-xc4bh\") pod \"fe7f0005-8be7-481d-8487-b076d9139612\" (UID: \"fe7f0005-8be7-481d-8487-b076d9139612\") " Jan 22 09:31:06 crc kubenswrapper[4811]: I0122 09:31:06.096383 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe7f0005-8be7-481d-8487-b076d9139612-kube-api-access-xc4bh" (OuterVolumeSpecName: "kube-api-access-xc4bh") pod "fe7f0005-8be7-481d-8487-b076d9139612" (UID: "fe7f0005-8be7-481d-8487-b076d9139612"). InnerVolumeSpecName "kube-api-access-xc4bh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:31:06 crc kubenswrapper[4811]: I0122 09:31:06.114601 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe7f0005-8be7-481d-8487-b076d9139612-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "fe7f0005-8be7-481d-8487-b076d9139612" (UID: "fe7f0005-8be7-481d-8487-b076d9139612"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:31:06 crc kubenswrapper[4811]: I0122 09:31:06.117734 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe7f0005-8be7-481d-8487-b076d9139612-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fe7f0005-8be7-481d-8487-b076d9139612" (UID: "fe7f0005-8be7-481d-8487-b076d9139612"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:31:06 crc kubenswrapper[4811]: I0122 09:31:06.191403 4811 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/fe7f0005-8be7-481d-8487-b076d9139612-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 22 09:31:06 crc kubenswrapper[4811]: I0122 09:31:06.191427 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xc4bh\" (UniqueName: \"kubernetes.io/projected/fe7f0005-8be7-481d-8487-b076d9139612-kube-api-access-xc4bh\") on node \"crc\" DevicePath \"\"" Jan 22 09:31:06 crc kubenswrapper[4811]: I0122 09:31:06.191439 4811 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fe7f0005-8be7-481d-8487-b076d9139612-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 09:31:06 crc kubenswrapper[4811]: I0122 09:31:06.669162 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-bp45n" event={"ID":"fe7f0005-8be7-481d-8487-b076d9139612","Type":"ContainerDied","Data":"45429941ee769aa1096062b7de79d228cf547f510e639a4cd89bbf22daaba896"} Jan 22 09:31:06 crc kubenswrapper[4811]: I0122 09:31:06.669215 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45429941ee769aa1096062b7de79d228cf547f510e639a4cd89bbf22daaba896" Jan 22 09:31:06 crc kubenswrapper[4811]: I0122 09:31:06.669428 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-bp45n" Jan 22 09:31:06 crc kubenswrapper[4811]: I0122 09:31:06.705973 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-z9qbq"] Jan 22 09:31:06 crc kubenswrapper[4811]: E0122 09:31:06.706502 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe7f0005-8be7-481d-8487-b076d9139612" containerName="ssh-known-hosts-edpm-deployment" Jan 22 09:31:06 crc kubenswrapper[4811]: I0122 09:31:06.706523 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe7f0005-8be7-481d-8487-b076d9139612" containerName="ssh-known-hosts-edpm-deployment" Jan 22 09:31:06 crc kubenswrapper[4811]: I0122 09:31:06.706717 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe7f0005-8be7-481d-8487-b076d9139612" containerName="ssh-known-hosts-edpm-deployment" Jan 22 09:31:06 crc kubenswrapper[4811]: I0122 09:31:06.709541 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z9qbq" Jan 22 09:31:06 crc kubenswrapper[4811]: I0122 09:31:06.712131 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 09:31:06 crc kubenswrapper[4811]: I0122 09:31:06.712220 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7cbs6" Jan 22 09:31:06 crc kubenswrapper[4811]: I0122 09:31:06.712273 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 09:31:06 crc kubenswrapper[4811]: I0122 09:31:06.714274 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 09:31:06 crc kubenswrapper[4811]: I0122 09:31:06.718775 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-z9qbq"] Jan 22 09:31:06 crc kubenswrapper[4811]: I0122 09:31:06.800450 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e74305de-0ced-49bc-8795-a211907fbb22-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-z9qbq\" (UID: \"e74305de-0ced-49bc-8795-a211907fbb22\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z9qbq" Jan 22 09:31:06 crc kubenswrapper[4811]: I0122 09:31:06.800791 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e74305de-0ced-49bc-8795-a211907fbb22-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-z9qbq\" (UID: \"e74305de-0ced-49bc-8795-a211907fbb22\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z9qbq" Jan 22 09:31:06 crc kubenswrapper[4811]: I0122 09:31:06.800981 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmhm2\" (UniqueName: \"kubernetes.io/projected/e74305de-0ced-49bc-8795-a211907fbb22-kube-api-access-mmhm2\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-z9qbq\" (UID: \"e74305de-0ced-49bc-8795-a211907fbb22\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z9qbq" Jan 22 09:31:06 crc kubenswrapper[4811]: I0122 09:31:06.903101 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e74305de-0ced-49bc-8795-a211907fbb22-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-z9qbq\" (UID: \"e74305de-0ced-49bc-8795-a211907fbb22\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z9qbq" Jan 22 09:31:06 crc kubenswrapper[4811]: I0122 09:31:06.903197 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e74305de-0ced-49bc-8795-a211907fbb22-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-z9qbq\" (UID: \"e74305de-0ced-49bc-8795-a211907fbb22\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z9qbq" Jan 22 09:31:06 crc kubenswrapper[4811]: I0122 09:31:06.903259 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmhm2\" (UniqueName: \"kubernetes.io/projected/e74305de-0ced-49bc-8795-a211907fbb22-kube-api-access-mmhm2\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-z9qbq\" (UID: \"e74305de-0ced-49bc-8795-a211907fbb22\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z9qbq" Jan 22 09:31:06 crc kubenswrapper[4811]: I0122 09:31:06.907099 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e74305de-0ced-49bc-8795-a211907fbb22-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-z9qbq\" (UID: \"e74305de-0ced-49bc-8795-a211907fbb22\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z9qbq" Jan 22 09:31:06 crc kubenswrapper[4811]: I0122 09:31:06.907418 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e74305de-0ced-49bc-8795-a211907fbb22-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-z9qbq\" (UID: \"e74305de-0ced-49bc-8795-a211907fbb22\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z9qbq" Jan 22 09:31:06 crc kubenswrapper[4811]: I0122 09:31:06.916590 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmhm2\" (UniqueName: \"kubernetes.io/projected/e74305de-0ced-49bc-8795-a211907fbb22-kube-api-access-mmhm2\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-z9qbq\" (UID: \"e74305de-0ced-49bc-8795-a211907fbb22\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z9qbq" Jan 22 09:31:07 crc kubenswrapper[4811]: I0122 09:31:07.025770 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z9qbq" Jan 22 09:31:07 crc kubenswrapper[4811]: I0122 09:31:07.504567 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-z9qbq"] Jan 22 09:31:07 crc kubenswrapper[4811]: I0122 09:31:07.677139 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z9qbq" event={"ID":"e74305de-0ced-49bc-8795-a211907fbb22","Type":"ContainerStarted","Data":"81f81e65f630b3c77707b03293ee15a1bd7eabd0419406d531f3590b5cb42269"} Jan 22 09:31:08 crc kubenswrapper[4811]: I0122 09:31:08.684694 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z9qbq" event={"ID":"e74305de-0ced-49bc-8795-a211907fbb22","Type":"ContainerStarted","Data":"16cfff4cb2a2d295c80d51700457b79fbb4e57e5c6f40283de21fea4fcdfbd98"} Jan 22 09:31:08 crc kubenswrapper[4811]: I0122 09:31:08.710871 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z9qbq" podStartSLOduration=2.169140469 podStartE2EDuration="2.710852008s" podCreationTimestamp="2026-01-22 09:31:06 +0000 UTC" firstStartedPulling="2026-01-22 09:31:07.507790612 +0000 UTC m=+1511.829977735" lastFinishedPulling="2026-01-22 09:31:08.049502151 +0000 UTC m=+1512.371689274" observedRunningTime="2026-01-22 09:31:08.704922205 +0000 UTC m=+1513.027109328" watchObservedRunningTime="2026-01-22 09:31:08.710852008 +0000 UTC m=+1513.033039131" Jan 22 09:31:14 crc kubenswrapper[4811]: I0122 09:31:14.732619 4811 generic.go:334] "Generic (PLEG): container finished" podID="e74305de-0ced-49bc-8795-a211907fbb22" containerID="16cfff4cb2a2d295c80d51700457b79fbb4e57e5c6f40283de21fea4fcdfbd98" exitCode=0 Jan 22 09:31:14 crc kubenswrapper[4811]: I0122 09:31:14.732729 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z9qbq" event={"ID":"e74305de-0ced-49bc-8795-a211907fbb22","Type":"ContainerDied","Data":"16cfff4cb2a2d295c80d51700457b79fbb4e57e5c6f40283de21fea4fcdfbd98"} Jan 22 09:31:14 crc kubenswrapper[4811]: I0122 09:31:14.992385 4811 scope.go:117] "RemoveContainer" containerID="a7045e9e75753498e22be1f49f2ac6b7e1f1858c2287359ab0b186d3e061fbd1" Jan 22 09:31:14 crc kubenswrapper[4811]: E0122 09:31:14.992709 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:31:16 crc kubenswrapper[4811]: I0122 09:31:16.064018 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z9qbq" Jan 22 09:31:16 crc kubenswrapper[4811]: I0122 09:31:16.086850 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmhm2\" (UniqueName: \"kubernetes.io/projected/e74305de-0ced-49bc-8795-a211907fbb22-kube-api-access-mmhm2\") pod \"e74305de-0ced-49bc-8795-a211907fbb22\" (UID: \"e74305de-0ced-49bc-8795-a211907fbb22\") " Jan 22 09:31:16 crc kubenswrapper[4811]: I0122 09:31:16.087111 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e74305de-0ced-49bc-8795-a211907fbb22-inventory\") pod \"e74305de-0ced-49bc-8795-a211907fbb22\" (UID: \"e74305de-0ced-49bc-8795-a211907fbb22\") " Jan 22 09:31:16 crc kubenswrapper[4811]: I0122 09:31:16.087170 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e74305de-0ced-49bc-8795-a211907fbb22-ssh-key-openstack-edpm-ipam\") pod \"e74305de-0ced-49bc-8795-a211907fbb22\" (UID: \"e74305de-0ced-49bc-8795-a211907fbb22\") " Jan 22 09:31:16 crc kubenswrapper[4811]: I0122 09:31:16.096768 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e74305de-0ced-49bc-8795-a211907fbb22-kube-api-access-mmhm2" (OuterVolumeSpecName: "kube-api-access-mmhm2") pod "e74305de-0ced-49bc-8795-a211907fbb22" (UID: "e74305de-0ced-49bc-8795-a211907fbb22"). InnerVolumeSpecName "kube-api-access-mmhm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:31:16 crc kubenswrapper[4811]: I0122 09:31:16.113385 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e74305de-0ced-49bc-8795-a211907fbb22-inventory" (OuterVolumeSpecName: "inventory") pod "e74305de-0ced-49bc-8795-a211907fbb22" (UID: "e74305de-0ced-49bc-8795-a211907fbb22"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:31:16 crc kubenswrapper[4811]: I0122 09:31:16.114946 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e74305de-0ced-49bc-8795-a211907fbb22-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e74305de-0ced-49bc-8795-a211907fbb22" (UID: "e74305de-0ced-49bc-8795-a211907fbb22"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:31:16 crc kubenswrapper[4811]: I0122 09:31:16.190251 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmhm2\" (UniqueName: \"kubernetes.io/projected/e74305de-0ced-49bc-8795-a211907fbb22-kube-api-access-mmhm2\") on node \"crc\" DevicePath \"\"" Jan 22 09:31:16 crc kubenswrapper[4811]: I0122 09:31:16.190293 4811 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e74305de-0ced-49bc-8795-a211907fbb22-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 09:31:16 crc kubenswrapper[4811]: I0122 09:31:16.190305 4811 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e74305de-0ced-49bc-8795-a211907fbb22-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 09:31:16 crc kubenswrapper[4811]: I0122 09:31:16.753797 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z9qbq" event={"ID":"e74305de-0ced-49bc-8795-a211907fbb22","Type":"ContainerDied","Data":"81f81e65f630b3c77707b03293ee15a1bd7eabd0419406d531f3590b5cb42269"} Jan 22 09:31:16 crc kubenswrapper[4811]: I0122 09:31:16.754308 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81f81e65f630b3c77707b03293ee15a1bd7eabd0419406d531f3590b5cb42269" Jan 22 09:31:16 crc kubenswrapper[4811]: I0122 09:31:16.753862 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-z9qbq" Jan 22 09:31:16 crc kubenswrapper[4811]: I0122 09:31:16.812653 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h6gdw"] Jan 22 09:31:16 crc kubenswrapper[4811]: E0122 09:31:16.813119 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e74305de-0ced-49bc-8795-a211907fbb22" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 22 09:31:16 crc kubenswrapper[4811]: I0122 09:31:16.813141 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e74305de-0ced-49bc-8795-a211907fbb22" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 22 09:31:16 crc kubenswrapper[4811]: I0122 09:31:16.813352 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="e74305de-0ced-49bc-8795-a211907fbb22" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 22 09:31:16 crc kubenswrapper[4811]: I0122 09:31:16.813967 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h6gdw" Jan 22 09:31:16 crc kubenswrapper[4811]: I0122 09:31:16.815867 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 09:31:16 crc kubenswrapper[4811]: I0122 09:31:16.816546 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7cbs6" Jan 22 09:31:16 crc kubenswrapper[4811]: I0122 09:31:16.817354 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 09:31:16 crc kubenswrapper[4811]: I0122 09:31:16.817814 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 09:31:16 crc kubenswrapper[4811]: I0122 09:31:16.832024 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h6gdw"] Jan 22 09:31:16 crc kubenswrapper[4811]: I0122 09:31:16.908578 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b2c2fede-3836-4600-a5d7-a5803623306e-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-h6gdw\" (UID: \"b2c2fede-3836-4600-a5d7-a5803623306e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h6gdw" Jan 22 09:31:16 crc kubenswrapper[4811]: I0122 09:31:16.908931 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2c2fede-3836-4600-a5d7-a5803623306e-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-h6gdw\" (UID: \"b2c2fede-3836-4600-a5d7-a5803623306e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h6gdw" Jan 22 09:31:16 crc kubenswrapper[4811]: I0122 09:31:16.909091 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bt6f\" (UniqueName: \"kubernetes.io/projected/b2c2fede-3836-4600-a5d7-a5803623306e-kube-api-access-7bt6f\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-h6gdw\" (UID: \"b2c2fede-3836-4600-a5d7-a5803623306e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h6gdw" Jan 22 09:31:17 crc kubenswrapper[4811]: I0122 09:31:17.010344 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2c2fede-3836-4600-a5d7-a5803623306e-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-h6gdw\" (UID: \"b2c2fede-3836-4600-a5d7-a5803623306e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h6gdw" Jan 22 09:31:17 crc kubenswrapper[4811]: I0122 09:31:17.010523 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bt6f\" (UniqueName: \"kubernetes.io/projected/b2c2fede-3836-4600-a5d7-a5803623306e-kube-api-access-7bt6f\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-h6gdw\" (UID: \"b2c2fede-3836-4600-a5d7-a5803623306e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h6gdw" Jan 22 09:31:17 crc kubenswrapper[4811]: I0122 09:31:17.010660 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b2c2fede-3836-4600-a5d7-a5803623306e-ssh-key-openstack-edpm-ipam\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-h6gdw\" (UID: \"b2c2fede-3836-4600-a5d7-a5803623306e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h6gdw" Jan 22 09:31:17 crc kubenswrapper[4811]: I0122 09:31:17.015689 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b2c2fede-3836-4600-a5d7-a5803623306e-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-h6gdw\" (UID: \"b2c2fede-3836-4600-a5d7-a5803623306e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h6gdw" Jan 22 09:31:17 crc kubenswrapper[4811]: I0122 09:31:17.015701 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2c2fede-3836-4600-a5d7-a5803623306e-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-h6gdw\" (UID: \"b2c2fede-3836-4600-a5d7-a5803623306e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h6gdw" Jan 22 09:31:17 crc kubenswrapper[4811]: I0122 09:31:17.024552 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bt6f\" (UniqueName: \"kubernetes.io/projected/b2c2fede-3836-4600-a5d7-a5803623306e-kube-api-access-7bt6f\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-h6gdw\" (UID: \"b2c2fede-3836-4600-a5d7-a5803623306e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h6gdw" Jan 22 09:31:17 crc kubenswrapper[4811]: I0122 09:31:17.138209 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h6gdw" Jan 22 09:31:17 crc kubenswrapper[4811]: I0122 09:31:17.601985 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h6gdw"] Jan 22 09:31:17 crc kubenswrapper[4811]: I0122 09:31:17.762718 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h6gdw" event={"ID":"b2c2fede-3836-4600-a5d7-a5803623306e","Type":"ContainerStarted","Data":"a7239d1f513df0c12741ff58125e62c0697bc0fa4f484978e001f5fb3d5be5aa"} Jan 22 09:31:18 crc kubenswrapper[4811]: I0122 09:31:18.771590 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h6gdw" event={"ID":"b2c2fede-3836-4600-a5d7-a5803623306e","Type":"ContainerStarted","Data":"21ce652018d671fb882fb3de6ad53f96a278beac2dcab2764c914cf18674df22"} Jan 22 09:31:18 crc kubenswrapper[4811]: I0122 09:31:18.795159 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h6gdw" podStartSLOduration=2.2772043650000002 podStartE2EDuration="2.795145086s" podCreationTimestamp="2026-01-22 09:31:16 +0000 UTC" firstStartedPulling="2026-01-22 09:31:17.608832956 +0000 UTC m=+1521.931020079" lastFinishedPulling="2026-01-22 09:31:18.126773678 +0000 UTC m=+1522.448960800" observedRunningTime="2026-01-22 09:31:18.794142506 +0000 UTC m=+1523.116329628" watchObservedRunningTime="2026-01-22 09:31:18.795145086 +0000 UTC m=+1523.117332199" Jan 22 09:31:22 crc kubenswrapper[4811]: I0122 09:31:22.034175 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-bbl7b"] Jan 22 09:31:22 crc kubenswrapper[4811]: I0122 09:31:22.040111 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-bbl7b"] Jan 22 09:31:23 crc 
kubenswrapper[4811]: I0122 09:31:23.999581 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0847c2e9-9761-4a9a-96fa-a216884fc3dc" path="/var/lib/kubelet/pods/0847c2e9-9761-4a9a-96fa-a216884fc3dc/volumes" Jan 22 09:31:25 crc kubenswrapper[4811]: I0122 09:31:25.822072 4811 generic.go:334] "Generic (PLEG): container finished" podID="b2c2fede-3836-4600-a5d7-a5803623306e" containerID="21ce652018d671fb882fb3de6ad53f96a278beac2dcab2764c914cf18674df22" exitCode=0 Jan 22 09:31:25 crc kubenswrapper[4811]: I0122 09:31:25.822148 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h6gdw" event={"ID":"b2c2fede-3836-4600-a5d7-a5803623306e","Type":"ContainerDied","Data":"21ce652018d671fb882fb3de6ad53f96a278beac2dcab2764c914cf18674df22"} Jan 22 09:31:26 crc kubenswrapper[4811]: I0122 09:31:26.992443 4811 scope.go:117] "RemoveContainer" containerID="a7045e9e75753498e22be1f49f2ac6b7e1f1858c2287359ab0b186d3e061fbd1" Jan 22 09:31:26 crc kubenswrapper[4811]: E0122 09:31:26.992946 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:31:27 crc kubenswrapper[4811]: I0122 09:31:27.169219 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h6gdw" Jan 22 09:31:27 crc kubenswrapper[4811]: I0122 09:31:27.303018 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bt6f\" (UniqueName: \"kubernetes.io/projected/b2c2fede-3836-4600-a5d7-a5803623306e-kube-api-access-7bt6f\") pod \"b2c2fede-3836-4600-a5d7-a5803623306e\" (UID: \"b2c2fede-3836-4600-a5d7-a5803623306e\") " Jan 22 09:31:27 crc kubenswrapper[4811]: I0122 09:31:27.303212 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b2c2fede-3836-4600-a5d7-a5803623306e-ssh-key-openstack-edpm-ipam\") pod \"b2c2fede-3836-4600-a5d7-a5803623306e\" (UID: \"b2c2fede-3836-4600-a5d7-a5803623306e\") " Jan 22 09:31:27 crc kubenswrapper[4811]: I0122 09:31:27.303401 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2c2fede-3836-4600-a5d7-a5803623306e-inventory\") pod \"b2c2fede-3836-4600-a5d7-a5803623306e\" (UID: \"b2c2fede-3836-4600-a5d7-a5803623306e\") " Jan 22 09:31:27 crc kubenswrapper[4811]: I0122 09:31:27.309334 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2c2fede-3836-4600-a5d7-a5803623306e-kube-api-access-7bt6f" (OuterVolumeSpecName: "kube-api-access-7bt6f") pod "b2c2fede-3836-4600-a5d7-a5803623306e" (UID: "b2c2fede-3836-4600-a5d7-a5803623306e"). InnerVolumeSpecName "kube-api-access-7bt6f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:31:27 crc kubenswrapper[4811]: I0122 09:31:27.324522 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2c2fede-3836-4600-a5d7-a5803623306e-inventory" (OuterVolumeSpecName: "inventory") pod "b2c2fede-3836-4600-a5d7-a5803623306e" (UID: "b2c2fede-3836-4600-a5d7-a5803623306e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:31:27 crc kubenswrapper[4811]: I0122 09:31:27.327823 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2c2fede-3836-4600-a5d7-a5803623306e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b2c2fede-3836-4600-a5d7-a5803623306e" (UID: "b2c2fede-3836-4600-a5d7-a5803623306e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:31:27 crc kubenswrapper[4811]: I0122 09:31:27.407186 4811 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2c2fede-3836-4600-a5d7-a5803623306e-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 09:31:27 crc kubenswrapper[4811]: I0122 09:31:27.407716 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bt6f\" (UniqueName: \"kubernetes.io/projected/b2c2fede-3836-4600-a5d7-a5803623306e-kube-api-access-7bt6f\") on node \"crc\" DevicePath \"\"" Jan 22 09:31:27 crc kubenswrapper[4811]: I0122 09:31:27.407789 4811 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b2c2fede-3836-4600-a5d7-a5803623306e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 09:31:27 crc kubenswrapper[4811]: I0122 09:31:27.841055 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h6gdw" event={"ID":"b2c2fede-3836-4600-a5d7-a5803623306e","Type":"ContainerDied","Data":"a7239d1f513df0c12741ff58125e62c0697bc0fa4f484978e001f5fb3d5be5aa"} Jan 22 09:31:27 crc kubenswrapper[4811]: I0122 09:31:27.841097 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7239d1f513df0c12741ff58125e62c0697bc0fa4f484978e001f5fb3d5be5aa" Jan 22 09:31:27 crc kubenswrapper[4811]: I0122 09:31:27.841159 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h6gdw" Jan 22 09:31:27 crc kubenswrapper[4811]: E0122 09:31:27.956357 4811 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2c2fede_3836_4600_a5d7_a5803623306e.slice\": RecentStats: unable to find data in memory cache]" Jan 22 09:31:28 crc kubenswrapper[4811]: I0122 09:31:28.026634 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-zmjzd"] Jan 22 09:31:28 crc kubenswrapper[4811]: I0122 09:31:28.033471 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-zmjzd"] Jan 22 09:31:29 crc kubenswrapper[4811]: I0122 09:31:29.999575 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e22a065b-3b3c-41a9-ad35-b1c1e594af9b" path="/var/lib/kubelet/pods/e22a065b-3b3c-41a9-ad35-b1c1e594af9b/volumes" Jan 22 09:31:30 crc kubenswrapper[4811]: I0122 09:31:30.026904 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-zztph"] Jan 22 09:31:30 crc kubenswrapper[4811]: I0122 09:31:30.035694 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-zt2qg"] Jan 22 09:31:30 crc kubenswrapper[4811]: I0122 09:31:30.044993 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-zztph"] Jan 22 09:31:30 crc kubenswrapper[4811]: I0122 09:31:30.051899 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-zt2qg"] Jan 22 09:31:32 crc kubenswrapper[4811]: I0122 09:31:32.000420 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14a1e4fa-5a60-47c5-a2da-e57110ca0b57" path="/var/lib/kubelet/pods/14a1e4fa-5a60-47c5-a2da-e57110ca0b57/volumes" Jan 22 09:31:32 crc kubenswrapper[4811]: I0122 09:31:32.001590 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cd5889f-63d3-47a0-8b17-e8ffac0011d3" path="/var/lib/kubelet/pods/1cd5889f-63d3-47a0-8b17-e8ffac0011d3/volumes" Jan 22 09:31:36 crc kubenswrapper[4811]: I0122 09:31:36.034037 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-rnmr4"] Jan 22 09:31:36 crc kubenswrapper[4811]: I0122 09:31:36.041132 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-rnmr4"] Jan 22 09:31:37 crc kubenswrapper[4811]: I0122 09:31:37.991748 4811 scope.go:117] "RemoveContainer" containerID="a7045e9e75753498e22be1f49f2ac6b7e1f1858c2287359ab0b186d3e061fbd1" Jan 22 09:31:37 crc kubenswrapper[4811]: E0122 09:31:37.992500 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:31:37 crc kubenswrapper[4811]: I0122 09:31:37.999263 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b642751-b1e4-4488-b305-fed7f4fcd9fa" path="/var/lib/kubelet/pods/7b642751-b1e4-4488-b305-fed7f4fcd9fa/volumes" Jan 22 09:31:52 crc kubenswrapper[4811]: I0122 09:31:52.991707 4811 scope.go:117] "RemoveContainer" containerID="a7045e9e75753498e22be1f49f2ac6b7e1f1858c2287359ab0b186d3e061fbd1" Jan 
22 09:31:52 crc kubenswrapper[4811]: E0122 09:31:52.992331 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:31:59 crc kubenswrapper[4811]: I0122 09:31:59.141040 4811 scope.go:117] "RemoveContainer" containerID="65a67a4c6384c24670b32b62b88f9d49ab432208740d4873bc567b7af7673bae" Jan 22 09:31:59 crc kubenswrapper[4811]: I0122 09:31:59.162937 4811 scope.go:117] "RemoveContainer" containerID="7eb1d67d8f5fb006e9f4ca0882d1403a9fb92920be629aa2e6f7628cf0d5107e" Jan 22 09:31:59 crc kubenswrapper[4811]: I0122 09:31:59.195731 4811 scope.go:117] "RemoveContainer" containerID="ce01439c4f3c6a661151f644ce887beeb972d579495b1e7c28a458e92c77e19c" Jan 22 09:31:59 crc kubenswrapper[4811]: I0122 09:31:59.222237 4811 scope.go:117] "RemoveContainer" containerID="355d945e984c35fa416eaab7c76e47e82435510a56e52acccadfc3695b1fe0a3" Jan 22 09:31:59 crc kubenswrapper[4811]: I0122 09:31:59.257650 4811 scope.go:117] "RemoveContainer" containerID="726e9b017aa22b710235e164587d5c31326c132ae65bebec4355aa245026cab8" Jan 22 09:31:59 crc kubenswrapper[4811]: I0122 09:31:59.290741 4811 scope.go:117] "RemoveContainer" containerID="ad14d3f990ab1077c0a5c3d8ce964721a1571b7e78fb9cc881b77a70d8d71444" Jan 22 09:32:07 crc kubenswrapper[4811]: I0122 09:32:07.992726 4811 scope.go:117] "RemoveContainer" containerID="a7045e9e75753498e22be1f49f2ac6b7e1f1858c2287359ab0b186d3e061fbd1" Jan 22 09:32:07 crc kubenswrapper[4811]: E0122 09:32:07.993237 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:32:18 crc kubenswrapper[4811]: I0122 09:32:18.992540 4811 scope.go:117] "RemoveContainer" containerID="a7045e9e75753498e22be1f49f2ac6b7e1f1858c2287359ab0b186d3e061fbd1" Jan 22 09:32:18 crc kubenswrapper[4811]: E0122 09:32:18.994088 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:32:24 crc kubenswrapper[4811]: I0122 09:32:24.029842 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-bzk5b"] Jan 22 09:32:24 crc kubenswrapper[4811]: I0122 09:32:24.042835 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-bzk5b"] Jan 22 09:32:25 crc kubenswrapper[4811]: I0122 09:32:25.031353 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-2sgkl"] Jan 22 09:32:25 crc kubenswrapper[4811]: I0122 09:32:25.037352 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-zxgc6"] Jan 
22 09:32:25 crc kubenswrapper[4811]: I0122 09:32:25.043109 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-7b3d-account-create-update-jq477"] Jan 22 09:32:25 crc kubenswrapper[4811]: I0122 09:32:25.050507 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-65d2-account-create-update-sq4sn"] Jan 22 09:32:25 crc kubenswrapper[4811]: I0122 09:32:25.055824 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-6d99-account-create-update-rwg8r"] Jan 22 09:32:25 crc kubenswrapper[4811]: I0122 09:32:25.060734 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-2sgkl"] Jan 22 09:32:25 crc kubenswrapper[4811]: I0122 09:32:25.065309 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-zxgc6"] Jan 22 09:32:25 crc kubenswrapper[4811]: I0122 09:32:25.069718 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-6d99-account-create-update-rwg8r"] Jan 22 09:32:25 crc kubenswrapper[4811]: I0122 09:32:25.074141 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-65d2-account-create-update-sq4sn"] Jan 22 09:32:25 crc kubenswrapper[4811]: I0122 09:32:25.078434 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-7b3d-account-create-update-jq477"] Jan 22 09:32:26 crc kubenswrapper[4811]: I0122 09:32:26.000159 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="433d526d-dde0-4815-9863-d934d1a30739" path="/var/lib/kubelet/pods/433d526d-dde0-4815-9863-d934d1a30739/volumes" Jan 22 09:32:26 crc kubenswrapper[4811]: I0122 09:32:26.000743 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64241d93-e4db-4880-a25b-2c68cacb0f5c" path="/var/lib/kubelet/pods/64241d93-e4db-4880-a25b-2c68cacb0f5c/volumes" Jan 22 09:32:26 crc kubenswrapper[4811]: I0122 09:32:26.001269 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93312640-d70b-4075-af45-5bf9e6625c73" path="/var/lib/kubelet/pods/93312640-d70b-4075-af45-5bf9e6625c73/volumes" Jan 22 09:32:26 crc kubenswrapper[4811]: I0122 09:32:26.001765 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f255ce7a-ccc6-41ac-901d-92554247b909" path="/var/lib/kubelet/pods/f255ce7a-ccc6-41ac-901d-92554247b909/volumes" Jan 22 09:32:26 crc kubenswrapper[4811]: I0122 09:32:26.002650 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f33f4ea9-6343-42e6-8666-6e34e9926dd9" path="/var/lib/kubelet/pods/f33f4ea9-6343-42e6-8666-6e34e9926dd9/volumes" Jan 22 09:32:26 crc kubenswrapper[4811]: I0122 09:32:26.003113 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff4e524c-e9f6-483d-b8ae-d8abf0a7bf21" path="/var/lib/kubelet/pods/ff4e524c-e9f6-483d-b8ae-d8abf0a7bf21/volumes" Jan 22 09:32:33 crc kubenswrapper[4811]: I0122 09:32:33.992528 4811 scope.go:117] "RemoveContainer" containerID="a7045e9e75753498e22be1f49f2ac6b7e1f1858c2287359ab0b186d3e061fbd1" Jan 22 09:32:33 crc kubenswrapper[4811]: E0122 09:32:33.993061 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" 
podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:32:38 crc kubenswrapper[4811]: I0122 09:32:38.433399 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7jhtz"] Jan 22 09:32:38 crc kubenswrapper[4811]: E0122 09:32:38.434037 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2c2fede-3836-4600-a5d7-a5803623306e" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 22 09:32:38 crc kubenswrapper[4811]: I0122 09:32:38.434060 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2c2fede-3836-4600-a5d7-a5803623306e" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 22 09:32:38 crc kubenswrapper[4811]: I0122 09:32:38.434220 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2c2fede-3836-4600-a5d7-a5803623306e" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 22 09:32:38 crc kubenswrapper[4811]: I0122 09:32:38.435279 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7jhtz" Jan 22 09:32:38 crc kubenswrapper[4811]: I0122 09:32:38.447212 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zllnk\" (UniqueName: \"kubernetes.io/projected/4141ab4d-9179-415d-b20c-95e4de24dd85-kube-api-access-zllnk\") pod \"redhat-operators-7jhtz\" (UID: \"4141ab4d-9179-415d-b20c-95e4de24dd85\") " pod="openshift-marketplace/redhat-operators-7jhtz" Jan 22 09:32:38 crc kubenswrapper[4811]: I0122 09:32:38.447413 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4141ab4d-9179-415d-b20c-95e4de24dd85-utilities\") pod \"redhat-operators-7jhtz\" (UID: \"4141ab4d-9179-415d-b20c-95e4de24dd85\") " pod="openshift-marketplace/redhat-operators-7jhtz" Jan 22 09:32:38 crc kubenswrapper[4811]: I0122 09:32:38.447459 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4141ab4d-9179-415d-b20c-95e4de24dd85-catalog-content\") pod \"redhat-operators-7jhtz\" (UID: \"4141ab4d-9179-415d-b20c-95e4de24dd85\") " pod="openshift-marketplace/redhat-operators-7jhtz" Jan 22 09:32:38 crc kubenswrapper[4811]: I0122 09:32:38.449185 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7jhtz"] Jan 22 09:32:38 crc kubenswrapper[4811]: I0122 09:32:38.548517 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zllnk\" (UniqueName: \"kubernetes.io/projected/4141ab4d-9179-415d-b20c-95e4de24dd85-kube-api-access-zllnk\") pod \"redhat-operators-7jhtz\" (UID: \"4141ab4d-9179-415d-b20c-95e4de24dd85\") " pod="openshift-marketplace/redhat-operators-7jhtz" Jan 22 09:32:38 crc kubenswrapper[4811]: I0122 09:32:38.548844 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4141ab4d-9179-415d-b20c-95e4de24dd85-utilities\") pod \"redhat-operators-7jhtz\" (UID: \"4141ab4d-9179-415d-b20c-95e4de24dd85\") " pod="openshift-marketplace/redhat-operators-7jhtz" Jan 22 09:32:38 crc kubenswrapper[4811]: I0122 09:32:38.548965 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4141ab4d-9179-415d-b20c-95e4de24dd85-catalog-content\") pod 
\"redhat-operators-7jhtz\" (UID: \"4141ab4d-9179-415d-b20c-95e4de24dd85\") " pod="openshift-marketplace/redhat-operators-7jhtz" Jan 22 09:32:38 crc kubenswrapper[4811]: I0122 09:32:38.549255 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4141ab4d-9179-415d-b20c-95e4de24dd85-utilities\") pod \"redhat-operators-7jhtz\" (UID: \"4141ab4d-9179-415d-b20c-95e4de24dd85\") " pod="openshift-marketplace/redhat-operators-7jhtz" Jan 22 09:32:38 crc kubenswrapper[4811]: I0122 09:32:38.549293 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4141ab4d-9179-415d-b20c-95e4de24dd85-catalog-content\") pod \"redhat-operators-7jhtz\" (UID: \"4141ab4d-9179-415d-b20c-95e4de24dd85\") " pod="openshift-marketplace/redhat-operators-7jhtz" Jan 22 09:32:38 crc kubenswrapper[4811]: I0122 09:32:38.566083 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zllnk\" (UniqueName: \"kubernetes.io/projected/4141ab4d-9179-415d-b20c-95e4de24dd85-kube-api-access-zllnk\") pod \"redhat-operators-7jhtz\" (UID: \"4141ab4d-9179-415d-b20c-95e4de24dd85\") " pod="openshift-marketplace/redhat-operators-7jhtz" Jan 22 09:32:38 crc kubenswrapper[4811]: I0122 09:32:38.749481 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7jhtz" Jan 22 09:32:39 crc kubenswrapper[4811]: I0122 09:32:39.146614 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7jhtz"] Jan 22 09:32:39 crc kubenswrapper[4811]: W0122 09:32:39.148376 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4141ab4d_9179_415d_b20c_95e4de24dd85.slice/crio-996f5d88bb1ee92811e4a7f7656f036f2e48e3b8157350cf7e17e470b829a71c WatchSource:0}: Error finding container 996f5d88bb1ee92811e4a7f7656f036f2e48e3b8157350cf7e17e470b829a71c: Status 404 returned error can't find the container with id 996f5d88bb1ee92811e4a7f7656f036f2e48e3b8157350cf7e17e470b829a71c Jan 22 09:32:39 crc kubenswrapper[4811]: I0122 09:32:39.291278 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7jhtz" event={"ID":"4141ab4d-9179-415d-b20c-95e4de24dd85","Type":"ContainerStarted","Data":"996f5d88bb1ee92811e4a7f7656f036f2e48e3b8157350cf7e17e470b829a71c"} Jan 22 09:32:40 crc kubenswrapper[4811]: I0122 09:32:40.299413 4811 generic.go:334] "Generic (PLEG): container finished" podID="4141ab4d-9179-415d-b20c-95e4de24dd85" containerID="ccd2ee0d8f12f3cb0c807243de5dd477ad0a9d7a25d46340e9b4df7ea9aa4b5c" exitCode=0 Jan 22 09:32:40 crc kubenswrapper[4811]: I0122 09:32:40.299509 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7jhtz" event={"ID":"4141ab4d-9179-415d-b20c-95e4de24dd85","Type":"ContainerDied","Data":"ccd2ee0d8f12f3cb0c807243de5dd477ad0a9d7a25d46340e9b4df7ea9aa4b5c"} Jan 22 09:32:41 crc kubenswrapper[4811]: I0122 09:32:41.308033 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7jhtz" event={"ID":"4141ab4d-9179-415d-b20c-95e4de24dd85","Type":"ContainerStarted","Data":"e1368f524cfb40013888da0c78ed7b28bd493c9440b0c7ca5f27b132b2742831"} Jan 22 09:32:44 crc kubenswrapper[4811]: I0122 09:32:44.326755 4811 generic.go:334] "Generic (PLEG): container finished" 
podID="4141ab4d-9179-415d-b20c-95e4de24dd85" containerID="e1368f524cfb40013888da0c78ed7b28bd493c9440b0c7ca5f27b132b2742831" exitCode=0 Jan 22 09:32:44 crc kubenswrapper[4811]: I0122 09:32:44.326825 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7jhtz" event={"ID":"4141ab4d-9179-415d-b20c-95e4de24dd85","Type":"ContainerDied","Data":"e1368f524cfb40013888da0c78ed7b28bd493c9440b0c7ca5f27b132b2742831"} Jan 22 09:32:44 crc kubenswrapper[4811]: I0122 09:32:44.992043 4811 scope.go:117] "RemoveContainer" containerID="a7045e9e75753498e22be1f49f2ac6b7e1f1858c2287359ab0b186d3e061fbd1" Jan 22 09:32:44 crc kubenswrapper[4811]: E0122 09:32:44.992559 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:32:45 crc kubenswrapper[4811]: I0122 09:32:45.334748 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7jhtz" event={"ID":"4141ab4d-9179-415d-b20c-95e4de24dd85","Type":"ContainerStarted","Data":"225bfd215e858a0aeaec026735fac6057a66098b59e10d9419ddf4c67b740001"} Jan 22 09:32:45 crc kubenswrapper[4811]: I0122 09:32:45.351583 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7jhtz" podStartSLOduration=2.794978453 podStartE2EDuration="7.351569213s" podCreationTimestamp="2026-01-22 09:32:38 +0000 UTC" firstStartedPulling="2026-01-22 09:32:40.300905101 +0000 UTC m=+1604.623092224" lastFinishedPulling="2026-01-22 09:32:44.857495861 +0000 UTC m=+1609.179682984" observedRunningTime="2026-01-22 09:32:45.346000722 +0000 UTC m=+1609.668187845" watchObservedRunningTime="2026-01-22 09:32:45.351569213 +0000 UTC m=+1609.673756335" Jan 22 09:32:48 crc kubenswrapper[4811]: I0122 09:32:48.034098 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4v42r"] Jan 22 09:32:48 crc kubenswrapper[4811]: I0122 09:32:48.041317 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4v42r"] Jan 22 09:32:48 crc kubenswrapper[4811]: I0122 09:32:48.750252 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7jhtz" Jan 22 09:32:48 crc kubenswrapper[4811]: I0122 09:32:48.750304 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7jhtz" Jan 22 09:32:49 crc kubenswrapper[4811]: I0122 09:32:49.780506 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7jhtz" podUID="4141ab4d-9179-415d-b20c-95e4de24dd85" containerName="registry-server" probeResult="failure" output=< Jan 22 09:32:49 crc kubenswrapper[4811]: timeout: failed to connect service ":50051" within 1s Jan 22 09:32:49 crc kubenswrapper[4811]: > Jan 22 09:32:49 crc kubenswrapper[4811]: I0122 09:32:49.999000 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67f94f00-3db6-4cd9-a4e4-5c466abb76c5" path="/var/lib/kubelet/pods/67f94f00-3db6-4cd9-a4e4-5c466abb76c5/volumes" Jan 22 09:32:58 crc kubenswrapper[4811]: I0122 09:32:58.780350 4811 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7jhtz" Jan 22 09:32:58 crc kubenswrapper[4811]: I0122 09:32:58.813509 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7jhtz" Jan 22 09:32:59 crc kubenswrapper[4811]: I0122 09:32:59.006489 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7jhtz"] Jan 22 09:32:59 crc kubenswrapper[4811]: I0122 09:32:59.396158 4811 scope.go:117] "RemoveContainer" containerID="1e97e319c5598a605a81c259d39596181433fbd0299c635029cdb0b2737b162d" Jan 22 09:32:59 crc kubenswrapper[4811]: I0122 09:32:59.425379 4811 scope.go:117] "RemoveContainer" containerID="da609af4ccb38238d155ee06b057e9c863a95f9fa6cba50923c1a5783c7e6cea" Jan 22 09:32:59 crc kubenswrapper[4811]: I0122 09:32:59.442252 4811 scope.go:117] "RemoveContainer" containerID="d11d0a4ad66ab56b3148ac99e51066b6826d4e4fcf9b984d48dcff5d5f0c572b" Jan 22 09:32:59 crc kubenswrapper[4811]: I0122 09:32:59.470648 4811 scope.go:117] "RemoveContainer" containerID="928ec6ee32df2dcc225afd92c486bbeb6f2818bcd79cbfeab616c5b1518ccd9d" Jan 22 09:32:59 crc kubenswrapper[4811]: I0122 09:32:59.512474 4811 scope.go:117] "RemoveContainer" containerID="17d599313a9544d613306230c92244873cca42fc11d1f673aaa176c5284b9387" Jan 22 09:32:59 crc kubenswrapper[4811]: I0122 09:32:59.528419 4811 scope.go:117] "RemoveContainer" containerID="92920163f2f0459ac3ad2fb37d7c79c9692c5ff530f60f2da244826a41ee33d7" Jan 22 09:32:59 crc kubenswrapper[4811]: I0122 09:32:59.557254 4811 scope.go:117] "RemoveContainer" containerID="d5df28b27fb6d0fc5527a4fb36e96463df8a165d16cf3b63d79db681fba8d3a6" Jan 22 09:32:59 crc kubenswrapper[4811]: I0122 09:32:59.991794 4811 scope.go:117] "RemoveContainer" containerID="a7045e9e75753498e22be1f49f2ac6b7e1f1858c2287359ab0b186d3e061fbd1" Jan 22 09:32:59 crc kubenswrapper[4811]: E0122 09:32:59.992039 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:33:00 crc kubenswrapper[4811]: I0122 09:33:00.426112 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7jhtz" podUID="4141ab4d-9179-415d-b20c-95e4de24dd85" containerName="registry-server" containerID="cri-o://225bfd215e858a0aeaec026735fac6057a66098b59e10d9419ddf4c67b740001" gracePeriod=2 Jan 22 09:33:00 crc kubenswrapper[4811]: I0122 09:33:00.820784 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7jhtz" Jan 22 09:33:00 crc kubenswrapper[4811]: I0122 09:33:00.975214 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4141ab4d-9179-415d-b20c-95e4de24dd85-utilities\") pod \"4141ab4d-9179-415d-b20c-95e4de24dd85\" (UID: \"4141ab4d-9179-415d-b20c-95e4de24dd85\") " Jan 22 09:33:00 crc kubenswrapper[4811]: I0122 09:33:00.975264 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zllnk\" (UniqueName: \"kubernetes.io/projected/4141ab4d-9179-415d-b20c-95e4de24dd85-kube-api-access-zllnk\") pod \"4141ab4d-9179-415d-b20c-95e4de24dd85\" (UID: \"4141ab4d-9179-415d-b20c-95e4de24dd85\") " Jan 22 09:33:00 crc kubenswrapper[4811]: I0122 09:33:00.975283 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4141ab4d-9179-415d-b20c-95e4de24dd85-catalog-content\") pod \"4141ab4d-9179-415d-b20c-95e4de24dd85\" (UID: \"4141ab4d-9179-415d-b20c-95e4de24dd85\") " Jan 22 09:33:00 crc kubenswrapper[4811]: I0122 09:33:00.976112 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4141ab4d-9179-415d-b20c-95e4de24dd85-utilities" (OuterVolumeSpecName: "utilities") pod "4141ab4d-9179-415d-b20c-95e4de24dd85" (UID: "4141ab4d-9179-415d-b20c-95e4de24dd85"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:33:00 crc kubenswrapper[4811]: I0122 09:33:00.976774 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4141ab4d-9179-415d-b20c-95e4de24dd85-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:33:00 crc kubenswrapper[4811]: I0122 09:33:00.980009 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4141ab4d-9179-415d-b20c-95e4de24dd85-kube-api-access-zllnk" (OuterVolumeSpecName: "kube-api-access-zllnk") pod "4141ab4d-9179-415d-b20c-95e4de24dd85" (UID: "4141ab4d-9179-415d-b20c-95e4de24dd85"). InnerVolumeSpecName "kube-api-access-zllnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:33:01 crc kubenswrapper[4811]: I0122 09:33:01.060297 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4141ab4d-9179-415d-b20c-95e4de24dd85-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4141ab4d-9179-415d-b20c-95e4de24dd85" (UID: "4141ab4d-9179-415d-b20c-95e4de24dd85"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:33:01 crc kubenswrapper[4811]: I0122 09:33:01.079181 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zllnk\" (UniqueName: \"kubernetes.io/projected/4141ab4d-9179-415d-b20c-95e4de24dd85-kube-api-access-zllnk\") on node \"crc\" DevicePath \"\"" Jan 22 09:33:01 crc kubenswrapper[4811]: I0122 09:33:01.079216 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4141ab4d-9179-415d-b20c-95e4de24dd85-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:33:01 crc kubenswrapper[4811]: I0122 09:33:01.433447 4811 generic.go:334] "Generic (PLEG): container finished" podID="4141ab4d-9179-415d-b20c-95e4de24dd85" containerID="225bfd215e858a0aeaec026735fac6057a66098b59e10d9419ddf4c67b740001" exitCode=0 Jan 22 09:33:01 crc kubenswrapper[4811]: I0122 09:33:01.433487 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7jhtz" event={"ID":"4141ab4d-9179-415d-b20c-95e4de24dd85","Type":"ContainerDied","Data":"225bfd215e858a0aeaec026735fac6057a66098b59e10d9419ddf4c67b740001"} Jan 22 09:33:01 crc kubenswrapper[4811]: I0122 09:33:01.433511 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7jhtz" event={"ID":"4141ab4d-9179-415d-b20c-95e4de24dd85","Type":"ContainerDied","Data":"996f5d88bb1ee92811e4a7f7656f036f2e48e3b8157350cf7e17e470b829a71c"} Jan 22 09:33:01 crc kubenswrapper[4811]: I0122 09:33:01.433508 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7jhtz" Jan 22 09:33:01 crc kubenswrapper[4811]: I0122 09:33:01.433524 4811 scope.go:117] "RemoveContainer" containerID="225bfd215e858a0aeaec026735fac6057a66098b59e10d9419ddf4c67b740001" Jan 22 09:33:01 crc kubenswrapper[4811]: I0122 09:33:01.446656 4811 scope.go:117] "RemoveContainer" containerID="e1368f524cfb40013888da0c78ed7b28bd493c9440b0c7ca5f27b132b2742831" Jan 22 09:33:01 crc kubenswrapper[4811]: I0122 09:33:01.456760 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7jhtz"] Jan 22 09:33:01 crc kubenswrapper[4811]: I0122 09:33:01.462452 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7jhtz"] Jan 22 09:33:01 crc kubenswrapper[4811]: I0122 09:33:01.475175 4811 scope.go:117] "RemoveContainer" containerID="ccd2ee0d8f12f3cb0c807243de5dd477ad0a9d7a25d46340e9b4df7ea9aa4b5c" Jan 22 09:33:01 crc kubenswrapper[4811]: I0122 09:33:01.496731 4811 scope.go:117] "RemoveContainer" containerID="225bfd215e858a0aeaec026735fac6057a66098b59e10d9419ddf4c67b740001" Jan 22 09:33:01 crc kubenswrapper[4811]: E0122 09:33:01.497245 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"225bfd215e858a0aeaec026735fac6057a66098b59e10d9419ddf4c67b740001\": container with ID starting with 225bfd215e858a0aeaec026735fac6057a66098b59e10d9419ddf4c67b740001 not found: ID does not exist" containerID="225bfd215e858a0aeaec026735fac6057a66098b59e10d9419ddf4c67b740001" Jan 22 09:33:01 crc kubenswrapper[4811]: I0122 09:33:01.497286 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"225bfd215e858a0aeaec026735fac6057a66098b59e10d9419ddf4c67b740001"} err="failed to get container status \"225bfd215e858a0aeaec026735fac6057a66098b59e10d9419ddf4c67b740001\": 
rpc error: code = NotFound desc = could not find container \"225bfd215e858a0aeaec026735fac6057a66098b59e10d9419ddf4c67b740001\": container with ID starting with 225bfd215e858a0aeaec026735fac6057a66098b59e10d9419ddf4c67b740001 not found: ID does not exist" Jan 22 09:33:01 crc kubenswrapper[4811]: I0122 09:33:01.497327 4811 scope.go:117] "RemoveContainer" containerID="e1368f524cfb40013888da0c78ed7b28bd493c9440b0c7ca5f27b132b2742831" Jan 22 09:33:01 crc kubenswrapper[4811]: E0122 09:33:01.497904 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1368f524cfb40013888da0c78ed7b28bd493c9440b0c7ca5f27b132b2742831\": container with ID starting with e1368f524cfb40013888da0c78ed7b28bd493c9440b0c7ca5f27b132b2742831 not found: ID does not exist" containerID="e1368f524cfb40013888da0c78ed7b28bd493c9440b0c7ca5f27b132b2742831" Jan 22 09:33:01 crc kubenswrapper[4811]: I0122 09:33:01.497943 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1368f524cfb40013888da0c78ed7b28bd493c9440b0c7ca5f27b132b2742831"} err="failed to get container status \"e1368f524cfb40013888da0c78ed7b28bd493c9440b0c7ca5f27b132b2742831\": rpc error: code = NotFound desc = could not find container \"e1368f524cfb40013888da0c78ed7b28bd493c9440b0c7ca5f27b132b2742831\": container with ID starting with e1368f524cfb40013888da0c78ed7b28bd493c9440b0c7ca5f27b132b2742831 not found: ID does not exist" Jan 22 09:33:01 crc kubenswrapper[4811]: I0122 09:33:01.497959 4811 scope.go:117] "RemoveContainer" containerID="ccd2ee0d8f12f3cb0c807243de5dd477ad0a9d7a25d46340e9b4df7ea9aa4b5c" Jan 22 09:33:01 crc kubenswrapper[4811]: E0122 09:33:01.498243 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccd2ee0d8f12f3cb0c807243de5dd477ad0a9d7a25d46340e9b4df7ea9aa4b5c\": container with ID starting with ccd2ee0d8f12f3cb0c807243de5dd477ad0a9d7a25d46340e9b4df7ea9aa4b5c not found: ID does not exist" containerID="ccd2ee0d8f12f3cb0c807243de5dd477ad0a9d7a25d46340e9b4df7ea9aa4b5c" Jan 22 09:33:01 crc kubenswrapper[4811]: I0122 09:33:01.498265 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccd2ee0d8f12f3cb0c807243de5dd477ad0a9d7a25d46340e9b4df7ea9aa4b5c"} err="failed to get container status \"ccd2ee0d8f12f3cb0c807243de5dd477ad0a9d7a25d46340e9b4df7ea9aa4b5c\": rpc error: code = NotFound desc = could not find container \"ccd2ee0d8f12f3cb0c807243de5dd477ad0a9d7a25d46340e9b4df7ea9aa4b5c\": container with ID starting with ccd2ee0d8f12f3cb0c807243de5dd477ad0a9d7a25d46340e9b4df7ea9aa4b5c not found: ID does not exist" Jan 22 09:33:02 crc kubenswrapper[4811]: I0122 09:33:02.000422 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4141ab4d-9179-415d-b20c-95e4de24dd85" path="/var/lib/kubelet/pods/4141ab4d-9179-415d-b20c-95e4de24dd85/volumes" Jan 22 09:33:08 crc kubenswrapper[4811]: I0122 09:33:08.029160 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zb8nm"] Jan 22 09:33:08 crc kubenswrapper[4811]: I0122 09:33:08.035697 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zb8nm"] Jan 22 09:33:09 crc kubenswrapper[4811]: I0122 09:33:09.022071 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-fpztn"] Jan 22 09:33:09 crc kubenswrapper[4811]: I0122 09:33:09.028219 4811 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-fpztn"] Jan 22 09:33:10 crc kubenswrapper[4811]: I0122 09:33:10.000012 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad" path="/var/lib/kubelet/pods/3ed0ce90-7d83-46a0-9ed5-25f37c5e3dad/volumes" Jan 22 09:33:10 crc kubenswrapper[4811]: I0122 09:33:10.000701 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60d099f9-c44f-4d8c-9983-d478c424eff9" path="/var/lib/kubelet/pods/60d099f9-c44f-4d8c-9983-d478c424eff9/volumes" Jan 22 09:33:10 crc kubenswrapper[4811]: I0122 09:33:10.992236 4811 scope.go:117] "RemoveContainer" containerID="a7045e9e75753498e22be1f49f2ac6b7e1f1858c2287359ab0b186d3e061fbd1" Jan 22 09:33:10 crc kubenswrapper[4811]: E0122 09:33:10.992664 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:33:23 crc kubenswrapper[4811]: I0122 09:33:23.992464 4811 scope.go:117] "RemoveContainer" containerID="a7045e9e75753498e22be1f49f2ac6b7e1f1858c2287359ab0b186d3e061fbd1" Jan 22 09:33:23 crc kubenswrapper[4811]: E0122 09:33:23.993151 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:33:34 crc kubenswrapper[4811]: I0122 09:33:34.992385 4811 scope.go:117] "RemoveContainer" containerID="a7045e9e75753498e22be1f49f2ac6b7e1f1858c2287359ab0b186d3e061fbd1" Jan 22 09:33:34 crc kubenswrapper[4811]: E0122 09:33:34.993411 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:33:45 crc kubenswrapper[4811]: I0122 09:33:45.995658 4811 scope.go:117] "RemoveContainer" containerID="a7045e9e75753498e22be1f49f2ac6b7e1f1858c2287359ab0b186d3e061fbd1" Jan 22 09:33:45 crc kubenswrapper[4811]: E0122 09:33:45.996235 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:33:54 crc kubenswrapper[4811]: I0122 09:33:54.032367 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-8ttct"] Jan 22 09:33:54 crc kubenswrapper[4811]: I0122 09:33:54.040999 4811 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/nova-cell1-cell-mapping-8ttct"] Jan 22 09:33:56 crc kubenswrapper[4811]: I0122 09:33:56.010782 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b83447d0-bf76-4337-b4c8-26bb45903b5c" path="/var/lib/kubelet/pods/b83447d0-bf76-4337-b4c8-26bb45903b5c/volumes" Jan 22 09:33:57 crc kubenswrapper[4811]: I0122 09:33:57.992536 4811 scope.go:117] "RemoveContainer" containerID="a7045e9e75753498e22be1f49f2ac6b7e1f1858c2287359ab0b186d3e061fbd1" Jan 22 09:33:57 crc kubenswrapper[4811]: E0122 09:33:57.992935 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:33:59 crc kubenswrapper[4811]: I0122 09:33:59.646130 4811 scope.go:117] "RemoveContainer" containerID="c6900d820993227d70c3c1a43bc81f14c62580d39e2e95ec8433d4296f21dc79" Jan 22 09:33:59 crc kubenswrapper[4811]: I0122 09:33:59.673582 4811 scope.go:117] "RemoveContainer" containerID="2430565ced8c6cab1a1add153fcaef06d155dda63af82c956a753cc878f67596" Jan 22 09:33:59 crc kubenswrapper[4811]: I0122 09:33:59.706287 4811 scope.go:117] "RemoveContainer" containerID="5625955d1ac26393323331bb2f36e93a748a895ca69e7672bf21f4397522774f" Jan 22 09:34:08 crc kubenswrapper[4811]: I0122 09:34:08.265713 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9fb69"] Jan 22 09:34:08 crc kubenswrapper[4811]: E0122 09:34:08.266334 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4141ab4d-9179-415d-b20c-95e4de24dd85" containerName="extract-utilities" Jan 22 09:34:08 crc kubenswrapper[4811]: I0122 09:34:08.266348 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="4141ab4d-9179-415d-b20c-95e4de24dd85" containerName="extract-utilities" Jan 22 09:34:08 crc kubenswrapper[4811]: E0122 09:34:08.266363 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4141ab4d-9179-415d-b20c-95e4de24dd85" containerName="extract-content" Jan 22 09:34:08 crc kubenswrapper[4811]: I0122 09:34:08.266369 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="4141ab4d-9179-415d-b20c-95e4de24dd85" containerName="extract-content" Jan 22 09:34:08 crc kubenswrapper[4811]: E0122 09:34:08.266380 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4141ab4d-9179-415d-b20c-95e4de24dd85" containerName="registry-server" Jan 22 09:34:08 crc kubenswrapper[4811]: I0122 09:34:08.266385 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="4141ab4d-9179-415d-b20c-95e4de24dd85" containerName="registry-server" Jan 22 09:34:08 crc kubenswrapper[4811]: I0122 09:34:08.266571 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="4141ab4d-9179-415d-b20c-95e4de24dd85" containerName="registry-server" Jan 22 09:34:08 crc kubenswrapper[4811]: I0122 09:34:08.267657 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9fb69" Jan 22 09:34:08 crc kubenswrapper[4811]: I0122 09:34:08.274415 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9fb69"] Jan 22 09:34:08 crc kubenswrapper[4811]: I0122 09:34:08.408261 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2856q\" (UniqueName: \"kubernetes.io/projected/33c2ecaa-9e34-4f53-a09b-1567f6817a84-kube-api-access-2856q\") pod \"community-operators-9fb69\" (UID: \"33c2ecaa-9e34-4f53-a09b-1567f6817a84\") " pod="openshift-marketplace/community-operators-9fb69" Jan 22 09:34:08 crc kubenswrapper[4811]: I0122 09:34:08.408499 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33c2ecaa-9e34-4f53-a09b-1567f6817a84-catalog-content\") pod \"community-operators-9fb69\" (UID: \"33c2ecaa-9e34-4f53-a09b-1567f6817a84\") " pod="openshift-marketplace/community-operators-9fb69" Jan 22 09:34:08 crc kubenswrapper[4811]: I0122 09:34:08.408564 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33c2ecaa-9e34-4f53-a09b-1567f6817a84-utilities\") pod \"community-operators-9fb69\" (UID: \"33c2ecaa-9e34-4f53-a09b-1567f6817a84\") " pod="openshift-marketplace/community-operators-9fb69" Jan 22 09:34:08 crc kubenswrapper[4811]: I0122 09:34:08.510358 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2856q\" (UniqueName: \"kubernetes.io/projected/33c2ecaa-9e34-4f53-a09b-1567f6817a84-kube-api-access-2856q\") pod \"community-operators-9fb69\" (UID: \"33c2ecaa-9e34-4f53-a09b-1567f6817a84\") " pod="openshift-marketplace/community-operators-9fb69" Jan 22 09:34:08 crc kubenswrapper[4811]: I0122 09:34:08.510430 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33c2ecaa-9e34-4f53-a09b-1567f6817a84-catalog-content\") pod \"community-operators-9fb69\" (UID: \"33c2ecaa-9e34-4f53-a09b-1567f6817a84\") " pod="openshift-marketplace/community-operators-9fb69" Jan 22 09:34:08 crc kubenswrapper[4811]: I0122 09:34:08.510499 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33c2ecaa-9e34-4f53-a09b-1567f6817a84-utilities\") pod \"community-operators-9fb69\" (UID: \"33c2ecaa-9e34-4f53-a09b-1567f6817a84\") " pod="openshift-marketplace/community-operators-9fb69" Jan 22 09:34:08 crc kubenswrapper[4811]: I0122 09:34:08.510963 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33c2ecaa-9e34-4f53-a09b-1567f6817a84-utilities\") pod \"community-operators-9fb69\" (UID: \"33c2ecaa-9e34-4f53-a09b-1567f6817a84\") " pod="openshift-marketplace/community-operators-9fb69" Jan 22 09:34:08 crc kubenswrapper[4811]: I0122 09:34:08.511011 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33c2ecaa-9e34-4f53-a09b-1567f6817a84-catalog-content\") pod \"community-operators-9fb69\" (UID: \"33c2ecaa-9e34-4f53-a09b-1567f6817a84\") " pod="openshift-marketplace/community-operators-9fb69" Jan 22 09:34:08 crc kubenswrapper[4811]: I0122 09:34:08.532270 4811 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2856q\" (UniqueName: \"kubernetes.io/projected/33c2ecaa-9e34-4f53-a09b-1567f6817a84-kube-api-access-2856q\") pod \"community-operators-9fb69\" (UID: \"33c2ecaa-9e34-4f53-a09b-1567f6817a84\") " pod="openshift-marketplace/community-operators-9fb69" Jan 22 09:34:08 crc kubenswrapper[4811]: I0122 09:34:08.580660 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9fb69" Jan 22 09:34:08 crc kubenswrapper[4811]: I0122 09:34:08.992753 4811 scope.go:117] "RemoveContainer" containerID="a7045e9e75753498e22be1f49f2ac6b7e1f1858c2287359ab0b186d3e061fbd1" Jan 22 09:34:08 crc kubenswrapper[4811]: E0122 09:34:08.993094 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:34:09 crc kubenswrapper[4811]: I0122 09:34:09.050276 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9fb69"] Jan 22 09:34:09 crc kubenswrapper[4811]: I0122 09:34:09.857032 4811 generic.go:334] "Generic (PLEG): container finished" podID="33c2ecaa-9e34-4f53-a09b-1567f6817a84" containerID="d7cd455d10b6d341bd7c4e37e63a1bcd6147fbb4b1a3613e57b587156c782929" exitCode=0 Jan 22 09:34:09 crc kubenswrapper[4811]: I0122 09:34:09.857134 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9fb69" event={"ID":"33c2ecaa-9e34-4f53-a09b-1567f6817a84","Type":"ContainerDied","Data":"d7cd455d10b6d341bd7c4e37e63a1bcd6147fbb4b1a3613e57b587156c782929"} Jan 22 09:34:09 crc kubenswrapper[4811]: I0122 09:34:09.857336 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9fb69" event={"ID":"33c2ecaa-9e34-4f53-a09b-1567f6817a84","Type":"ContainerStarted","Data":"a74ec48bc0b7536a9c94303e612716068c5d9ef99cb19108b679ebf725598fa3"} Jan 22 09:34:10 crc kubenswrapper[4811]: I0122 09:34:10.864412 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9fb69" event={"ID":"33c2ecaa-9e34-4f53-a09b-1567f6817a84","Type":"ContainerStarted","Data":"5f6fd38b30c85392c571b5a960b7cbdf3f8300f28799e7a1f41f3f238eca3f90"} Jan 22 09:34:11 crc kubenswrapper[4811]: I0122 09:34:11.877267 4811 generic.go:334] "Generic (PLEG): container finished" podID="33c2ecaa-9e34-4f53-a09b-1567f6817a84" containerID="5f6fd38b30c85392c571b5a960b7cbdf3f8300f28799e7a1f41f3f238eca3f90" exitCode=0 Jan 22 09:34:11 crc kubenswrapper[4811]: I0122 09:34:11.877383 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9fb69" event={"ID":"33c2ecaa-9e34-4f53-a09b-1567f6817a84","Type":"ContainerDied","Data":"5f6fd38b30c85392c571b5a960b7cbdf3f8300f28799e7a1f41f3f238eca3f90"} Jan 22 09:34:12 crc kubenswrapper[4811]: I0122 09:34:12.885730 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9fb69" event={"ID":"33c2ecaa-9e34-4f53-a09b-1567f6817a84","Type":"ContainerStarted","Data":"4f0eb6f4b83c5c131dd37a4375cf1273bd3680dd76dc366e4898dce3bb8f5d35"} Jan 22 09:34:12 crc kubenswrapper[4811]: I0122 09:34:12.901944 4811 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9fb69" podStartSLOduration=2.406490255 podStartE2EDuration="4.901929599s" podCreationTimestamp="2026-01-22 09:34:08 +0000 UTC" firstStartedPulling="2026-01-22 09:34:09.85835639 +0000 UTC m=+1694.180543513" lastFinishedPulling="2026-01-22 09:34:12.353795733 +0000 UTC m=+1696.675982857" observedRunningTime="2026-01-22 09:34:12.899085685 +0000 UTC m=+1697.221272808" watchObservedRunningTime="2026-01-22 09:34:12.901929599 +0000 UTC m=+1697.224116721" Jan 22 09:34:18 crc kubenswrapper[4811]: I0122 09:34:18.582146 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9fb69" Jan 22 09:34:18 crc kubenswrapper[4811]: I0122 09:34:18.582496 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9fb69" Jan 22 09:34:18 crc kubenswrapper[4811]: I0122 09:34:18.612164 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9fb69" Jan 22 09:34:18 crc kubenswrapper[4811]: I0122 09:34:18.952361 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9fb69" Jan 22 09:34:18 crc kubenswrapper[4811]: I0122 09:34:18.993076 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9fb69"] Jan 22 09:34:20 crc kubenswrapper[4811]: I0122 09:34:20.932905 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9fb69" podUID="33c2ecaa-9e34-4f53-a09b-1567f6817a84" containerName="registry-server" containerID="cri-o://4f0eb6f4b83c5c131dd37a4375cf1273bd3680dd76dc366e4898dce3bb8f5d35" gracePeriod=2 Jan 22 09:34:20 crc kubenswrapper[4811]: I0122 09:34:20.992368 4811 scope.go:117] "RemoveContainer" containerID="a7045e9e75753498e22be1f49f2ac6b7e1f1858c2287359ab0b186d3e061fbd1" Jan 22 09:34:20 crc kubenswrapper[4811]: E0122 09:34:20.993158 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:34:21 crc kubenswrapper[4811]: I0122 09:34:21.278961 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9fb69" Jan 22 09:34:21 crc kubenswrapper[4811]: I0122 09:34:21.302012 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33c2ecaa-9e34-4f53-a09b-1567f6817a84-catalog-content\") pod \"33c2ecaa-9e34-4f53-a09b-1567f6817a84\" (UID: \"33c2ecaa-9e34-4f53-a09b-1567f6817a84\") " Jan 22 09:34:21 crc kubenswrapper[4811]: I0122 09:34:21.302135 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2856q\" (UniqueName: \"kubernetes.io/projected/33c2ecaa-9e34-4f53-a09b-1567f6817a84-kube-api-access-2856q\") pod \"33c2ecaa-9e34-4f53-a09b-1567f6817a84\" (UID: \"33c2ecaa-9e34-4f53-a09b-1567f6817a84\") " Jan 22 09:34:21 crc kubenswrapper[4811]: I0122 09:34:21.302169 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33c2ecaa-9e34-4f53-a09b-1567f6817a84-utilities\") pod \"33c2ecaa-9e34-4f53-a09b-1567f6817a84\" (UID: \"33c2ecaa-9e34-4f53-a09b-1567f6817a84\") " Jan 22 09:34:21 crc kubenswrapper[4811]: I0122 09:34:21.303220 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33c2ecaa-9e34-4f53-a09b-1567f6817a84-utilities" (OuterVolumeSpecName: "utilities") pod "33c2ecaa-9e34-4f53-a09b-1567f6817a84" (UID: "33c2ecaa-9e34-4f53-a09b-1567f6817a84"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:34:21 crc kubenswrapper[4811]: I0122 09:34:21.307037 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33c2ecaa-9e34-4f53-a09b-1567f6817a84-kube-api-access-2856q" (OuterVolumeSpecName: "kube-api-access-2856q") pod "33c2ecaa-9e34-4f53-a09b-1567f6817a84" (UID: "33c2ecaa-9e34-4f53-a09b-1567f6817a84"). InnerVolumeSpecName "kube-api-access-2856q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:34:21 crc kubenswrapper[4811]: I0122 09:34:21.346101 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33c2ecaa-9e34-4f53-a09b-1567f6817a84-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "33c2ecaa-9e34-4f53-a09b-1567f6817a84" (UID: "33c2ecaa-9e34-4f53-a09b-1567f6817a84"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:34:21 crc kubenswrapper[4811]: I0122 09:34:21.403805 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33c2ecaa-9e34-4f53-a09b-1567f6817a84-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:34:21 crc kubenswrapper[4811]: I0122 09:34:21.403834 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2856q\" (UniqueName: \"kubernetes.io/projected/33c2ecaa-9e34-4f53-a09b-1567f6817a84-kube-api-access-2856q\") on node \"crc\" DevicePath \"\"" Jan 22 09:34:21 crc kubenswrapper[4811]: I0122 09:34:21.403846 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33c2ecaa-9e34-4f53-a09b-1567f6817a84-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:34:21 crc kubenswrapper[4811]: I0122 09:34:21.940382 4811 generic.go:334] "Generic (PLEG): container finished" podID="33c2ecaa-9e34-4f53-a09b-1567f6817a84" containerID="4f0eb6f4b83c5c131dd37a4375cf1273bd3680dd76dc366e4898dce3bb8f5d35" exitCode=0 Jan 22 09:34:21 crc kubenswrapper[4811]: I0122 09:34:21.940441 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9fb69" Jan 22 09:34:21 crc kubenswrapper[4811]: I0122 09:34:21.940459 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9fb69" event={"ID":"33c2ecaa-9e34-4f53-a09b-1567f6817a84","Type":"ContainerDied","Data":"4f0eb6f4b83c5c131dd37a4375cf1273bd3680dd76dc366e4898dce3bb8f5d35"} Jan 22 09:34:21 crc kubenswrapper[4811]: I0122 09:34:21.941545 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9fb69" event={"ID":"33c2ecaa-9e34-4f53-a09b-1567f6817a84","Type":"ContainerDied","Data":"a74ec48bc0b7536a9c94303e612716068c5d9ef99cb19108b679ebf725598fa3"} Jan 22 09:34:21 crc kubenswrapper[4811]: I0122 09:34:21.941848 4811 scope.go:117] "RemoveContainer" containerID="4f0eb6f4b83c5c131dd37a4375cf1273bd3680dd76dc366e4898dce3bb8f5d35" Jan 22 09:34:21 crc kubenswrapper[4811]: I0122 09:34:21.956801 4811 scope.go:117] "RemoveContainer" containerID="5f6fd38b30c85392c571b5a960b7cbdf3f8300f28799e7a1f41f3f238eca3f90" Jan 22 09:34:21 crc kubenswrapper[4811]: I0122 09:34:21.970260 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9fb69"] Jan 22 09:34:21 crc kubenswrapper[4811]: I0122 09:34:21.980594 4811 scope.go:117] "RemoveContainer" containerID="d7cd455d10b6d341bd7c4e37e63a1bcd6147fbb4b1a3613e57b587156c782929" Jan 22 09:34:21 crc kubenswrapper[4811]: I0122 09:34:21.982876 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9fb69"] Jan 22 09:34:22 crc kubenswrapper[4811]: I0122 09:34:22.002224 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33c2ecaa-9e34-4f53-a09b-1567f6817a84" path="/var/lib/kubelet/pods/33c2ecaa-9e34-4f53-a09b-1567f6817a84/volumes" Jan 22 09:34:22 crc kubenswrapper[4811]: I0122 09:34:22.002669 4811 scope.go:117] "RemoveContainer" containerID="4f0eb6f4b83c5c131dd37a4375cf1273bd3680dd76dc366e4898dce3bb8f5d35" Jan 22 09:34:22 crc kubenswrapper[4811]: E0122 09:34:22.003077 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f0eb6f4b83c5c131dd37a4375cf1273bd3680dd76dc366e4898dce3bb8f5d35\": container with ID 
starting with 4f0eb6f4b83c5c131dd37a4375cf1273bd3680dd76dc366e4898dce3bb8f5d35 not found: ID does not exist" containerID="4f0eb6f4b83c5c131dd37a4375cf1273bd3680dd76dc366e4898dce3bb8f5d35" Jan 22 09:34:22 crc kubenswrapper[4811]: I0122 09:34:22.003108 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f0eb6f4b83c5c131dd37a4375cf1273bd3680dd76dc366e4898dce3bb8f5d35"} err="failed to get container status \"4f0eb6f4b83c5c131dd37a4375cf1273bd3680dd76dc366e4898dce3bb8f5d35\": rpc error: code = NotFound desc = could not find container \"4f0eb6f4b83c5c131dd37a4375cf1273bd3680dd76dc366e4898dce3bb8f5d35\": container with ID starting with 4f0eb6f4b83c5c131dd37a4375cf1273bd3680dd76dc366e4898dce3bb8f5d35 not found: ID does not exist" Jan 22 09:34:22 crc kubenswrapper[4811]: I0122 09:34:22.003131 4811 scope.go:117] "RemoveContainer" containerID="5f6fd38b30c85392c571b5a960b7cbdf3f8300f28799e7a1f41f3f238eca3f90" Jan 22 09:34:22 crc kubenswrapper[4811]: E0122 09:34:22.003505 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f6fd38b30c85392c571b5a960b7cbdf3f8300f28799e7a1f41f3f238eca3f90\": container with ID starting with 5f6fd38b30c85392c571b5a960b7cbdf3f8300f28799e7a1f41f3f238eca3f90 not found: ID does not exist" containerID="5f6fd38b30c85392c571b5a960b7cbdf3f8300f28799e7a1f41f3f238eca3f90" Jan 22 09:34:22 crc kubenswrapper[4811]: I0122 09:34:22.003538 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f6fd38b30c85392c571b5a960b7cbdf3f8300f28799e7a1f41f3f238eca3f90"} err="failed to get container status \"5f6fd38b30c85392c571b5a960b7cbdf3f8300f28799e7a1f41f3f238eca3f90\": rpc error: code = NotFound desc = could not find container \"5f6fd38b30c85392c571b5a960b7cbdf3f8300f28799e7a1f41f3f238eca3f90\": container with ID starting with 5f6fd38b30c85392c571b5a960b7cbdf3f8300f28799e7a1f41f3f238eca3f90 not found: ID does not exist" Jan 22 09:34:22 crc kubenswrapper[4811]: I0122 09:34:22.003563 4811 scope.go:117] "RemoveContainer" containerID="d7cd455d10b6d341bd7c4e37e63a1bcd6147fbb4b1a3613e57b587156c782929" Jan 22 09:34:22 crc kubenswrapper[4811]: E0122 09:34:22.003832 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7cd455d10b6d341bd7c4e37e63a1bcd6147fbb4b1a3613e57b587156c782929\": container with ID starting with d7cd455d10b6d341bd7c4e37e63a1bcd6147fbb4b1a3613e57b587156c782929 not found: ID does not exist" containerID="d7cd455d10b6d341bd7c4e37e63a1bcd6147fbb4b1a3613e57b587156c782929" Jan 22 09:34:22 crc kubenswrapper[4811]: I0122 09:34:22.003851 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7cd455d10b6d341bd7c4e37e63a1bcd6147fbb4b1a3613e57b587156c782929"} err="failed to get container status \"d7cd455d10b6d341bd7c4e37e63a1bcd6147fbb4b1a3613e57b587156c782929\": rpc error: code = NotFound desc = could not find container \"d7cd455d10b6d341bd7c4e37e63a1bcd6147fbb4b1a3613e57b587156c782929\": container with ID starting with d7cd455d10b6d341bd7c4e37e63a1bcd6147fbb4b1a3613e57b587156c782929 not found: ID does not exist" Jan 22 09:34:31 crc kubenswrapper[4811]: I0122 09:34:31.991729 4811 scope.go:117] "RemoveContainer" containerID="a7045e9e75753498e22be1f49f2ac6b7e1f1858c2287359ab0b186d3e061fbd1" Jan 22 09:34:31 crc kubenswrapper[4811]: E0122 09:34:31.992331 4811 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:34:43 crc kubenswrapper[4811]: I0122 09:34:43.991752 4811 scope.go:117] "RemoveContainer" containerID="a7045e9e75753498e22be1f49f2ac6b7e1f1858c2287359ab0b186d3e061fbd1" Jan 22 09:34:43 crc kubenswrapper[4811]: E0122 09:34:43.992284 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:34:57 crc kubenswrapper[4811]: I0122 09:34:57.991566 4811 scope.go:117] "RemoveContainer" containerID="a7045e9e75753498e22be1f49f2ac6b7e1f1858c2287359ab0b186d3e061fbd1" Jan 22 09:34:57 crc kubenswrapper[4811]: E0122 09:34:57.992097 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:35:12 crc kubenswrapper[4811]: I0122 09:35:12.991955 4811 scope.go:117] "RemoveContainer" containerID="a7045e9e75753498e22be1f49f2ac6b7e1f1858c2287359ab0b186d3e061fbd1" Jan 22 09:35:12 crc kubenswrapper[4811]: E0122 09:35:12.992782 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:35:26 crc kubenswrapper[4811]: I0122 09:35:26.992151 4811 scope.go:117] "RemoveContainer" containerID="a7045e9e75753498e22be1f49f2ac6b7e1f1858c2287359ab0b186d3e061fbd1" Jan 22 09:35:26 crc kubenswrapper[4811]: E0122 09:35:26.993316 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:35:38 crc kubenswrapper[4811]: I0122 09:35:38.992438 4811 scope.go:117] "RemoveContainer" containerID="a7045e9e75753498e22be1f49f2ac6b7e1f1858c2287359ab0b186d3e061fbd1" Jan 22 09:35:39 crc kubenswrapper[4811]: I0122 09:35:39.393294 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" 
event={"ID":"84068a6b-e189-419b-87f5-f31428f6eafe","Type":"ContainerStarted","Data":"9d25b46cfa8348bdd18bcf253f004ae0938dcf272be4899b8026857ee7fbaf9f"} Jan 22 09:35:55 crc kubenswrapper[4811]: I0122 09:35:55.680663 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fc64z"] Jan 22 09:35:55 crc kubenswrapper[4811]: I0122 09:35:55.700149 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-cb5fm"] Jan 22 09:35:55 crc kubenswrapper[4811]: I0122 09:35:55.706543 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5s52q"] Jan 22 09:35:55 crc kubenswrapper[4811]: I0122 09:35:55.721669 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h6gdw"] Jan 22 09:35:55 crc kubenswrapper[4811]: I0122 09:35:55.734985 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fc64z"] Jan 22 09:35:55 crc kubenswrapper[4811]: I0122 09:35:55.737940 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-bp45n"] Jan 22 09:35:55 crc kubenswrapper[4811]: I0122 09:35:55.751395 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rmqsn"] Jan 22 09:35:55 crc kubenswrapper[4811]: I0122 09:35:55.757329 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-z9qbq"] Jan 22 09:35:55 crc kubenswrapper[4811]: I0122 09:35:55.763079 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kln4c"] Jan 22 09:35:55 crc kubenswrapper[4811]: I0122 09:35:55.767971 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fhg2v"] Jan 22 09:35:55 crc kubenswrapper[4811]: I0122 09:35:55.772735 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5s52q"] Jan 22 09:35:55 crc kubenswrapper[4811]: I0122 09:35:55.776465 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-cb5fm"] Jan 22 09:35:55 crc kubenswrapper[4811]: I0122 09:35:55.781228 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-z9qbq"] Jan 22 09:35:55 crc kubenswrapper[4811]: I0122 09:35:55.785258 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rmqsn"] Jan 22 09:35:55 crc kubenswrapper[4811]: I0122 09:35:55.789314 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h6gdw"] Jan 22 09:35:55 crc kubenswrapper[4811]: I0122 09:35:55.793542 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fhg2v"] Jan 22 09:35:55 crc kubenswrapper[4811]: I0122 09:35:55.797939 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kln4c"] Jan 22 09:35:55 crc kubenswrapper[4811]: I0122 09:35:55.801919 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k8gsd"] Jan 22 
09:35:55 crc kubenswrapper[4811]: I0122 09:35:55.805932 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-bp45n"] Jan 22 09:35:55 crc kubenswrapper[4811]: I0122 09:35:55.809861 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k8gsd"] Jan 22 09:35:56 crc kubenswrapper[4811]: I0122 09:35:56.020105 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11821ee8-72c0-4167-b2fe-f1698439f08c" path="/var/lib/kubelet/pods/11821ee8-72c0-4167-b2fe-f1698439f08c/volumes" Jan 22 09:35:56 crc kubenswrapper[4811]: I0122 09:35:56.020663 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e08656c-3cb7-4ab3-b7db-2a54fe305f6f" path="/var/lib/kubelet/pods/3e08656c-3cb7-4ab3-b7db-2a54fe305f6f/volumes" Jan 22 09:35:56 crc kubenswrapper[4811]: I0122 09:35:56.021144 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ca30846-0331-42bc-af30-c7f211657f21" path="/var/lib/kubelet/pods/4ca30846-0331-42bc-af30-c7f211657f21/volumes" Jan 22 09:35:56 crc kubenswrapper[4811]: I0122 09:35:56.021639 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9afcfc8c-7328-400a-9e14-4b5e8500ffde" path="/var/lib/kubelet/pods/9afcfc8c-7328-400a-9e14-4b5e8500ffde/volumes" Jan 22 09:35:56 crc kubenswrapper[4811]: I0122 09:35:56.023948 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2c2fede-3836-4600-a5d7-a5803623306e" path="/var/lib/kubelet/pods/b2c2fede-3836-4600-a5d7-a5803623306e/volumes" Jan 22 09:35:56 crc kubenswrapper[4811]: I0122 09:35:56.024422 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b652af36-650f-4442-9555-59e25d3414d7" path="/var/lib/kubelet/pods/b652af36-650f-4442-9555-59e25d3414d7/volumes" Jan 22 09:35:56 crc kubenswrapper[4811]: I0122 09:35:56.024923 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c178cc43-e3bb-452e-8ed3-7a4b6a9b9006" path="/var/lib/kubelet/pods/c178cc43-e3bb-452e-8ed3-7a4b6a9b9006/volumes" Jan 22 09:35:56 crc kubenswrapper[4811]: I0122 09:35:56.026413 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5234afb-1665-4465-9df5-e9c30fff6820" path="/var/lib/kubelet/pods/d5234afb-1665-4465-9df5-e9c30fff6820/volumes" Jan 22 09:35:56 crc kubenswrapper[4811]: I0122 09:35:56.026971 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e74305de-0ced-49bc-8795-a211907fbb22" path="/var/lib/kubelet/pods/e74305de-0ced-49bc-8795-a211907fbb22/volumes" Jan 22 09:35:56 crc kubenswrapper[4811]: I0122 09:35:56.027468 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe7f0005-8be7-481d-8487-b076d9139612" path="/var/lib/kubelet/pods/fe7f0005-8be7-481d-8487-b076d9139612/volumes" Jan 22 09:35:59 crc kubenswrapper[4811]: I0122 09:35:59.799540 4811 scope.go:117] "RemoveContainer" containerID="bff3018cd2ef0956f5d8160be8aa7d20d7f7e15ae8c379208567bc1b3bf0ba06" Jan 22 09:35:59 crc kubenswrapper[4811]: I0122 09:35:59.885524 4811 scope.go:117] "RemoveContainer" containerID="07272b41ecad0370dba50d248542820556fbea0683b7f363b6948fae55987a0d" Jan 22 09:35:59 crc kubenswrapper[4811]: I0122 09:35:59.966781 4811 scope.go:117] "RemoveContainer" containerID="66d7c54ecbc6718f0a55b15994900a0c8d44dd2eead9d2f185c581dfdaec9485" Jan 22 09:36:00 crc kubenswrapper[4811]: I0122 09:36:00.074322 4811 scope.go:117] "RemoveContainer" 
containerID="94765f38a5f88220d82b00f33e2f296d153150a8d1f4c3fd98693ec5b266e414" Jan 22 09:36:00 crc kubenswrapper[4811]: I0122 09:36:00.100603 4811 scope.go:117] "RemoveContainer" containerID="391bf78aca927b06006de4778580b1c6f4ec773790117b2021e5d0a365cf6447" Jan 22 09:36:09 crc kubenswrapper[4811]: I0122 09:36:09.881125 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5ddtk"] Jan 22 09:36:09 crc kubenswrapper[4811]: E0122 09:36:09.881784 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33c2ecaa-9e34-4f53-a09b-1567f6817a84" containerName="extract-content" Jan 22 09:36:09 crc kubenswrapper[4811]: I0122 09:36:09.881796 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="33c2ecaa-9e34-4f53-a09b-1567f6817a84" containerName="extract-content" Jan 22 09:36:09 crc kubenswrapper[4811]: E0122 09:36:09.881813 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33c2ecaa-9e34-4f53-a09b-1567f6817a84" containerName="extract-utilities" Jan 22 09:36:09 crc kubenswrapper[4811]: I0122 09:36:09.881818 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="33c2ecaa-9e34-4f53-a09b-1567f6817a84" containerName="extract-utilities" Jan 22 09:36:09 crc kubenswrapper[4811]: E0122 09:36:09.881831 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33c2ecaa-9e34-4f53-a09b-1567f6817a84" containerName="registry-server" Jan 22 09:36:09 crc kubenswrapper[4811]: I0122 09:36:09.881837 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="33c2ecaa-9e34-4f53-a09b-1567f6817a84" containerName="registry-server" Jan 22 09:36:09 crc kubenswrapper[4811]: I0122 09:36:09.881995 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="33c2ecaa-9e34-4f53-a09b-1567f6817a84" containerName="registry-server" Jan 22 09:36:09 crc kubenswrapper[4811]: I0122 09:36:09.882502 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5ddtk" Jan 22 09:36:09 crc kubenswrapper[4811]: I0122 09:36:09.883917 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 22 09:36:09 crc kubenswrapper[4811]: I0122 09:36:09.884177 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 09:36:09 crc kubenswrapper[4811]: I0122 09:36:09.885814 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 09:36:09 crc kubenswrapper[4811]: I0122 09:36:09.887881 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7cbs6" Jan 22 09:36:09 crc kubenswrapper[4811]: I0122 09:36:09.890473 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 09:36:09 crc kubenswrapper[4811]: I0122 09:36:09.893201 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5ddtk"] Jan 22 09:36:10 crc kubenswrapper[4811]: I0122 09:36:10.047359 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/016a5684-671f-4e6a-81dc-15c2a55a6911-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5ddtk\" (UID: \"016a5684-671f-4e6a-81dc-15c2a55a6911\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5ddtk" Jan 22 09:36:10 crc kubenswrapper[4811]: I0122 09:36:10.047508 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/016a5684-671f-4e6a-81dc-15c2a55a6911-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5ddtk\" (UID: \"016a5684-671f-4e6a-81dc-15c2a55a6911\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5ddtk" Jan 22 09:36:10 crc kubenswrapper[4811]: I0122 09:36:10.047705 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/016a5684-671f-4e6a-81dc-15c2a55a6911-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5ddtk\" (UID: \"016a5684-671f-4e6a-81dc-15c2a55a6911\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5ddtk" Jan 22 09:36:10 crc kubenswrapper[4811]: I0122 09:36:10.047758 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/016a5684-671f-4e6a-81dc-15c2a55a6911-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5ddtk\" (UID: \"016a5684-671f-4e6a-81dc-15c2a55a6911\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5ddtk" Jan 22 09:36:10 crc kubenswrapper[4811]: I0122 09:36:10.047825 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g68c5\" (UniqueName: \"kubernetes.io/projected/016a5684-671f-4e6a-81dc-15c2a55a6911-kube-api-access-g68c5\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5ddtk\" (UID: \"016a5684-671f-4e6a-81dc-15c2a55a6911\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5ddtk" Jan 22 09:36:10 crc kubenswrapper[4811]: I0122 09:36:10.149131 4811 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/016a5684-671f-4e6a-81dc-15c2a55a6911-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5ddtk\" (UID: \"016a5684-671f-4e6a-81dc-15c2a55a6911\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5ddtk"
Jan 22 09:36:10 crc kubenswrapper[4811]: I0122 09:36:10.149243 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/016a5684-671f-4e6a-81dc-15c2a55a6911-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5ddtk\" (UID: \"016a5684-671f-4e6a-81dc-15c2a55a6911\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5ddtk"
Jan 22 09:36:10 crc kubenswrapper[4811]: I0122 09:36:10.149278 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/016a5684-671f-4e6a-81dc-15c2a55a6911-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5ddtk\" (UID: \"016a5684-671f-4e6a-81dc-15c2a55a6911\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5ddtk"
Jan 22 09:36:10 crc kubenswrapper[4811]: I0122 09:36:10.149306 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/016a5684-671f-4e6a-81dc-15c2a55a6911-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5ddtk\" (UID: \"016a5684-671f-4e6a-81dc-15c2a55a6911\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5ddtk"
Jan 22 09:36:10 crc kubenswrapper[4811]: I0122 09:36:10.149345 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g68c5\" (UniqueName: \"kubernetes.io/projected/016a5684-671f-4e6a-81dc-15c2a55a6911-kube-api-access-g68c5\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5ddtk\" (UID: \"016a5684-671f-4e6a-81dc-15c2a55a6911\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5ddtk"
Jan 22 09:36:10 crc kubenswrapper[4811]: I0122 09:36:10.157169 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/016a5684-671f-4e6a-81dc-15c2a55a6911-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5ddtk\" (UID: \"016a5684-671f-4e6a-81dc-15c2a55a6911\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5ddtk"
Jan 22 09:36:10 crc kubenswrapper[4811]: I0122 09:36:10.157228 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/016a5684-671f-4e6a-81dc-15c2a55a6911-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5ddtk\" (UID: \"016a5684-671f-4e6a-81dc-15c2a55a6911\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5ddtk"
Jan 22 09:36:10 crc kubenswrapper[4811]: I0122 09:36:10.158138 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/016a5684-671f-4e6a-81dc-15c2a55a6911-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5ddtk\" (UID: \"016a5684-671f-4e6a-81dc-15c2a55a6911\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5ddtk"
Jan 22 09:36:10 crc kubenswrapper[4811]: I0122 09:36:10.158453 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/016a5684-671f-4e6a-81dc-15c2a55a6911-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5ddtk\" (UID: \"016a5684-671f-4e6a-81dc-15c2a55a6911\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5ddtk"
Jan 22 09:36:10 crc kubenswrapper[4811]: I0122 09:36:10.162820 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g68c5\" (UniqueName: \"kubernetes.io/projected/016a5684-671f-4e6a-81dc-15c2a55a6911-kube-api-access-g68c5\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5ddtk\" (UID: \"016a5684-671f-4e6a-81dc-15c2a55a6911\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5ddtk"
Jan 22 09:36:10 crc kubenswrapper[4811]: I0122 09:36:10.197311 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5ddtk"
Jan 22 09:36:10 crc kubenswrapper[4811]: I0122 09:36:10.617889 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5ddtk"]
Jan 22 09:36:10 crc kubenswrapper[4811]: I0122 09:36:10.619852 4811 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 22 09:36:11 crc kubenswrapper[4811]: I0122 09:36:11.581467 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5ddtk" event={"ID":"016a5684-671f-4e6a-81dc-15c2a55a6911","Type":"ContainerStarted","Data":"9a516ba5da523c2724043c17b9a8d6b4dde6868518dc424b097ed632cbaad432"}
Jan 22 09:36:11 crc kubenswrapper[4811]: I0122 09:36:11.581505 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5ddtk" event={"ID":"016a5684-671f-4e6a-81dc-15c2a55a6911","Type":"ContainerStarted","Data":"06c55cbe92741c4ca99047bf659a61b3609b06956bd9e508ae38a1b3ecb995c8"}
Jan 22 09:36:11 crc kubenswrapper[4811]: I0122 09:36:11.602098 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5ddtk" podStartSLOduration=2.052961274 podStartE2EDuration="2.602073353s" podCreationTimestamp="2026-01-22 09:36:09 +0000 UTC" firstStartedPulling="2026-01-22 09:36:10.619637731 +0000 UTC m=+1814.941824854" lastFinishedPulling="2026-01-22 09:36:11.168749811 +0000 UTC m=+1815.490936933" observedRunningTime="2026-01-22 09:36:11.591570878 +0000 UTC m=+1815.913758001" watchObservedRunningTime="2026-01-22 09:36:11.602073353 +0000 UTC m=+1815.924260476"
Jan 22 09:36:21 crc kubenswrapper[4811]: I0122 09:36:21.650373 4811 generic.go:334] "Generic (PLEG): container finished" podID="016a5684-671f-4e6a-81dc-15c2a55a6911" containerID="9a516ba5da523c2724043c17b9a8d6b4dde6868518dc424b097ed632cbaad432" exitCode=0
Jan 22 09:36:21 crc kubenswrapper[4811]: I0122 09:36:21.650795 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5ddtk" event={"ID":"016a5684-671f-4e6a-81dc-15c2a55a6911","Type":"ContainerDied","Data":"9a516ba5da523c2724043c17b9a8d6b4dde6868518dc424b097ed632cbaad432"}
Jan 22 09:36:22 crc kubenswrapper[4811]: I0122 09:36:22.946992 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5ddtk"
Jan 22 09:36:23 crc kubenswrapper[4811]: I0122 09:36:23.056860 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/016a5684-671f-4e6a-81dc-15c2a55a6911-ceph\") pod \"016a5684-671f-4e6a-81dc-15c2a55a6911\" (UID: \"016a5684-671f-4e6a-81dc-15c2a55a6911\") "
Jan 22 09:36:23 crc kubenswrapper[4811]: I0122 09:36:23.056911 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g68c5\" (UniqueName: \"kubernetes.io/projected/016a5684-671f-4e6a-81dc-15c2a55a6911-kube-api-access-g68c5\") pod \"016a5684-671f-4e6a-81dc-15c2a55a6911\" (UID: \"016a5684-671f-4e6a-81dc-15c2a55a6911\") "
Jan 22 09:36:23 crc kubenswrapper[4811]: I0122 09:36:23.057073 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/016a5684-671f-4e6a-81dc-15c2a55a6911-inventory\") pod \"016a5684-671f-4e6a-81dc-15c2a55a6911\" (UID: \"016a5684-671f-4e6a-81dc-15c2a55a6911\") "
Jan 22 09:36:23 crc kubenswrapper[4811]: I0122 09:36:23.057158 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/016a5684-671f-4e6a-81dc-15c2a55a6911-repo-setup-combined-ca-bundle\") pod \"016a5684-671f-4e6a-81dc-15c2a55a6911\" (UID: \"016a5684-671f-4e6a-81dc-15c2a55a6911\") "
Jan 22 09:36:23 crc kubenswrapper[4811]: I0122 09:36:23.057223 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/016a5684-671f-4e6a-81dc-15c2a55a6911-ssh-key-openstack-edpm-ipam\") pod \"016a5684-671f-4e6a-81dc-15c2a55a6911\" (UID: \"016a5684-671f-4e6a-81dc-15c2a55a6911\") "
Jan 22 09:36:23 crc kubenswrapper[4811]: I0122 09:36:23.061411 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/016a5684-671f-4e6a-81dc-15c2a55a6911-kube-api-access-g68c5" (OuterVolumeSpecName: "kube-api-access-g68c5") pod "016a5684-671f-4e6a-81dc-15c2a55a6911" (UID: "016a5684-671f-4e6a-81dc-15c2a55a6911"). InnerVolumeSpecName "kube-api-access-g68c5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:36:23 crc kubenswrapper[4811]: I0122 09:36:23.061604 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/016a5684-671f-4e6a-81dc-15c2a55a6911-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "016a5684-671f-4e6a-81dc-15c2a55a6911" (UID: "016a5684-671f-4e6a-81dc-15c2a55a6911"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:36:23 crc kubenswrapper[4811]: I0122 09:36:23.062747 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/016a5684-671f-4e6a-81dc-15c2a55a6911-ceph" (OuterVolumeSpecName: "ceph") pod "016a5684-671f-4e6a-81dc-15c2a55a6911" (UID: "016a5684-671f-4e6a-81dc-15c2a55a6911"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:36:23 crc kubenswrapper[4811]: I0122 09:36:23.078802 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/016a5684-671f-4e6a-81dc-15c2a55a6911-inventory" (OuterVolumeSpecName: "inventory") pod "016a5684-671f-4e6a-81dc-15c2a55a6911" (UID: "016a5684-671f-4e6a-81dc-15c2a55a6911"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:36:23 crc kubenswrapper[4811]: I0122 09:36:23.079165 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/016a5684-671f-4e6a-81dc-15c2a55a6911-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "016a5684-671f-4e6a-81dc-15c2a55a6911" (UID: "016a5684-671f-4e6a-81dc-15c2a55a6911"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:36:23 crc kubenswrapper[4811]: I0122 09:36:23.159718 4811 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/016a5684-671f-4e6a-81dc-15c2a55a6911-ceph\") on node \"crc\" DevicePath \"\""
Jan 22 09:36:23 crc kubenswrapper[4811]: I0122 09:36:23.159891 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g68c5\" (UniqueName: \"kubernetes.io/projected/016a5684-671f-4e6a-81dc-15c2a55a6911-kube-api-access-g68c5\") on node \"crc\" DevicePath \"\""
Jan 22 09:36:23 crc kubenswrapper[4811]: I0122 09:36:23.159902 4811 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/016a5684-671f-4e6a-81dc-15c2a55a6911-inventory\") on node \"crc\" DevicePath \"\""
Jan 22 09:36:23 crc kubenswrapper[4811]: I0122 09:36:23.159911 4811 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/016a5684-671f-4e6a-81dc-15c2a55a6911-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 09:36:23 crc kubenswrapper[4811]: I0122 09:36:23.159920 4811 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/016a5684-671f-4e6a-81dc-15c2a55a6911-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 22 09:36:23 crc kubenswrapper[4811]: I0122 09:36:23.663417 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5ddtk" event={"ID":"016a5684-671f-4e6a-81dc-15c2a55a6911","Type":"ContainerDied","Data":"06c55cbe92741c4ca99047bf659a61b3609b06956bd9e508ae38a1b3ecb995c8"}
Jan 22 09:36:23 crc kubenswrapper[4811]: I0122 09:36:23.663466 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06c55cbe92741c4ca99047bf659a61b3609b06956bd9e508ae38a1b3ecb995c8"
Jan 22 09:36:23 crc kubenswrapper[4811]: I0122 09:36:23.663471 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5ddtk"
Jan 22 09:36:23 crc kubenswrapper[4811]: I0122 09:36:23.726747 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mffrf"]
Jan 22 09:36:23 crc kubenswrapper[4811]: E0122 09:36:23.727035 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="016a5684-671f-4e6a-81dc-15c2a55a6911" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 22 09:36:23 crc kubenswrapper[4811]: I0122 09:36:23.727052 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="016a5684-671f-4e6a-81dc-15c2a55a6911" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 22 09:36:23 crc kubenswrapper[4811]: I0122 09:36:23.727189 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="016a5684-671f-4e6a-81dc-15c2a55a6911" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 22 09:36:23 crc kubenswrapper[4811]: I0122 09:36:23.727674 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mffrf"
Jan 22 09:36:23 crc kubenswrapper[4811]: I0122 09:36:23.732017 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Jan 22 09:36:23 crc kubenswrapper[4811]: I0122 09:36:23.732320 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 22 09:36:23 crc kubenswrapper[4811]: I0122 09:36:23.732329 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 22 09:36:23 crc kubenswrapper[4811]: I0122 09:36:23.732375 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7cbs6"
Jan 22 09:36:23 crc kubenswrapper[4811]: I0122 09:36:23.732516 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 22 09:36:23 crc kubenswrapper[4811]: I0122 09:36:23.738092 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mffrf"]
Jan 22 09:36:23 crc kubenswrapper[4811]: I0122 09:36:23.769567 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c799f725-4c74-42ab-9217-06e6c0310194-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mffrf\" (UID: \"c799f725-4c74-42ab-9217-06e6c0310194\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mffrf"
Jan 22 09:36:23 crc kubenswrapper[4811]: I0122 09:36:23.769672 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c799f725-4c74-42ab-9217-06e6c0310194-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mffrf\" (UID: \"c799f725-4c74-42ab-9217-06e6c0310194\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mffrf"
Jan 22 09:36:23 crc kubenswrapper[4811]: I0122 09:36:23.769701 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq6vf\" (UniqueName: \"kubernetes.io/projected/c799f725-4c74-42ab-9217-06e6c0310194-kube-api-access-mq6vf\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mffrf\" (UID: \"c799f725-4c74-42ab-9217-06e6c0310194\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mffrf"
Jan 22 09:36:23 crc kubenswrapper[4811]: I0122 09:36:23.769796 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c799f725-4c74-42ab-9217-06e6c0310194-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mffrf\" (UID: \"c799f725-4c74-42ab-9217-06e6c0310194\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mffrf"
Jan 22 09:36:23 crc kubenswrapper[4811]: I0122 09:36:23.769961 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c799f725-4c74-42ab-9217-06e6c0310194-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mffrf\" (UID: \"c799f725-4c74-42ab-9217-06e6c0310194\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mffrf"
Jan 22 09:36:23 crc kubenswrapper[4811]: I0122 09:36:23.871405 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c799f725-4c74-42ab-9217-06e6c0310194-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mffrf\" (UID: \"c799f725-4c74-42ab-9217-06e6c0310194\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mffrf"
Jan 22 09:36:23 crc kubenswrapper[4811]: I0122 09:36:23.871583 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq6vf\" (UniqueName: \"kubernetes.io/projected/c799f725-4c74-42ab-9217-06e6c0310194-kube-api-access-mq6vf\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mffrf\" (UID: \"c799f725-4c74-42ab-9217-06e6c0310194\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mffrf"
Jan 22 09:36:23 crc kubenswrapper[4811]: I0122 09:36:23.872016 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c799f725-4c74-42ab-9217-06e6c0310194-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mffrf\" (UID: \"c799f725-4c74-42ab-9217-06e6c0310194\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mffrf"
Jan 22 09:36:23 crc kubenswrapper[4811]: I0122 09:36:23.872417 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c799f725-4c74-42ab-9217-06e6c0310194-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mffrf\" (UID: \"c799f725-4c74-42ab-9217-06e6c0310194\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mffrf"
Jan 22 09:36:23 crc kubenswrapper[4811]: I0122 09:36:23.872692 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c799f725-4c74-42ab-9217-06e6c0310194-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mffrf\" (UID: \"c799f725-4c74-42ab-9217-06e6c0310194\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mffrf"
Jan 22 09:36:23 crc kubenswrapper[4811]: I0122 09:36:23.875416 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c799f725-4c74-42ab-9217-06e6c0310194-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mffrf\" (UID: \"c799f725-4c74-42ab-9217-06e6c0310194\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mffrf"
Jan 22 09:36:23 crc kubenswrapper[4811]: I0122 09:36:23.875620 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c799f725-4c74-42ab-9217-06e6c0310194-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mffrf\" (UID: \"c799f725-4c74-42ab-9217-06e6c0310194\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mffrf"
Jan 22 09:36:23 crc kubenswrapper[4811]: I0122 09:36:23.875858 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c799f725-4c74-42ab-9217-06e6c0310194-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mffrf\" (UID: \"c799f725-4c74-42ab-9217-06e6c0310194\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mffrf"
Jan 22 09:36:23 crc kubenswrapper[4811]: I0122 09:36:23.876763 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c799f725-4c74-42ab-9217-06e6c0310194-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mffrf\" (UID: \"c799f725-4c74-42ab-9217-06e6c0310194\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mffrf"
Jan 22 09:36:23 crc kubenswrapper[4811]: I0122 09:36:23.884442 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq6vf\" (UniqueName: \"kubernetes.io/projected/c799f725-4c74-42ab-9217-06e6c0310194-kube-api-access-mq6vf\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mffrf\" (UID: \"c799f725-4c74-42ab-9217-06e6c0310194\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mffrf"
Jan 22 09:36:24 crc kubenswrapper[4811]: I0122 09:36:24.049057 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mffrf"
Jan 22 09:36:24 crc kubenswrapper[4811]: I0122 09:36:24.480665 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mffrf"]
Jan 22 09:36:24 crc kubenswrapper[4811]: I0122 09:36:24.669743 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mffrf" event={"ID":"c799f725-4c74-42ab-9217-06e6c0310194","Type":"ContainerStarted","Data":"c10cacbcc2c13f5411c46447086e45016b5899ac9d0a0cbc9901155f8d064065"}
Jan 22 09:36:25 crc kubenswrapper[4811]: I0122 09:36:25.677219 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mffrf" event={"ID":"c799f725-4c74-42ab-9217-06e6c0310194","Type":"ContainerStarted","Data":"97a0be33ecbc0b767082828c86b60963dc911cf62cdc748817410c0c2333b7dc"}
Jan 22 09:36:25 crc kubenswrapper[4811]: I0122 09:36:25.691541 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mffrf" podStartSLOduration=1.956661244 podStartE2EDuration="2.691523817s" podCreationTimestamp="2026-01-22 09:36:23 +0000 UTC" firstStartedPulling="2026-01-22 09:36:24.488518919 +0000 UTC m=+1828.810706043" lastFinishedPulling="2026-01-22 09:36:25.223381493 +0000 UTC m=+1829.545568616" observedRunningTime="2026-01-22 09:36:25.689151813 +0000 UTC m=+1830.011338937" watchObservedRunningTime="2026-01-22 09:36:25.691523817 +0000 UTC m=+1830.013710940"
Jan 22 09:37:00 crc kubenswrapper[4811]: I0122 09:37:00.201055 4811 scope.go:117] "RemoveContainer" containerID="60774f7af279d21f593fcd206a039685e7f60cb6d9b6416fcdb026a066c65e12"
Jan 22 09:37:00 crc kubenswrapper[4811]: I0122 09:37:00.233310 4811 scope.go:117] "RemoveContainer" containerID="bcae4ad0a2861a2ac5e02d6663d89635e8253d859dc6c15a918e9561e801389c"
Jan 22 09:37:00 crc kubenswrapper[4811]: I0122 09:37:00.256549 4811 scope.go:117] "RemoveContainer" containerID="4dba93963e4469ccb86bebe8aedc506c0151a4344968f24119a271b9240ddafe"
Jan 22 09:38:00 crc kubenswrapper[4811]: I0122 09:38:00.340568 4811 scope.go:117] "RemoveContainer" containerID="21ce652018d671fb882fb3de6ad53f96a278beac2dcab2764c914cf18674df22"
Jan 22 09:38:00 crc kubenswrapper[4811]: I0122 09:38:00.366280 4811 scope.go:117] "RemoveContainer" containerID="16cfff4cb2a2d295c80d51700457b79fbb4e57e5c6f40283de21fea4fcdfbd98"
Jan 22 09:38:01 crc kubenswrapper[4811]: I0122 09:38:01.232735 4811 generic.go:334] "Generic (PLEG): container finished" podID="c799f725-4c74-42ab-9217-06e6c0310194" containerID="97a0be33ecbc0b767082828c86b60963dc911cf62cdc748817410c0c2333b7dc" exitCode=0
Jan 22 09:38:01 crc kubenswrapper[4811]: I0122 09:38:01.232817 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mffrf" event={"ID":"c799f725-4c74-42ab-9217-06e6c0310194","Type":"ContainerDied","Data":"97a0be33ecbc0b767082828c86b60963dc911cf62cdc748817410c0c2333b7dc"}
Jan 22 09:38:02 crc kubenswrapper[4811]: I0122 09:38:02.526955 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mffrf"
Jan 22 09:38:02 crc kubenswrapper[4811]: I0122 09:38:02.682721 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mq6vf\" (UniqueName: \"kubernetes.io/projected/c799f725-4c74-42ab-9217-06e6c0310194-kube-api-access-mq6vf\") pod \"c799f725-4c74-42ab-9217-06e6c0310194\" (UID: \"c799f725-4c74-42ab-9217-06e6c0310194\") "
Jan 22 09:38:02 crc kubenswrapper[4811]: I0122 09:38:02.682776 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c799f725-4c74-42ab-9217-06e6c0310194-bootstrap-combined-ca-bundle\") pod \"c799f725-4c74-42ab-9217-06e6c0310194\" (UID: \"c799f725-4c74-42ab-9217-06e6c0310194\") "
Jan 22 09:38:02 crc kubenswrapper[4811]: I0122 09:38:02.682813 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c799f725-4c74-42ab-9217-06e6c0310194-ssh-key-openstack-edpm-ipam\") pod \"c799f725-4c74-42ab-9217-06e6c0310194\" (UID: \"c799f725-4c74-42ab-9217-06e6c0310194\") "
Jan 22 09:38:02 crc kubenswrapper[4811]: I0122 09:38:02.682837 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c799f725-4c74-42ab-9217-06e6c0310194-ceph\") pod \"c799f725-4c74-42ab-9217-06e6c0310194\" (UID: \"c799f725-4c74-42ab-9217-06e6c0310194\") "
Jan 22 09:38:02 crc kubenswrapper[4811]: I0122 09:38:02.682883 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c799f725-4c74-42ab-9217-06e6c0310194-inventory\") pod \"c799f725-4c74-42ab-9217-06e6c0310194\" (UID: \"c799f725-4c74-42ab-9217-06e6c0310194\") "
Jan 22 09:38:02 crc kubenswrapper[4811]: I0122 09:38:02.688773 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c799f725-4c74-42ab-9217-06e6c0310194-ceph" (OuterVolumeSpecName: "ceph") pod "c799f725-4c74-42ab-9217-06e6c0310194" (UID: "c799f725-4c74-42ab-9217-06e6c0310194"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:38:02 crc kubenswrapper[4811]: I0122 09:38:02.688864 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c799f725-4c74-42ab-9217-06e6c0310194-kube-api-access-mq6vf" (OuterVolumeSpecName: "kube-api-access-mq6vf") pod "c799f725-4c74-42ab-9217-06e6c0310194" (UID: "c799f725-4c74-42ab-9217-06e6c0310194"). InnerVolumeSpecName "kube-api-access-mq6vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:38:02 crc kubenswrapper[4811]: I0122 09:38:02.688791 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c799f725-4c74-42ab-9217-06e6c0310194-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "c799f725-4c74-42ab-9217-06e6c0310194" (UID: "c799f725-4c74-42ab-9217-06e6c0310194"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:38:02 crc kubenswrapper[4811]: I0122 09:38:02.702480 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c799f725-4c74-42ab-9217-06e6c0310194-inventory" (OuterVolumeSpecName: "inventory") pod "c799f725-4c74-42ab-9217-06e6c0310194" (UID: "c799f725-4c74-42ab-9217-06e6c0310194"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:38:02 crc kubenswrapper[4811]: I0122 09:38:02.703988 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c799f725-4c74-42ab-9217-06e6c0310194-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c799f725-4c74-42ab-9217-06e6c0310194" (UID: "c799f725-4c74-42ab-9217-06e6c0310194"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:38:02 crc kubenswrapper[4811]: I0122 09:38:02.784524 4811 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c799f725-4c74-42ab-9217-06e6c0310194-inventory\") on node \"crc\" DevicePath \"\""
Jan 22 09:38:02 crc kubenswrapper[4811]: I0122 09:38:02.784549 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mq6vf\" (UniqueName: \"kubernetes.io/projected/c799f725-4c74-42ab-9217-06e6c0310194-kube-api-access-mq6vf\") on node \"crc\" DevicePath \"\""
Jan 22 09:38:02 crc kubenswrapper[4811]: I0122 09:38:02.784560 4811 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c799f725-4c74-42ab-9217-06e6c0310194-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 09:38:02 crc kubenswrapper[4811]: I0122 09:38:02.784569 4811 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c799f725-4c74-42ab-9217-06e6c0310194-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 22 09:38:02 crc kubenswrapper[4811]: I0122 09:38:02.784578 4811 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c799f725-4c74-42ab-9217-06e6c0310194-ceph\") on node \"crc\" DevicePath \"\""
Jan 22 09:38:03 crc kubenswrapper[4811]: I0122 09:38:03.244480 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mffrf" event={"ID":"c799f725-4c74-42ab-9217-06e6c0310194","Type":"ContainerDied","Data":"c10cacbcc2c13f5411c46447086e45016b5899ac9d0a0cbc9901155f8d064065"}
Jan 22 09:38:03 crc kubenswrapper[4811]: I0122 09:38:03.244511 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c10cacbcc2c13f5411c46447086e45016b5899ac9d0a0cbc9901155f8d064065"
Jan 22 09:38:03 crc kubenswrapper[4811]: I0122 09:38:03.244534 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mffrf"
Jan 22 09:38:03 crc kubenswrapper[4811]: I0122 09:38:03.313494 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jwn7"]
Jan 22 09:38:03 crc kubenswrapper[4811]: E0122 09:38:03.313808 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c799f725-4c74-42ab-9217-06e6c0310194" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 22 09:38:03 crc kubenswrapper[4811]: I0122 09:38:03.313825 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="c799f725-4c74-42ab-9217-06e6c0310194" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 22 09:38:03 crc kubenswrapper[4811]: I0122 09:38:03.313969 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="c799f725-4c74-42ab-9217-06e6c0310194" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 22 09:38:03 crc kubenswrapper[4811]: I0122 09:38:03.314452 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jwn7"
Jan 22 09:38:03 crc kubenswrapper[4811]: I0122 09:38:03.315933 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 22 09:38:03 crc kubenswrapper[4811]: I0122 09:38:03.316135 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 22 09:38:03 crc kubenswrapper[4811]: I0122 09:38:03.316292 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 22 09:38:03 crc kubenswrapper[4811]: I0122 09:38:03.316435 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Jan 22 09:38:03 crc kubenswrapper[4811]: I0122 09:38:03.316544 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7cbs6"
Jan 22 09:38:03 crc kubenswrapper[4811]: I0122 09:38:03.324993 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jwn7"]
Jan 22 09:38:03 crc kubenswrapper[4811]: I0122 09:38:03.393203 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6251494a-e332-4222-b95c-80c7205dc4ce-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8jwn7\" (UID: \"6251494a-e332-4222-b95c-80c7205dc4ce\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jwn7"
Jan 22 09:38:03 crc kubenswrapper[4811]: I0122 09:38:03.393330 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6251494a-e332-4222-b95c-80c7205dc4ce-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8jwn7\" (UID: \"6251494a-e332-4222-b95c-80c7205dc4ce\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jwn7"
Jan 22 09:38:03 crc kubenswrapper[4811]: I0122 09:38:03.393368 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t4q9\" (UniqueName: \"kubernetes.io/projected/6251494a-e332-4222-b95c-80c7205dc4ce-kube-api-access-9t4q9\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8jwn7\" (UID: \"6251494a-e332-4222-b95c-80c7205dc4ce\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jwn7"
Jan 22 09:38:03 crc kubenswrapper[4811]: I0122 09:38:03.393404 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6251494a-e332-4222-b95c-80c7205dc4ce-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8jwn7\" (UID: \"6251494a-e332-4222-b95c-80c7205dc4ce\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jwn7"
Jan 22 09:38:03 crc kubenswrapper[4811]: I0122 09:38:03.494674 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6251494a-e332-4222-b95c-80c7205dc4ce-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8jwn7\" (UID: \"6251494a-e332-4222-b95c-80c7205dc4ce\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jwn7"
Jan 22 09:38:03 crc kubenswrapper[4811]: I0122 09:38:03.494757 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6251494a-e332-4222-b95c-80c7205dc4ce-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8jwn7\" (UID: \"6251494a-e332-4222-b95c-80c7205dc4ce\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jwn7"
Jan 22 09:38:03 crc kubenswrapper[4811]: I0122 09:38:03.494786 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t4q9\" (UniqueName: \"kubernetes.io/projected/6251494a-e332-4222-b95c-80c7205dc4ce-kube-api-access-9t4q9\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8jwn7\" (UID: \"6251494a-e332-4222-b95c-80c7205dc4ce\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jwn7"
Jan 22 09:38:03 crc kubenswrapper[4811]: I0122 09:38:03.494815 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6251494a-e332-4222-b95c-80c7205dc4ce-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8jwn7\" (UID: \"6251494a-e332-4222-b95c-80c7205dc4ce\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jwn7"
Jan 22 09:38:03 crc kubenswrapper[4811]: I0122 09:38:03.499533 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6251494a-e332-4222-b95c-80c7205dc4ce-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8jwn7\" (UID: \"6251494a-e332-4222-b95c-80c7205dc4ce\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jwn7"
Jan 22 09:38:03 crc kubenswrapper[4811]: I0122 09:38:03.501461 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6251494a-e332-4222-b95c-80c7205dc4ce-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8jwn7\" (UID: \"6251494a-e332-4222-b95c-80c7205dc4ce\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jwn7"
Jan 22 09:38:03 crc kubenswrapper[4811]: I0122 09:38:03.501523 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6251494a-e332-4222-b95c-80c7205dc4ce-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8jwn7\" (UID: \"6251494a-e332-4222-b95c-80c7205dc4ce\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jwn7"
Jan 22 09:38:03 crc kubenswrapper[4811]: I0122 09:38:03.515360 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t4q9\" (UniqueName: \"kubernetes.io/projected/6251494a-e332-4222-b95c-80c7205dc4ce-kube-api-access-9t4q9\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8jwn7\" (UID: \"6251494a-e332-4222-b95c-80c7205dc4ce\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jwn7"
Jan 22 09:38:03 crc kubenswrapper[4811]: I0122 09:38:03.626806 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jwn7"
Jan 22 09:38:04 crc kubenswrapper[4811]: I0122 09:38:04.049162 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jwn7"]
Jan 22 09:38:04 crc kubenswrapper[4811]: W0122 09:38:04.057910 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6251494a_e332_4222_b95c_80c7205dc4ce.slice/crio-ec7b60da6b1b4cc63af99967fbe72adff3bbff50f231a86b8511356a963affe5 WatchSource:0}: Error finding container ec7b60da6b1b4cc63af99967fbe72adff3bbff50f231a86b8511356a963affe5: Status 404 returned error can't find the container with id ec7b60da6b1b4cc63af99967fbe72adff3bbff50f231a86b8511356a963affe5
Jan 22 09:38:04 crc kubenswrapper[4811]: I0122 09:38:04.250493 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jwn7" event={"ID":"6251494a-e332-4222-b95c-80c7205dc4ce","Type":"ContainerStarted","Data":"ec7b60da6b1b4cc63af99967fbe72adff3bbff50f231a86b8511356a963affe5"}
Jan 22 09:38:05 crc kubenswrapper[4811]: I0122 09:38:05.256860 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jwn7" event={"ID":"6251494a-e332-4222-b95c-80c7205dc4ce","Type":"ContainerStarted","Data":"ce98c575dab88bc373e2f4a74293aa3ed8f46df4cf06c1bc1bc630fe8f09f474"}
Jan 22 09:38:05 crc kubenswrapper[4811]: I0122 09:38:05.272075 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jwn7" podStartSLOduration=1.747522917 podStartE2EDuration="2.272061709s" podCreationTimestamp="2026-01-22 09:38:03 +0000 UTC" firstStartedPulling="2026-01-22 09:38:04.059269573 +0000 UTC m=+1928.381456697" lastFinishedPulling="2026-01-22 09:38:04.583808366 +0000 UTC m=+1928.905995489" observedRunningTime="2026-01-22 09:38:05.266466697 +0000 UTC m=+1929.588653821" watchObservedRunningTime="2026-01-22 09:38:05.272061709 +0000 UTC m=+1929.594248833"
Jan 22 09:38:05 crc kubenswrapper[4811]: I0122 09:38:05.500943 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 09:38:05 crc kubenswrapper[4811]: I0122 09:38:05.501149 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 09:38:25 crc kubenswrapper[4811]: I0122 09:38:25.380176 4811 generic.go:334] "Generic (PLEG): container finished" podID="6251494a-e332-4222-b95c-80c7205dc4ce" containerID="ce98c575dab88bc373e2f4a74293aa3ed8f46df4cf06c1bc1bc630fe8f09f474" exitCode=0
Jan 22 09:38:25 crc kubenswrapper[4811]: I0122 09:38:25.380549 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jwn7" event={"ID":"6251494a-e332-4222-b95c-80c7205dc4ce","Type":"ContainerDied","Data":"ce98c575dab88bc373e2f4a74293aa3ed8f46df4cf06c1bc1bc630fe8f09f474"}
Jan 22 09:38:26 crc kubenswrapper[4811]: I0122 09:38:26.713061 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jwn7"
Jan 22 09:38:26 crc kubenswrapper[4811]: I0122 09:38:26.721241 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6251494a-e332-4222-b95c-80c7205dc4ce-ceph\") pod \"6251494a-e332-4222-b95c-80c7205dc4ce\" (UID: \"6251494a-e332-4222-b95c-80c7205dc4ce\") "
Jan 22 09:38:26 crc kubenswrapper[4811]: I0122 09:38:26.721300 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6251494a-e332-4222-b95c-80c7205dc4ce-ssh-key-openstack-edpm-ipam\") pod \"6251494a-e332-4222-b95c-80c7205dc4ce\" (UID: \"6251494a-e332-4222-b95c-80c7205dc4ce\") "
Jan 22 09:38:26 crc kubenswrapper[4811]: I0122 09:38:26.721370 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6251494a-e332-4222-b95c-80c7205dc4ce-inventory\") pod \"6251494a-e332-4222-b95c-80c7205dc4ce\" (UID: \"6251494a-e332-4222-b95c-80c7205dc4ce\") "
Jan 22 09:38:26 crc kubenswrapper[4811]: I0122 09:38:26.721466 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9t4q9\" (UniqueName: \"kubernetes.io/projected/6251494a-e332-4222-b95c-80c7205dc4ce-kube-api-access-9t4q9\") pod \"6251494a-e332-4222-b95c-80c7205dc4ce\" (UID: \"6251494a-e332-4222-b95c-80c7205dc4ce\") "
Jan 22 09:38:26 crc kubenswrapper[4811]: I0122 09:38:26.725589 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6251494a-e332-4222-b95c-80c7205dc4ce-ceph" (OuterVolumeSpecName: "ceph") pod "6251494a-e332-4222-b95c-80c7205dc4ce" (UID: "6251494a-e332-4222-b95c-80c7205dc4ce"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:38:26 crc kubenswrapper[4811]: I0122 09:38:26.735247 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6251494a-e332-4222-b95c-80c7205dc4ce-kube-api-access-9t4q9" (OuterVolumeSpecName: "kube-api-access-9t4q9") pod "6251494a-e332-4222-b95c-80c7205dc4ce" (UID: "6251494a-e332-4222-b95c-80c7205dc4ce"). InnerVolumeSpecName "kube-api-access-9t4q9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:38:26 crc kubenswrapper[4811]: I0122 09:38:26.749666 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6251494a-e332-4222-b95c-80c7205dc4ce-inventory" (OuterVolumeSpecName: "inventory") pod "6251494a-e332-4222-b95c-80c7205dc4ce" (UID: "6251494a-e332-4222-b95c-80c7205dc4ce"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:38:26 crc kubenswrapper[4811]: I0122 09:38:26.755834 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6251494a-e332-4222-b95c-80c7205dc4ce-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6251494a-e332-4222-b95c-80c7205dc4ce" (UID: "6251494a-e332-4222-b95c-80c7205dc4ce"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:38:26 crc kubenswrapper[4811]: I0122 09:38:26.823065 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9t4q9\" (UniqueName: \"kubernetes.io/projected/6251494a-e332-4222-b95c-80c7205dc4ce-kube-api-access-9t4q9\") on node \"crc\" DevicePath \"\""
Jan 22 09:38:26 crc kubenswrapper[4811]: I0122 09:38:26.823091 4811 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6251494a-e332-4222-b95c-80c7205dc4ce-ceph\") on node \"crc\" DevicePath \"\""
Jan 22 09:38:26 crc kubenswrapper[4811]: I0122 09:38:26.823117 4811 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6251494a-e332-4222-b95c-80c7205dc4ce-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 22 09:38:26 crc kubenswrapper[4811]: I0122 09:38:26.823125 4811 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6251494a-e332-4222-b95c-80c7205dc4ce-inventory\") on node \"crc\" DevicePath \"\""
Jan 22 09:38:27 crc kubenswrapper[4811]: I0122 09:38:27.392362 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jwn7" event={"ID":"6251494a-e332-4222-b95c-80c7205dc4ce","Type":"ContainerDied","Data":"ec7b60da6b1b4cc63af99967fbe72adff3bbff50f231a86b8511356a963affe5"}
Jan 22 09:38:27 crc kubenswrapper[4811]: I0122 09:38:27.392574 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec7b60da6b1b4cc63af99967fbe72adff3bbff50f231a86b8511356a963affe5"
Jan 22 09:38:27 crc kubenswrapper[4811]: I0122 09:38:27.392405 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jwn7"
Jan 22 09:38:27 crc kubenswrapper[4811]: I0122 09:38:27.450062 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f92wm"]
Jan 22 09:38:27 crc kubenswrapper[4811]: E0122 09:38:27.450359 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6251494a-e332-4222-b95c-80c7205dc4ce" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 22 09:38:27 crc kubenswrapper[4811]: I0122 09:38:27.450378 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="6251494a-e332-4222-b95c-80c7205dc4ce" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 22 09:38:27 crc kubenswrapper[4811]: I0122 09:38:27.450555 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="6251494a-e332-4222-b95c-80c7205dc4ce" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 22 09:38:27 crc kubenswrapper[4811]: I0122 09:38:27.451061 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f92wm"
Jan 22 09:38:27 crc kubenswrapper[4811]: I0122 09:38:27.452398 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Jan 22 09:38:27 crc kubenswrapper[4811]: I0122 09:38:27.453022 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 22 09:38:27 crc kubenswrapper[4811]: I0122 09:38:27.453450 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 22 09:38:27 crc kubenswrapper[4811]: I0122 09:38:27.454739 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7cbs6"
Jan 22 09:38:27 crc kubenswrapper[4811]: I0122 09:38:27.457303 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 22 09:38:27 crc kubenswrapper[4811]: I0122 09:38:27.458184 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f92wm"]
Jan 22 09:38:27 crc kubenswrapper[4811]: I0122 09:38:27.533542 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ca9b0d63-2524-406e-bd65-36224327f50f-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f92wm\" (UID: \"ca9b0d63-2524-406e-bd65-36224327f50f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f92wm"
Jan 22 09:38:27 crc kubenswrapper[4811]: I0122 09:38:27.533764 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv97p\" (UniqueName: \"kubernetes.io/projected/ca9b0d63-2524-406e-bd65-36224327f50f-kube-api-access-xv97p\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f92wm\" (UID: \"ca9b0d63-2524-406e-bd65-36224327f50f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f92wm"
Jan 22 09:38:27 crc kubenswrapper[4811]: I0122 09:38:27.533864 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ca9b0d63-2524-406e-bd65-36224327f50f-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f92wm\" (UID: \"ca9b0d63-2524-406e-bd65-36224327f50f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f92wm"
Jan 22 09:38:27 crc kubenswrapper[4811]: I0122 09:38:27.533941 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca9b0d63-2524-406e-bd65-36224327f50f-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f92wm\" (UID: \"ca9b0d63-2524-406e-bd65-36224327f50f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f92wm"
Jan 22 09:38:27 crc kubenswrapper[4811]: I0122 09:38:27.635052 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ca9b0d63-2524-406e-bd65-36224327f50f-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f92wm\" (UID: \"ca9b0d63-2524-406e-bd65-36224327f50f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f92wm"
Jan 22 09:38:27 crc kubenswrapper[4811]: I0122 09:38:27.635112 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv97p\" (UniqueName: \"kubernetes.io/projected/ca9b0d63-2524-406e-bd65-36224327f50f-kube-api-access-xv97p\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f92wm\" (UID: \"ca9b0d63-2524-406e-bd65-36224327f50f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f92wm"
Jan 22 09:38:27 crc kubenswrapper[4811]: I0122 09:38:27.635163 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ca9b0d63-2524-406e-bd65-36224327f50f-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f92wm\" (UID: \"ca9b0d63-2524-406e-bd65-36224327f50f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f92wm"
Jan 22 09:38:27 crc kubenswrapper[4811]: I0122 09:38:27.635190 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca9b0d63-2524-406e-bd65-36224327f50f-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f92wm\" (UID: \"ca9b0d63-2524-406e-bd65-36224327f50f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f92wm"
Jan 22 09:38:27 crc kubenswrapper[4811]: I0122 09:38:27.639831 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca9b0d63-2524-406e-bd65-36224327f50f-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f92wm\" (UID: \"ca9b0d63-2524-406e-bd65-36224327f50f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f92wm"
Jan 22 09:38:27 crc kubenswrapper[4811]: I0122 09:38:27.640255 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ca9b0d63-2524-406e-bd65-36224327f50f-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f92wm\" (UID: \"ca9b0d63-2524-406e-bd65-36224327f50f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f92wm"
Jan 22 09:38:27 crc kubenswrapper[4811]: I0122 09:38:27.640569 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ca9b0d63-2524-406e-bd65-36224327f50f-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f92wm\" (UID: \"ca9b0d63-2524-406e-bd65-36224327f50f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f92wm"
Jan 22 09:38:27 crc kubenswrapper[4811]: I0122 09:38:27.648110 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv97p\" (UniqueName: \"kubernetes.io/projected/ca9b0d63-2524-406e-bd65-36224327f50f-kube-api-access-xv97p\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f92wm\" (UID: \"ca9b0d63-2524-406e-bd65-36224327f50f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f92wm"
Jan 22 09:38:27 crc kubenswrapper[4811]: I0122 09:38:27.763906 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f92wm"
Jan 22 09:38:28 crc kubenswrapper[4811]: I0122 09:38:28.173309 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f92wm"]
Jan 22 09:38:28 crc kubenswrapper[4811]: I0122 09:38:28.398158 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f92wm" event={"ID":"ca9b0d63-2524-406e-bd65-36224327f50f","Type":"ContainerStarted","Data":"9205e1db91a05f0fb08a7c0ee25fd42c779bb6010b5fefe449d1a96b5ca8a033"}
Jan 22 09:38:29 crc kubenswrapper[4811]: I0122 09:38:29.411797 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f92wm" event={"ID":"ca9b0d63-2524-406e-bd65-36224327f50f","Type":"ContainerStarted","Data":"63db4b66843879ad8c3ed8ef9ac75d50d784d95c04235c99bd773817238b26ae"}
Jan 22 09:38:29 crc kubenswrapper[4811]: I0122 09:38:29.436673 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f92wm" podStartSLOduration=1.853037027 podStartE2EDuration="2.436647957s" podCreationTimestamp="2026-01-22 09:38:27 +0000 UTC" firstStartedPulling="2026-01-22 09:38:28.178990037 +0000 UTC m=+1952.501177161" lastFinishedPulling="2026-01-22 09:38:28.762600958 +0000 UTC m=+1953.084788091" observedRunningTime="2026-01-22 09:38:29.427696165 +0000 UTC m=+1953.749883287" watchObservedRunningTime="2026-01-22 09:38:29.436647957 +0000 UTC m=+1953.758835081"
Jan 22 09:38:33 crc kubenswrapper[4811]: I0122 09:38:33.436241 4811 generic.go:334] "Generic (PLEG): container finished" podID="ca9b0d63-2524-406e-bd65-36224327f50f" containerID="63db4b66843879ad8c3ed8ef9ac75d50d784d95c04235c99bd773817238b26ae" exitCode=0
Jan 22 09:38:33 crc kubenswrapper[4811]: I0122 09:38:33.436338 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f92wm" event={"ID":"ca9b0d63-2524-406e-bd65-36224327f50f","Type":"ContainerDied","Data":"63db4b66843879ad8c3ed8ef9ac75d50d784d95c04235c99bd773817238b26ae"}
Jan 22 09:38:34 crc kubenswrapper[4811]: I0122 09:38:34.764056 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f92wm"
Jan 22 09:38:34 crc kubenswrapper[4811]: I0122 09:38:34.942652 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ca9b0d63-2524-406e-bd65-36224327f50f-ssh-key-openstack-edpm-ipam\") pod \"ca9b0d63-2524-406e-bd65-36224327f50f\" (UID: \"ca9b0d63-2524-406e-bd65-36224327f50f\") "
Jan 22 09:38:34 crc kubenswrapper[4811]: I0122 09:38:34.942794 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ca9b0d63-2524-406e-bd65-36224327f50f-ceph\") pod \"ca9b0d63-2524-406e-bd65-36224327f50f\" (UID: \"ca9b0d63-2524-406e-bd65-36224327f50f\") "
Jan 22 09:38:34 crc kubenswrapper[4811]: I0122 09:38:34.942831 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xv97p\" (UniqueName: \"kubernetes.io/projected/ca9b0d63-2524-406e-bd65-36224327f50f-kube-api-access-xv97p\") pod \"ca9b0d63-2524-406e-bd65-36224327f50f\" (UID: \"ca9b0d63-2524-406e-bd65-36224327f50f\") "
Jan 22 09:38:34 crc kubenswrapper[4811]: I0122 09:38:34.942854 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca9b0d63-2524-406e-bd65-36224327f50f-inventory\") pod \"ca9b0d63-2524-406e-bd65-36224327f50f\" (UID: \"ca9b0d63-2524-406e-bd65-36224327f50f\") "
Jan 22 09:38:34 crc kubenswrapper[4811]: I0122 09:38:34.949452 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca9b0d63-2524-406e-bd65-36224327f50f-kube-api-access-xv97p" (OuterVolumeSpecName: "kube-api-access-xv97p") pod "ca9b0d63-2524-406e-bd65-36224327f50f" (UID: "ca9b0d63-2524-406e-bd65-36224327f50f"). InnerVolumeSpecName "kube-api-access-xv97p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:38:34 crc kubenswrapper[4811]: I0122 09:38:34.949908 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca9b0d63-2524-406e-bd65-36224327f50f-ceph" (OuterVolumeSpecName: "ceph") pod "ca9b0d63-2524-406e-bd65-36224327f50f" (UID: "ca9b0d63-2524-406e-bd65-36224327f50f"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:38:34 crc kubenswrapper[4811]: I0122 09:38:34.961656 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca9b0d63-2524-406e-bd65-36224327f50f-inventory" (OuterVolumeSpecName: "inventory") pod "ca9b0d63-2524-406e-bd65-36224327f50f" (UID: "ca9b0d63-2524-406e-bd65-36224327f50f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:38:34 crc kubenswrapper[4811]: I0122 09:38:34.962600 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca9b0d63-2524-406e-bd65-36224327f50f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ca9b0d63-2524-406e-bd65-36224327f50f" (UID: "ca9b0d63-2524-406e-bd65-36224327f50f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:38:35 crc kubenswrapper[4811]: I0122 09:38:35.044545 4811 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ca9b0d63-2524-406e-bd65-36224327f50f-ceph\") on node \"crc\" DevicePath \"\""
Jan 22 09:38:35 crc kubenswrapper[4811]: I0122 09:38:35.044685 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xv97p\" (UniqueName: \"kubernetes.io/projected/ca9b0d63-2524-406e-bd65-36224327f50f-kube-api-access-xv97p\") on node \"crc\" DevicePath \"\""
Jan 22 09:38:35 crc kubenswrapper[4811]: I0122 09:38:35.044749 4811 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca9b0d63-2524-406e-bd65-36224327f50f-inventory\") on node \"crc\" DevicePath \"\""
Jan 22 09:38:35 crc kubenswrapper[4811]: I0122 09:38:35.044801 4811 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ca9b0d63-2524-406e-bd65-36224327f50f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 22 09:38:35 crc kubenswrapper[4811]: I0122 09:38:35.449501 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f92wm" event={"ID":"ca9b0d63-2524-406e-bd65-36224327f50f","Type":"ContainerDied","Data":"9205e1db91a05f0fb08a7c0ee25fd42c779bb6010b5fefe449d1a96b5ca8a033"}
Jan 22 09:38:35 crc kubenswrapper[4811]: I0122 09:38:35.449561 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9205e1db91a05f0fb08a7c0ee25fd42c779bb6010b5fefe449d1a96b5ca8a033"
Jan 22 09:38:35 crc kubenswrapper[4811]: I0122 09:38:35.449583 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f92wm"
Jan 22 09:38:35 crc kubenswrapper[4811]: I0122 09:38:35.501987 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 09:38:35 crc kubenswrapper[4811]: I0122 09:38:35.502054 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 09:38:35 crc kubenswrapper[4811]: I0122 09:38:35.505377 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-kctxk"]
Jan 22 09:38:35 crc kubenswrapper[4811]: E0122 09:38:35.505730 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca9b0d63-2524-406e-bd65-36224327f50f" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 22 09:38:35 crc kubenswrapper[4811]: I0122 09:38:35.505748 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca9b0d63-2524-406e-bd65-36224327f50f" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 22 09:38:35 crc kubenswrapper[4811]: I0122 09:38:35.505915 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca9b0d63-2524-406e-bd65-36224327f50f" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 22 09:38:35 crc kubenswrapper[4811]: I0122 09:38:35.506390 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kctxk" Jan 22 09:38:35 crc kubenswrapper[4811]: I0122 09:38:35.510129 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 09:38:35 crc kubenswrapper[4811]: I0122 09:38:35.510294 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 22 09:38:35 crc kubenswrapper[4811]: I0122 09:38:35.510615 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7cbs6" Jan 22 09:38:35 crc kubenswrapper[4811]: I0122 09:38:35.510774 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 09:38:35 crc kubenswrapper[4811]: I0122 09:38:35.511504 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 09:38:35 crc kubenswrapper[4811]: I0122 09:38:35.513984 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-kctxk"] Jan 22 09:38:35 crc kubenswrapper[4811]: I0122 09:38:35.551484 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1d8c1630-ca31-4da8-a66d-54d6649558d4-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kctxk\" (UID: \"1d8c1630-ca31-4da8-a66d-54d6649558d4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kctxk" Jan 22 09:38:35 crc kubenswrapper[4811]: I0122 09:38:35.551686 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9m2h\" (UniqueName: \"kubernetes.io/projected/1d8c1630-ca31-4da8-a66d-54d6649558d4-kube-api-access-n9m2h\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kctxk\" (UID: \"1d8c1630-ca31-4da8-a66d-54d6649558d4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kctxk" Jan 22 09:38:35 crc kubenswrapper[4811]: I0122 09:38:35.551763 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1d8c1630-ca31-4da8-a66d-54d6649558d4-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kctxk\" (UID: \"1d8c1630-ca31-4da8-a66d-54d6649558d4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kctxk" Jan 22 09:38:35 crc kubenswrapper[4811]: I0122 09:38:35.551786 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d8c1630-ca31-4da8-a66d-54d6649558d4-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kctxk\" (UID: \"1d8c1630-ca31-4da8-a66d-54d6649558d4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kctxk" Jan 22 09:38:35 crc kubenswrapper[4811]: I0122 09:38:35.654101 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1d8c1630-ca31-4da8-a66d-54d6649558d4-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kctxk\" (UID: \"1d8c1630-ca31-4da8-a66d-54d6649558d4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kctxk" Jan 22 09:38:35 crc kubenswrapper[4811]: I0122 09:38:35.654135 4811 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d8c1630-ca31-4da8-a66d-54d6649558d4-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kctxk\" (UID: \"1d8c1630-ca31-4da8-a66d-54d6649558d4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kctxk" Jan 22 09:38:35 crc kubenswrapper[4811]: I0122 09:38:35.654250 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1d8c1630-ca31-4da8-a66d-54d6649558d4-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kctxk\" (UID: \"1d8c1630-ca31-4da8-a66d-54d6649558d4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kctxk" Jan 22 09:38:35 crc kubenswrapper[4811]: I0122 09:38:35.654329 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9m2h\" (UniqueName: \"kubernetes.io/projected/1d8c1630-ca31-4da8-a66d-54d6649558d4-kube-api-access-n9m2h\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kctxk\" (UID: \"1d8c1630-ca31-4da8-a66d-54d6649558d4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kctxk" Jan 22 09:38:35 crc kubenswrapper[4811]: I0122 09:38:35.657877 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1d8c1630-ca31-4da8-a66d-54d6649558d4-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kctxk\" (UID: \"1d8c1630-ca31-4da8-a66d-54d6649558d4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kctxk" Jan 22 09:38:35 crc kubenswrapper[4811]: I0122 09:38:35.657910 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1d8c1630-ca31-4da8-a66d-54d6649558d4-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kctxk\" (UID: \"1d8c1630-ca31-4da8-a66d-54d6649558d4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kctxk" Jan 22 09:38:35 crc kubenswrapper[4811]: I0122 09:38:35.658065 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d8c1630-ca31-4da8-a66d-54d6649558d4-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kctxk\" (UID: \"1d8c1630-ca31-4da8-a66d-54d6649558d4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kctxk" Jan 22 09:38:35 crc kubenswrapper[4811]: I0122 09:38:35.668432 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9m2h\" (UniqueName: \"kubernetes.io/projected/1d8c1630-ca31-4da8-a66d-54d6649558d4-kube-api-access-n9m2h\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kctxk\" (UID: \"1d8c1630-ca31-4da8-a66d-54d6649558d4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kctxk" Jan 22 09:38:35 crc kubenswrapper[4811]: I0122 09:38:35.819638 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kctxk" Jan 22 09:38:36 crc kubenswrapper[4811]: I0122 09:38:36.233347 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-kctxk"] Jan 22 09:38:36 crc kubenswrapper[4811]: I0122 09:38:36.457517 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kctxk" event={"ID":"1d8c1630-ca31-4da8-a66d-54d6649558d4","Type":"ContainerStarted","Data":"f1fb87f704191d395887f4b0d3432c5631ec41df6fd28e8ddc72a707f52f8737"} Jan 22 09:38:37 crc kubenswrapper[4811]: I0122 09:38:37.464070 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kctxk" event={"ID":"1d8c1630-ca31-4da8-a66d-54d6649558d4","Type":"ContainerStarted","Data":"bac8e8cb4f03712c0a7e1b8fbc86b6de6f3e3a04ccbd6d9a1d50bf688a7b23f4"} Jan 22 09:39:05 crc kubenswrapper[4811]: I0122 09:39:05.501094 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:39:05 crc kubenswrapper[4811]: I0122 09:39:05.502286 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:39:05 crc kubenswrapper[4811]: I0122 09:39:05.502395 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" Jan 22 09:39:05 crc kubenswrapper[4811]: I0122 09:39:05.503099 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9d25b46cfa8348bdd18bcf253f004ae0938dcf272be4899b8026857ee7fbaf9f"} pod="openshift-machine-config-operator/machine-config-daemon-txvcq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 09:39:05 crc kubenswrapper[4811]: I0122 09:39:05.503234 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" containerID="cri-o://9d25b46cfa8348bdd18bcf253f004ae0938dcf272be4899b8026857ee7fbaf9f" gracePeriod=600 Jan 22 09:39:05 crc kubenswrapper[4811]: I0122 09:39:05.630716 4811 generic.go:334] "Generic (PLEG): container finished" podID="84068a6b-e189-419b-87f5-f31428f6eafe" containerID="9d25b46cfa8348bdd18bcf253f004ae0938dcf272be4899b8026857ee7fbaf9f" exitCode=0 Jan 22 09:39:05 crc kubenswrapper[4811]: I0122 09:39:05.630941 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" event={"ID":"84068a6b-e189-419b-87f5-f31428f6eafe","Type":"ContainerDied","Data":"9d25b46cfa8348bdd18bcf253f004ae0938dcf272be4899b8026857ee7fbaf9f"} Jan 22 09:39:05 crc kubenswrapper[4811]: I0122 09:39:05.630971 4811 scope.go:117] "RemoveContainer" containerID="a7045e9e75753498e22be1f49f2ac6b7e1f1858c2287359ab0b186d3e061fbd1" Jan 22 09:39:05 crc kubenswrapper[4811]: 
I0122 09:39:05.632326 4811 generic.go:334] "Generic (PLEG): container finished" podID="1d8c1630-ca31-4da8-a66d-54d6649558d4" containerID="bac8e8cb4f03712c0a7e1b8fbc86b6de6f3e3a04ccbd6d9a1d50bf688a7b23f4" exitCode=0 Jan 22 09:39:05 crc kubenswrapper[4811]: I0122 09:39:05.632353 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kctxk" event={"ID":"1d8c1630-ca31-4da8-a66d-54d6649558d4","Type":"ContainerDied","Data":"bac8e8cb4f03712c0a7e1b8fbc86b6de6f3e3a04ccbd6d9a1d50bf688a7b23f4"} Jan 22 09:39:06 crc kubenswrapper[4811]: I0122 09:39:06.644051 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" event={"ID":"84068a6b-e189-419b-87f5-f31428f6eafe","Type":"ContainerStarted","Data":"a3eaf3fca355ac62b01468032411a18e7dabb820be02232f4becb3d78ad2b686"} Jan 22 09:39:06 crc kubenswrapper[4811]: I0122 09:39:06.994497 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kctxk" Jan 22 09:39:07 crc kubenswrapper[4811]: I0122 09:39:07.143182 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9m2h\" (UniqueName: \"kubernetes.io/projected/1d8c1630-ca31-4da8-a66d-54d6649558d4-kube-api-access-n9m2h\") pod \"1d8c1630-ca31-4da8-a66d-54d6649558d4\" (UID: \"1d8c1630-ca31-4da8-a66d-54d6649558d4\") " Jan 22 09:39:07 crc kubenswrapper[4811]: I0122 09:39:07.143253 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1d8c1630-ca31-4da8-a66d-54d6649558d4-ceph\") pod \"1d8c1630-ca31-4da8-a66d-54d6649558d4\" (UID: \"1d8c1630-ca31-4da8-a66d-54d6649558d4\") " Jan 22 09:39:07 crc kubenswrapper[4811]: I0122 09:39:07.143473 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1d8c1630-ca31-4da8-a66d-54d6649558d4-ssh-key-openstack-edpm-ipam\") pod \"1d8c1630-ca31-4da8-a66d-54d6649558d4\" (UID: \"1d8c1630-ca31-4da8-a66d-54d6649558d4\") " Jan 22 09:39:07 crc kubenswrapper[4811]: I0122 09:39:07.143544 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d8c1630-ca31-4da8-a66d-54d6649558d4-inventory\") pod \"1d8c1630-ca31-4da8-a66d-54d6649558d4\" (UID: \"1d8c1630-ca31-4da8-a66d-54d6649558d4\") " Jan 22 09:39:07 crc kubenswrapper[4811]: I0122 09:39:07.147395 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d8c1630-ca31-4da8-a66d-54d6649558d4-ceph" (OuterVolumeSpecName: "ceph") pod "1d8c1630-ca31-4da8-a66d-54d6649558d4" (UID: "1d8c1630-ca31-4da8-a66d-54d6649558d4"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:39:07 crc kubenswrapper[4811]: I0122 09:39:07.147734 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d8c1630-ca31-4da8-a66d-54d6649558d4-kube-api-access-n9m2h" (OuterVolumeSpecName: "kube-api-access-n9m2h") pod "1d8c1630-ca31-4da8-a66d-54d6649558d4" (UID: "1d8c1630-ca31-4da8-a66d-54d6649558d4"). InnerVolumeSpecName "kube-api-access-n9m2h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:39:07 crc kubenswrapper[4811]: I0122 09:39:07.166266 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d8c1630-ca31-4da8-a66d-54d6649558d4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1d8c1630-ca31-4da8-a66d-54d6649558d4" (UID: "1d8c1630-ca31-4da8-a66d-54d6649558d4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:39:07 crc kubenswrapper[4811]: I0122 09:39:07.173445 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d8c1630-ca31-4da8-a66d-54d6649558d4-inventory" (OuterVolumeSpecName: "inventory") pod "1d8c1630-ca31-4da8-a66d-54d6649558d4" (UID: "1d8c1630-ca31-4da8-a66d-54d6649558d4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:39:07 crc kubenswrapper[4811]: I0122 09:39:07.246457 4811 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1d8c1630-ca31-4da8-a66d-54d6649558d4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 09:39:07 crc kubenswrapper[4811]: I0122 09:39:07.246489 4811 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d8c1630-ca31-4da8-a66d-54d6649558d4-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 09:39:07 crc kubenswrapper[4811]: I0122 09:39:07.246499 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9m2h\" (UniqueName: \"kubernetes.io/projected/1d8c1630-ca31-4da8-a66d-54d6649558d4-kube-api-access-n9m2h\") on node \"crc\" DevicePath \"\"" Jan 22 09:39:07 crc kubenswrapper[4811]: I0122 09:39:07.246507 4811 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1d8c1630-ca31-4da8-a66d-54d6649558d4-ceph\") on node \"crc\" DevicePath \"\"" Jan 22 09:39:07 crc kubenswrapper[4811]: I0122 09:39:07.651682 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kctxk" event={"ID":"1d8c1630-ca31-4da8-a66d-54d6649558d4","Type":"ContainerDied","Data":"f1fb87f704191d395887f4b0d3432c5631ec41df6fd28e8ddc72a707f52f8737"} Jan 22 09:39:07 crc kubenswrapper[4811]: I0122 09:39:07.651716 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kctxk" Jan 22 09:39:07 crc kubenswrapper[4811]: I0122 09:39:07.651738 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1fb87f704191d395887f4b0d3432c5631ec41df6fd28e8ddc72a707f52f8737" Jan 22 09:39:07 crc kubenswrapper[4811]: I0122 09:39:07.727295 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mk9fn"] Jan 22 09:39:07 crc kubenswrapper[4811]: E0122 09:39:07.727661 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d8c1630-ca31-4da8-a66d-54d6649558d4" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 22 09:39:07 crc kubenswrapper[4811]: I0122 09:39:07.727679 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d8c1630-ca31-4da8-a66d-54d6649558d4" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 22 09:39:07 crc kubenswrapper[4811]: I0122 09:39:07.727888 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d8c1630-ca31-4da8-a66d-54d6649558d4" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 22 09:39:07 crc kubenswrapper[4811]: I0122 09:39:07.728387 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mk9fn" Jan 22 09:39:07 crc kubenswrapper[4811]: I0122 09:39:07.730322 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 22 09:39:07 crc kubenswrapper[4811]: I0122 09:39:07.730737 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 09:39:07 crc kubenswrapper[4811]: I0122 09:39:07.731388 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 09:39:07 crc kubenswrapper[4811]: I0122 09:39:07.731776 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7cbs6" Jan 22 09:39:07 crc kubenswrapper[4811]: I0122 09:39:07.731778 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 09:39:07 crc kubenswrapper[4811]: I0122 09:39:07.733755 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mk9fn"] Jan 22 09:39:07 crc kubenswrapper[4811]: I0122 09:39:07.856021 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs2b4\" (UniqueName: \"kubernetes.io/projected/2994821b-e7da-4315-a718-9cc885e55fa4-kube-api-access-fs2b4\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mk9fn\" (UID: \"2994821b-e7da-4315-a718-9cc885e55fa4\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mk9fn" Jan 22 09:39:07 crc kubenswrapper[4811]: I0122 09:39:07.856066 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2994821b-e7da-4315-a718-9cc885e55fa4-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mk9fn\" (UID: \"2994821b-e7da-4315-a718-9cc885e55fa4\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mk9fn" Jan 22 09:39:07 crc kubenswrapper[4811]: I0122 09:39:07.856189 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2994821b-e7da-4315-a718-9cc885e55fa4-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mk9fn\" (UID: \"2994821b-e7da-4315-a718-9cc885e55fa4\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mk9fn" Jan 22 09:39:07 crc kubenswrapper[4811]: I0122 09:39:07.856260 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2994821b-e7da-4315-a718-9cc885e55fa4-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mk9fn\" (UID: \"2994821b-e7da-4315-a718-9cc885e55fa4\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mk9fn" Jan 22 09:39:07 crc kubenswrapper[4811]: I0122 09:39:07.958295 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fs2b4\" (UniqueName: \"kubernetes.io/projected/2994821b-e7da-4315-a718-9cc885e55fa4-kube-api-access-fs2b4\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mk9fn\" (UID: \"2994821b-e7da-4315-a718-9cc885e55fa4\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mk9fn" Jan 22 09:39:07 crc kubenswrapper[4811]: I0122 09:39:07.958348 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2994821b-e7da-4315-a718-9cc885e55fa4-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mk9fn\" (UID: \"2994821b-e7da-4315-a718-9cc885e55fa4\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mk9fn" Jan 22 09:39:07 crc kubenswrapper[4811]: I0122 09:39:07.958467 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2994821b-e7da-4315-a718-9cc885e55fa4-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mk9fn\" (UID: \"2994821b-e7da-4315-a718-9cc885e55fa4\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mk9fn" Jan 22 09:39:07 crc kubenswrapper[4811]: I0122 09:39:07.958967 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2994821b-e7da-4315-a718-9cc885e55fa4-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mk9fn\" (UID: \"2994821b-e7da-4315-a718-9cc885e55fa4\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mk9fn" Jan 22 09:39:07 crc kubenswrapper[4811]: I0122 09:39:07.961960 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2994821b-e7da-4315-a718-9cc885e55fa4-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mk9fn\" (UID: \"2994821b-e7da-4315-a718-9cc885e55fa4\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mk9fn" Jan 22 09:39:07 crc kubenswrapper[4811]: I0122 09:39:07.962561 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2994821b-e7da-4315-a718-9cc885e55fa4-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mk9fn\" (UID: \"2994821b-e7da-4315-a718-9cc885e55fa4\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mk9fn" Jan 22 09:39:07 crc kubenswrapper[4811]: I0122 09:39:07.962959 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" 
(UniqueName: \"kubernetes.io/secret/2994821b-e7da-4315-a718-9cc885e55fa4-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mk9fn\" (UID: \"2994821b-e7da-4315-a718-9cc885e55fa4\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mk9fn" Jan 22 09:39:07 crc kubenswrapper[4811]: I0122 09:39:07.972398 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs2b4\" (UniqueName: \"kubernetes.io/projected/2994821b-e7da-4315-a718-9cc885e55fa4-kube-api-access-fs2b4\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mk9fn\" (UID: \"2994821b-e7da-4315-a718-9cc885e55fa4\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mk9fn" Jan 22 09:39:08 crc kubenswrapper[4811]: I0122 09:39:08.041300 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mk9fn" Jan 22 09:39:08 crc kubenswrapper[4811]: I0122 09:39:08.469543 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mk9fn"] Jan 22 09:39:08 crc kubenswrapper[4811]: W0122 09:39:08.478113 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2994821b_e7da_4315_a718_9cc885e55fa4.slice/crio-89407875e83d5ad0b508d08976f621ef4e3f7cc84a4eac1edc0697651bf64ec3 WatchSource:0}: Error finding container 89407875e83d5ad0b508d08976f621ef4e3f7cc84a4eac1edc0697651bf64ec3: Status 404 returned error can't find the container with id 89407875e83d5ad0b508d08976f621ef4e3f7cc84a4eac1edc0697651bf64ec3 Jan 22 09:39:08 crc kubenswrapper[4811]: I0122 09:39:08.661129 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mk9fn" event={"ID":"2994821b-e7da-4315-a718-9cc885e55fa4","Type":"ContainerStarted","Data":"89407875e83d5ad0b508d08976f621ef4e3f7cc84a4eac1edc0697651bf64ec3"} Jan 22 09:39:09 crc kubenswrapper[4811]: I0122 09:39:09.721384 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mk9fn" event={"ID":"2994821b-e7da-4315-a718-9cc885e55fa4","Type":"ContainerStarted","Data":"c97e80fd2addf957ff129d34b328f6f08667930f29015775347a8a0e1663b112"} Jan 22 09:39:09 crc kubenswrapper[4811]: I0122 09:39:09.741955 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mk9fn" podStartSLOduration=2.166467969 podStartE2EDuration="2.741936023s" podCreationTimestamp="2026-01-22 09:39:07 +0000 UTC" firstStartedPulling="2026-01-22 09:39:08.48004568 +0000 UTC m=+1992.802232803" lastFinishedPulling="2026-01-22 09:39:09.055513734 +0000 UTC m=+1993.377700857" observedRunningTime="2026-01-22 09:39:09.737986375 +0000 UTC m=+1994.060173518" watchObservedRunningTime="2026-01-22 09:39:09.741936023 +0000 UTC m=+1994.064123147" Jan 22 09:39:12 crc kubenswrapper[4811]: I0122 09:39:12.739688 4811 generic.go:334] "Generic (PLEG): container finished" podID="2994821b-e7da-4315-a718-9cc885e55fa4" containerID="c97e80fd2addf957ff129d34b328f6f08667930f29015775347a8a0e1663b112" exitCode=0 Jan 22 09:39:12 crc kubenswrapper[4811]: I0122 09:39:12.739781 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mk9fn" 
event={"ID":"2994821b-e7da-4315-a718-9cc885e55fa4","Type":"ContainerDied","Data":"c97e80fd2addf957ff129d34b328f6f08667930f29015775347a8a0e1663b112"} Jan 22 09:39:14 crc kubenswrapper[4811]: I0122 09:39:14.222463 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mk9fn" Jan 22 09:39:14 crc kubenswrapper[4811]: I0122 09:39:14.408060 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fs2b4\" (UniqueName: \"kubernetes.io/projected/2994821b-e7da-4315-a718-9cc885e55fa4-kube-api-access-fs2b4\") pod \"2994821b-e7da-4315-a718-9cc885e55fa4\" (UID: \"2994821b-e7da-4315-a718-9cc885e55fa4\") " Jan 22 09:39:14 crc kubenswrapper[4811]: I0122 09:39:14.408116 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2994821b-e7da-4315-a718-9cc885e55fa4-ceph\") pod \"2994821b-e7da-4315-a718-9cc885e55fa4\" (UID: \"2994821b-e7da-4315-a718-9cc885e55fa4\") " Jan 22 09:39:14 crc kubenswrapper[4811]: I0122 09:39:14.408151 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2994821b-e7da-4315-a718-9cc885e55fa4-inventory\") pod \"2994821b-e7da-4315-a718-9cc885e55fa4\" (UID: \"2994821b-e7da-4315-a718-9cc885e55fa4\") " Jan 22 09:39:14 crc kubenswrapper[4811]: I0122 09:39:14.408427 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2994821b-e7da-4315-a718-9cc885e55fa4-ssh-key-openstack-edpm-ipam\") pod \"2994821b-e7da-4315-a718-9cc885e55fa4\" (UID: \"2994821b-e7da-4315-a718-9cc885e55fa4\") " Jan 22 09:39:14 crc kubenswrapper[4811]: I0122 09:39:14.414775 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2994821b-e7da-4315-a718-9cc885e55fa4-kube-api-access-fs2b4" (OuterVolumeSpecName: "kube-api-access-fs2b4") pod "2994821b-e7da-4315-a718-9cc885e55fa4" (UID: "2994821b-e7da-4315-a718-9cc885e55fa4"). InnerVolumeSpecName "kube-api-access-fs2b4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:39:14 crc kubenswrapper[4811]: I0122 09:39:14.414883 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2994821b-e7da-4315-a718-9cc885e55fa4-ceph" (OuterVolumeSpecName: "ceph") pod "2994821b-e7da-4315-a718-9cc885e55fa4" (UID: "2994821b-e7da-4315-a718-9cc885e55fa4"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:39:14 crc kubenswrapper[4811]: I0122 09:39:14.430724 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2994821b-e7da-4315-a718-9cc885e55fa4-inventory" (OuterVolumeSpecName: "inventory") pod "2994821b-e7da-4315-a718-9cc885e55fa4" (UID: "2994821b-e7da-4315-a718-9cc885e55fa4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:39:14 crc kubenswrapper[4811]: I0122 09:39:14.434855 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2994821b-e7da-4315-a718-9cc885e55fa4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2994821b-e7da-4315-a718-9cc885e55fa4" (UID: "2994821b-e7da-4315-a718-9cc885e55fa4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:39:14 crc kubenswrapper[4811]: I0122 09:39:14.511621 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fs2b4\" (UniqueName: \"kubernetes.io/projected/2994821b-e7da-4315-a718-9cc885e55fa4-kube-api-access-fs2b4\") on node \"crc\" DevicePath \"\"" Jan 22 09:39:14 crc kubenswrapper[4811]: I0122 09:39:14.511668 4811 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2994821b-e7da-4315-a718-9cc885e55fa4-ceph\") on node \"crc\" DevicePath \"\"" Jan 22 09:39:14 crc kubenswrapper[4811]: I0122 09:39:14.511680 4811 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2994821b-e7da-4315-a718-9cc885e55fa4-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 09:39:14 crc kubenswrapper[4811]: I0122 09:39:14.511690 4811 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2994821b-e7da-4315-a718-9cc885e55fa4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 09:39:14 crc kubenswrapper[4811]: I0122 09:39:14.757031 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mk9fn" event={"ID":"2994821b-e7da-4315-a718-9cc885e55fa4","Type":"ContainerDied","Data":"89407875e83d5ad0b508d08976f621ef4e3f7cc84a4eac1edc0697651bf64ec3"} Jan 22 09:39:14 crc kubenswrapper[4811]: I0122 09:39:14.757074 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89407875e83d5ad0b508d08976f621ef4e3f7cc84a4eac1edc0697651bf64ec3" Jan 22 09:39:14 crc kubenswrapper[4811]: I0122 09:39:14.757147 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mk9fn" Jan 22 09:39:14 crc kubenswrapper[4811]: I0122 09:39:14.824504 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hnjt7"] Jan 22 09:39:14 crc kubenswrapper[4811]: E0122 09:39:14.824883 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2994821b-e7da-4315-a718-9cc885e55fa4" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 22 09:39:14 crc kubenswrapper[4811]: I0122 09:39:14.824902 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="2994821b-e7da-4315-a718-9cc885e55fa4" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 22 09:39:14 crc kubenswrapper[4811]: I0122 09:39:14.825071 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="2994821b-e7da-4315-a718-9cc885e55fa4" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 22 09:39:14 crc kubenswrapper[4811]: I0122 09:39:14.825652 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hnjt7" Jan 22 09:39:14 crc kubenswrapper[4811]: I0122 09:39:14.830295 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 09:39:14 crc kubenswrapper[4811]: I0122 09:39:14.830381 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 22 09:39:14 crc kubenswrapper[4811]: I0122 09:39:14.830771 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7cbs6" Jan 22 09:39:14 crc kubenswrapper[4811]: I0122 09:39:14.831133 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 09:39:14 crc kubenswrapper[4811]: I0122 09:39:14.831397 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 09:39:14 crc kubenswrapper[4811]: I0122 09:39:14.889833 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hnjt7"] Jan 22 09:39:14 crc kubenswrapper[4811]: I0122 09:39:14.917792 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4baf2862-a8ca-4314-a70a-67e087e5c897-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hnjt7\" (UID: \"4baf2862-a8ca-4314-a70a-67e087e5c897\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hnjt7" Jan 22 09:39:14 crc kubenswrapper[4811]: I0122 09:39:14.917941 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4baf2862-a8ca-4314-a70a-67e087e5c897-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hnjt7\" (UID: \"4baf2862-a8ca-4314-a70a-67e087e5c897\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hnjt7" Jan 22 09:39:14 crc kubenswrapper[4811]: I0122 09:39:14.918013 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7frn\" (UniqueName: \"kubernetes.io/projected/4baf2862-a8ca-4314-a70a-67e087e5c897-kube-api-access-d7frn\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hnjt7\" (UID: \"4baf2862-a8ca-4314-a70a-67e087e5c897\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hnjt7" Jan 22 09:39:14 crc kubenswrapper[4811]: I0122 09:39:14.918044 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4baf2862-a8ca-4314-a70a-67e087e5c897-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hnjt7\" (UID: \"4baf2862-a8ca-4314-a70a-67e087e5c897\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hnjt7" Jan 22 09:39:15 crc kubenswrapper[4811]: I0122 09:39:15.019605 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4baf2862-a8ca-4314-a70a-67e087e5c897-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hnjt7\" (UID: \"4baf2862-a8ca-4314-a70a-67e087e5c897\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hnjt7" Jan 22 09:39:15 crc kubenswrapper[4811]: I0122 09:39:15.019748 4811 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4baf2862-a8ca-4314-a70a-67e087e5c897-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hnjt7\" (UID: \"4baf2862-a8ca-4314-a70a-67e087e5c897\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hnjt7" Jan 22 09:39:15 crc kubenswrapper[4811]: I0122 09:39:15.019904 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4baf2862-a8ca-4314-a70a-67e087e5c897-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hnjt7\" (UID: \"4baf2862-a8ca-4314-a70a-67e087e5c897\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hnjt7" Jan 22 09:39:15 crc kubenswrapper[4811]: I0122 09:39:15.019980 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7frn\" (UniqueName: \"kubernetes.io/projected/4baf2862-a8ca-4314-a70a-67e087e5c897-kube-api-access-d7frn\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hnjt7\" (UID: \"4baf2862-a8ca-4314-a70a-67e087e5c897\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hnjt7" Jan 22 09:39:15 crc kubenswrapper[4811]: I0122 09:39:15.024108 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4baf2862-a8ca-4314-a70a-67e087e5c897-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hnjt7\" (UID: \"4baf2862-a8ca-4314-a70a-67e087e5c897\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hnjt7" Jan 22 09:39:15 crc kubenswrapper[4811]: I0122 09:39:15.024272 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4baf2862-a8ca-4314-a70a-67e087e5c897-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hnjt7\" (UID: \"4baf2862-a8ca-4314-a70a-67e087e5c897\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hnjt7" Jan 22 09:39:15 crc kubenswrapper[4811]: I0122 09:39:15.025947 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4baf2862-a8ca-4314-a70a-67e087e5c897-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hnjt7\" (UID: \"4baf2862-a8ca-4314-a70a-67e087e5c897\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hnjt7" Jan 22 09:39:15 crc kubenswrapper[4811]: I0122 09:39:15.034183 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7frn\" (UniqueName: \"kubernetes.io/projected/4baf2862-a8ca-4314-a70a-67e087e5c897-kube-api-access-d7frn\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hnjt7\" (UID: \"4baf2862-a8ca-4314-a70a-67e087e5c897\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hnjt7" Jan 22 09:39:15 crc kubenswrapper[4811]: I0122 09:39:15.146727 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hnjt7" Jan 22 09:39:15 crc kubenswrapper[4811]: I0122 09:39:15.592288 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hnjt7"] Jan 22 09:39:15 crc kubenswrapper[4811]: I0122 09:39:15.766258 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hnjt7" event={"ID":"4baf2862-a8ca-4314-a70a-67e087e5c897","Type":"ContainerStarted","Data":"57bd85074bd4964c04e254ece0c7542e369c6a6b7e2bd2bde514b040bb7151db"} Jan 22 09:39:16 crc kubenswrapper[4811]: I0122 09:39:16.774079 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hnjt7" event={"ID":"4baf2862-a8ca-4314-a70a-67e087e5c897","Type":"ContainerStarted","Data":"e37a19f108eb179e9d83dcf91d5ddf85b4c988f0b9da107386a4b4ed8fc07c47"} Jan 22 09:39:16 crc kubenswrapper[4811]: I0122 09:39:16.797906 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hnjt7" podStartSLOduration=2.239788829 podStartE2EDuration="2.797893115s" podCreationTimestamp="2026-01-22 09:39:14 +0000 UTC" firstStartedPulling="2026-01-22 09:39:15.597703838 +0000 UTC m=+1999.919890961" lastFinishedPulling="2026-01-22 09:39:16.155808124 +0000 UTC m=+2000.477995247" observedRunningTime="2026-01-22 09:39:16.79071624 +0000 UTC m=+2001.112903363" watchObservedRunningTime="2026-01-22 09:39:16.797893115 +0000 UTC m=+2001.120080238" Jan 22 09:39:22 crc kubenswrapper[4811]: I0122 09:39:22.317886 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lvlp6"] Jan 22 09:39:22 crc kubenswrapper[4811]: I0122 09:39:22.319681 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lvlp6" Jan 22 09:39:22 crc kubenswrapper[4811]: I0122 09:39:22.334580 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lvlp6"] Jan 22 09:39:22 crc kubenswrapper[4811]: I0122 09:39:22.463985 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5eca884a-dbab-4163-919b-15af29e6740c-utilities\") pod \"redhat-marketplace-lvlp6\" (UID: \"5eca884a-dbab-4163-919b-15af29e6740c\") " pod="openshift-marketplace/redhat-marketplace-lvlp6" Jan 22 09:39:22 crc kubenswrapper[4811]: I0122 09:39:22.464095 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5eca884a-dbab-4163-919b-15af29e6740c-catalog-content\") pod \"redhat-marketplace-lvlp6\" (UID: \"5eca884a-dbab-4163-919b-15af29e6740c\") " pod="openshift-marketplace/redhat-marketplace-lvlp6" Jan 22 09:39:22 crc kubenswrapper[4811]: I0122 09:39:22.464122 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n925p\" (UniqueName: \"kubernetes.io/projected/5eca884a-dbab-4163-919b-15af29e6740c-kube-api-access-n925p\") pod \"redhat-marketplace-lvlp6\" (UID: \"5eca884a-dbab-4163-919b-15af29e6740c\") " pod="openshift-marketplace/redhat-marketplace-lvlp6" Jan 22 09:39:22 crc kubenswrapper[4811]: I0122 09:39:22.526180 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d2tmd"] Jan 22 09:39:22 crc kubenswrapper[4811]: I0122 09:39:22.527840 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d2tmd" Jan 22 09:39:22 crc kubenswrapper[4811]: I0122 09:39:22.551964 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d2tmd"] Jan 22 09:39:22 crc kubenswrapper[4811]: I0122 09:39:22.571891 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghxpx\" (UniqueName: \"kubernetes.io/projected/e6bbf2a1-0ee5-47db-ae6e-485cadd0b802-kube-api-access-ghxpx\") pod \"certified-operators-d2tmd\" (UID: \"e6bbf2a1-0ee5-47db-ae6e-485cadd0b802\") " pod="openshift-marketplace/certified-operators-d2tmd" Jan 22 09:39:22 crc kubenswrapper[4811]: I0122 09:39:22.572070 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5eca884a-dbab-4163-919b-15af29e6740c-utilities\") pod \"redhat-marketplace-lvlp6\" (UID: \"5eca884a-dbab-4163-919b-15af29e6740c\") " pod="openshift-marketplace/redhat-marketplace-lvlp6" Jan 22 09:39:22 crc kubenswrapper[4811]: I0122 09:39:22.572204 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5eca884a-dbab-4163-919b-15af29e6740c-catalog-content\") pod \"redhat-marketplace-lvlp6\" (UID: \"5eca884a-dbab-4163-919b-15af29e6740c\") " pod="openshift-marketplace/redhat-marketplace-lvlp6" Jan 22 09:39:22 crc kubenswrapper[4811]: I0122 09:39:22.572230 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n925p\" (UniqueName: \"kubernetes.io/projected/5eca884a-dbab-4163-919b-15af29e6740c-kube-api-access-n925p\") pod \"redhat-marketplace-lvlp6\" (UID: 
\"5eca884a-dbab-4163-919b-15af29e6740c\") " pod="openshift-marketplace/redhat-marketplace-lvlp6" Jan 22 09:39:22 crc kubenswrapper[4811]: I0122 09:39:22.572278 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6bbf2a1-0ee5-47db-ae6e-485cadd0b802-utilities\") pod \"certified-operators-d2tmd\" (UID: \"e6bbf2a1-0ee5-47db-ae6e-485cadd0b802\") " pod="openshift-marketplace/certified-operators-d2tmd" Jan 22 09:39:22 crc kubenswrapper[4811]: I0122 09:39:22.572337 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6bbf2a1-0ee5-47db-ae6e-485cadd0b802-catalog-content\") pod \"certified-operators-d2tmd\" (UID: \"e6bbf2a1-0ee5-47db-ae6e-485cadd0b802\") " pod="openshift-marketplace/certified-operators-d2tmd" Jan 22 09:39:22 crc kubenswrapper[4811]: I0122 09:39:22.572699 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5eca884a-dbab-4163-919b-15af29e6740c-utilities\") pod \"redhat-marketplace-lvlp6\" (UID: \"5eca884a-dbab-4163-919b-15af29e6740c\") " pod="openshift-marketplace/redhat-marketplace-lvlp6" Jan 22 09:39:22 crc kubenswrapper[4811]: I0122 09:39:22.572735 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5eca884a-dbab-4163-919b-15af29e6740c-catalog-content\") pod \"redhat-marketplace-lvlp6\" (UID: \"5eca884a-dbab-4163-919b-15af29e6740c\") " pod="openshift-marketplace/redhat-marketplace-lvlp6" Jan 22 09:39:22 crc kubenswrapper[4811]: I0122 09:39:22.589009 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n925p\" (UniqueName: \"kubernetes.io/projected/5eca884a-dbab-4163-919b-15af29e6740c-kube-api-access-n925p\") pod \"redhat-marketplace-lvlp6\" (UID: \"5eca884a-dbab-4163-919b-15af29e6740c\") " pod="openshift-marketplace/redhat-marketplace-lvlp6" Jan 22 09:39:22 crc kubenswrapper[4811]: I0122 09:39:22.643654 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lvlp6" Jan 22 09:39:22 crc kubenswrapper[4811]: I0122 09:39:22.674689 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6bbf2a1-0ee5-47db-ae6e-485cadd0b802-utilities\") pod \"certified-operators-d2tmd\" (UID: \"e6bbf2a1-0ee5-47db-ae6e-485cadd0b802\") " pod="openshift-marketplace/certified-operators-d2tmd" Jan 22 09:39:22 crc kubenswrapper[4811]: I0122 09:39:22.674781 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6bbf2a1-0ee5-47db-ae6e-485cadd0b802-catalog-content\") pod \"certified-operators-d2tmd\" (UID: \"e6bbf2a1-0ee5-47db-ae6e-485cadd0b802\") " pod="openshift-marketplace/certified-operators-d2tmd" Jan 22 09:39:22 crc kubenswrapper[4811]: I0122 09:39:22.674828 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghxpx\" (UniqueName: \"kubernetes.io/projected/e6bbf2a1-0ee5-47db-ae6e-485cadd0b802-kube-api-access-ghxpx\") pod \"certified-operators-d2tmd\" (UID: \"e6bbf2a1-0ee5-47db-ae6e-485cadd0b802\") " pod="openshift-marketplace/certified-operators-d2tmd" Jan 22 09:39:22 crc kubenswrapper[4811]: I0122 09:39:22.675530 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6bbf2a1-0ee5-47db-ae6e-485cadd0b802-utilities\") pod \"certified-operators-d2tmd\" (UID: \"e6bbf2a1-0ee5-47db-ae6e-485cadd0b802\") " pod="openshift-marketplace/certified-operators-d2tmd" Jan 22 09:39:22 crc kubenswrapper[4811]: I0122 09:39:22.675664 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6bbf2a1-0ee5-47db-ae6e-485cadd0b802-catalog-content\") pod \"certified-operators-d2tmd\" (UID: \"e6bbf2a1-0ee5-47db-ae6e-485cadd0b802\") " pod="openshift-marketplace/certified-operators-d2tmd" Jan 22 09:39:22 crc kubenswrapper[4811]: I0122 09:39:22.701258 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghxpx\" (UniqueName: \"kubernetes.io/projected/e6bbf2a1-0ee5-47db-ae6e-485cadd0b802-kube-api-access-ghxpx\") pod \"certified-operators-d2tmd\" (UID: \"e6bbf2a1-0ee5-47db-ae6e-485cadd0b802\") " pod="openshift-marketplace/certified-operators-d2tmd" Jan 22 09:39:22 crc kubenswrapper[4811]: I0122 09:39:22.841341 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d2tmd" Jan 22 09:39:23 crc kubenswrapper[4811]: I0122 09:39:23.177338 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lvlp6"] Jan 22 09:39:23 crc kubenswrapper[4811]: I0122 09:39:23.327246 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d2tmd"] Jan 22 09:39:23 crc kubenswrapper[4811]: W0122 09:39:23.330288 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6bbf2a1_0ee5_47db_ae6e_485cadd0b802.slice/crio-8dc06d3b52f65f2a78a5ba22ad9f9222104c687bef3c377f01c91007ba06e1b5 WatchSource:0}: Error finding container 8dc06d3b52f65f2a78a5ba22ad9f9222104c687bef3c377f01c91007ba06e1b5: Status 404 returned error can't find the container with id 8dc06d3b52f65f2a78a5ba22ad9f9222104c687bef3c377f01c91007ba06e1b5 Jan 22 09:39:23 crc kubenswrapper[4811]: I0122 09:39:23.820313 4811 generic.go:334] "Generic (PLEG): container finished" podID="5eca884a-dbab-4163-919b-15af29e6740c" containerID="ced32301e5ef1d8b3552438c38496f7e13f260331f8d9552ec85b135f617881c" exitCode=0 Jan 22 09:39:23 crc kubenswrapper[4811]: I0122 09:39:23.820354 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lvlp6" event={"ID":"5eca884a-dbab-4163-919b-15af29e6740c","Type":"ContainerDied","Data":"ced32301e5ef1d8b3552438c38496f7e13f260331f8d9552ec85b135f617881c"} Jan 22 09:39:23 crc kubenswrapper[4811]: I0122 09:39:23.820722 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lvlp6" event={"ID":"5eca884a-dbab-4163-919b-15af29e6740c","Type":"ContainerStarted","Data":"a10623d798126df12f859b648741ad3606e1bcefbdd0ccde0b1365386c677637"} Jan 22 09:39:23 crc kubenswrapper[4811]: I0122 09:39:23.822845 4811 generic.go:334] "Generic (PLEG): container finished" podID="e6bbf2a1-0ee5-47db-ae6e-485cadd0b802" containerID="092638de032eb71c2242a535dc11d186a59d6327951ff345ea1ab322aa482372" exitCode=0 Jan 22 09:39:23 crc kubenswrapper[4811]: I0122 09:39:23.822892 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2tmd" event={"ID":"e6bbf2a1-0ee5-47db-ae6e-485cadd0b802","Type":"ContainerDied","Data":"092638de032eb71c2242a535dc11d186a59d6327951ff345ea1ab322aa482372"} Jan 22 09:39:23 crc kubenswrapper[4811]: I0122 09:39:23.822917 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2tmd" event={"ID":"e6bbf2a1-0ee5-47db-ae6e-485cadd0b802","Type":"ContainerStarted","Data":"8dc06d3b52f65f2a78a5ba22ad9f9222104c687bef3c377f01c91007ba06e1b5"} Jan 22 09:39:24 crc kubenswrapper[4811]: I0122 09:39:24.832228 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lvlp6" event={"ID":"5eca884a-dbab-4163-919b-15af29e6740c","Type":"ContainerStarted","Data":"40654ce34427dd5aa99d6c5ffc964b6baf4c5a7cece7731537f8a5ae6fd88275"} Jan 22 09:39:25 crc kubenswrapper[4811]: I0122 09:39:25.847867 4811 generic.go:334] "Generic (PLEG): container finished" podID="5eca884a-dbab-4163-919b-15af29e6740c" containerID="40654ce34427dd5aa99d6c5ffc964b6baf4c5a7cece7731537f8a5ae6fd88275" exitCode=0 Jan 22 09:39:25 crc kubenswrapper[4811]: I0122 09:39:25.848058 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lvlp6" 
event={"ID":"5eca884a-dbab-4163-919b-15af29e6740c","Type":"ContainerDied","Data":"40654ce34427dd5aa99d6c5ffc964b6baf4c5a7cece7731537f8a5ae6fd88275"} Jan 22 09:39:25 crc kubenswrapper[4811]: I0122 09:39:25.850417 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2tmd" event={"ID":"e6bbf2a1-0ee5-47db-ae6e-485cadd0b802","Type":"ContainerStarted","Data":"bd0d2030f7c4ba91c502922f1ee696ab6d91cf42d54d0711b67cdfb8efed2d8e"} Jan 22 09:39:26 crc kubenswrapper[4811]: I0122 09:39:26.866311 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lvlp6" event={"ID":"5eca884a-dbab-4163-919b-15af29e6740c","Type":"ContainerStarted","Data":"083d94bbd3b5a53495aa7612f3cfbd25c1c8a8ab7d391fad0c1abe48960a6afe"} Jan 22 09:39:26 crc kubenswrapper[4811]: I0122 09:39:26.868944 4811 generic.go:334] "Generic (PLEG): container finished" podID="e6bbf2a1-0ee5-47db-ae6e-485cadd0b802" containerID="bd0d2030f7c4ba91c502922f1ee696ab6d91cf42d54d0711b67cdfb8efed2d8e" exitCode=0 Jan 22 09:39:26 crc kubenswrapper[4811]: I0122 09:39:26.869004 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2tmd" event={"ID":"e6bbf2a1-0ee5-47db-ae6e-485cadd0b802","Type":"ContainerDied","Data":"bd0d2030f7c4ba91c502922f1ee696ab6d91cf42d54d0711b67cdfb8efed2d8e"} Jan 22 09:39:26 crc kubenswrapper[4811]: I0122 09:39:26.899932 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lvlp6" podStartSLOduration=2.372171693 podStartE2EDuration="4.899914432s" podCreationTimestamp="2026-01-22 09:39:22 +0000 UTC" firstStartedPulling="2026-01-22 09:39:23.82184218 +0000 UTC m=+2008.144029303" lastFinishedPulling="2026-01-22 09:39:26.34958492 +0000 UTC m=+2010.671772042" observedRunningTime="2026-01-22 09:39:26.891947297 +0000 UTC m=+2011.214134420" watchObservedRunningTime="2026-01-22 09:39:26.899914432 +0000 UTC m=+2011.222101555" Jan 22 09:39:27 crc kubenswrapper[4811]: I0122 09:39:27.876901 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2tmd" event={"ID":"e6bbf2a1-0ee5-47db-ae6e-485cadd0b802","Type":"ContainerStarted","Data":"7ad4ee4e29e4160acce9b43bf51a049c4db8639af097152a47f48c1c59ceba2c"} Jan 22 09:39:27 crc kubenswrapper[4811]: I0122 09:39:27.898202 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d2tmd" podStartSLOduration=2.394229603 podStartE2EDuration="5.898178064s" podCreationTimestamp="2026-01-22 09:39:22 +0000 UTC" firstStartedPulling="2026-01-22 09:39:23.824241955 +0000 UTC m=+2008.146429078" lastFinishedPulling="2026-01-22 09:39:27.328190417 +0000 UTC m=+2011.650377539" observedRunningTime="2026-01-22 09:39:27.892206653 +0000 UTC m=+2012.214393786" watchObservedRunningTime="2026-01-22 09:39:27.898178064 +0000 UTC m=+2012.220365187" Jan 22 09:39:32 crc kubenswrapper[4811]: I0122 09:39:32.643925 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lvlp6" Jan 22 09:39:32 crc kubenswrapper[4811]: I0122 09:39:32.644205 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lvlp6" Jan 22 09:39:32 crc kubenswrapper[4811]: I0122 09:39:32.678887 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lvlp6" Jan 22 09:39:32 crc 
kubenswrapper[4811]: I0122 09:39:32.842439 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d2tmd" Jan 22 09:39:32 crc kubenswrapper[4811]: I0122 09:39:32.842489 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d2tmd" Jan 22 09:39:32 crc kubenswrapper[4811]: I0122 09:39:32.883857 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d2tmd" Jan 22 09:39:32 crc kubenswrapper[4811]: I0122 09:39:32.943933 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d2tmd" Jan 22 09:39:32 crc kubenswrapper[4811]: I0122 09:39:32.949013 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lvlp6" Jan 22 09:39:34 crc kubenswrapper[4811]: I0122 09:39:34.708488 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d2tmd"] Jan 22 09:39:34 crc kubenswrapper[4811]: I0122 09:39:34.925019 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d2tmd" podUID="e6bbf2a1-0ee5-47db-ae6e-485cadd0b802" containerName="registry-server" containerID="cri-o://7ad4ee4e29e4160acce9b43bf51a049c4db8639af097152a47f48c1c59ceba2c" gracePeriod=2 Jan 22 09:39:35 crc kubenswrapper[4811]: I0122 09:39:35.317204 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lvlp6"] Jan 22 09:39:35 crc kubenswrapper[4811]: I0122 09:39:35.317471 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lvlp6" podUID="5eca884a-dbab-4163-919b-15af29e6740c" containerName="registry-server" containerID="cri-o://083d94bbd3b5a53495aa7612f3cfbd25c1c8a8ab7d391fad0c1abe48960a6afe" gracePeriod=2 Jan 22 09:39:35 crc kubenswrapper[4811]: I0122 09:39:35.507023 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d2tmd" Jan 22 09:39:35 crc kubenswrapper[4811]: I0122 09:39:35.568120 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6bbf2a1-0ee5-47db-ae6e-485cadd0b802-utilities\") pod \"e6bbf2a1-0ee5-47db-ae6e-485cadd0b802\" (UID: \"e6bbf2a1-0ee5-47db-ae6e-485cadd0b802\") " Jan 22 09:39:35 crc kubenswrapper[4811]: I0122 09:39:35.568166 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6bbf2a1-0ee5-47db-ae6e-485cadd0b802-catalog-content\") pod \"e6bbf2a1-0ee5-47db-ae6e-485cadd0b802\" (UID: \"e6bbf2a1-0ee5-47db-ae6e-485cadd0b802\") " Jan 22 09:39:35 crc kubenswrapper[4811]: I0122 09:39:35.568210 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghxpx\" (UniqueName: \"kubernetes.io/projected/e6bbf2a1-0ee5-47db-ae6e-485cadd0b802-kube-api-access-ghxpx\") pod \"e6bbf2a1-0ee5-47db-ae6e-485cadd0b802\" (UID: \"e6bbf2a1-0ee5-47db-ae6e-485cadd0b802\") " Jan 22 09:39:35 crc kubenswrapper[4811]: I0122 09:39:35.568936 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6bbf2a1-0ee5-47db-ae6e-485cadd0b802-utilities" (OuterVolumeSpecName: "utilities") pod "e6bbf2a1-0ee5-47db-ae6e-485cadd0b802" (UID: "e6bbf2a1-0ee5-47db-ae6e-485cadd0b802"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:39:35 crc kubenswrapper[4811]: I0122 09:39:35.586188 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6bbf2a1-0ee5-47db-ae6e-485cadd0b802-kube-api-access-ghxpx" (OuterVolumeSpecName: "kube-api-access-ghxpx") pod "e6bbf2a1-0ee5-47db-ae6e-485cadd0b802" (UID: "e6bbf2a1-0ee5-47db-ae6e-485cadd0b802"). InnerVolumeSpecName "kube-api-access-ghxpx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:39:35 crc kubenswrapper[4811]: I0122 09:39:35.648099 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6bbf2a1-0ee5-47db-ae6e-485cadd0b802-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e6bbf2a1-0ee5-47db-ae6e-485cadd0b802" (UID: "e6bbf2a1-0ee5-47db-ae6e-485cadd0b802"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:39:35 crc kubenswrapper[4811]: I0122 09:39:35.670598 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6bbf2a1-0ee5-47db-ae6e-485cadd0b802-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:39:35 crc kubenswrapper[4811]: I0122 09:39:35.670645 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6bbf2a1-0ee5-47db-ae6e-485cadd0b802-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:39:35 crc kubenswrapper[4811]: I0122 09:39:35.670658 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghxpx\" (UniqueName: \"kubernetes.io/projected/e6bbf2a1-0ee5-47db-ae6e-485cadd0b802-kube-api-access-ghxpx\") on node \"crc\" DevicePath \"\"" Jan 22 09:39:35 crc kubenswrapper[4811]: I0122 09:39:35.715104 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lvlp6" Jan 22 09:39:35 crc kubenswrapper[4811]: I0122 09:39:35.772373 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5eca884a-dbab-4163-919b-15af29e6740c-utilities\") pod \"5eca884a-dbab-4163-919b-15af29e6740c\" (UID: \"5eca884a-dbab-4163-919b-15af29e6740c\") " Jan 22 09:39:35 crc kubenswrapper[4811]: I0122 09:39:35.772434 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5eca884a-dbab-4163-919b-15af29e6740c-catalog-content\") pod \"5eca884a-dbab-4163-919b-15af29e6740c\" (UID: \"5eca884a-dbab-4163-919b-15af29e6740c\") " Jan 22 09:39:35 crc kubenswrapper[4811]: I0122 09:39:35.772539 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n925p\" (UniqueName: \"kubernetes.io/projected/5eca884a-dbab-4163-919b-15af29e6740c-kube-api-access-n925p\") pod \"5eca884a-dbab-4163-919b-15af29e6740c\" (UID: \"5eca884a-dbab-4163-919b-15af29e6740c\") " Jan 22 09:39:35 crc kubenswrapper[4811]: I0122 09:39:35.773029 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5eca884a-dbab-4163-919b-15af29e6740c-utilities" (OuterVolumeSpecName: "utilities") pod "5eca884a-dbab-4163-919b-15af29e6740c" (UID: "5eca884a-dbab-4163-919b-15af29e6740c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:39:35 crc kubenswrapper[4811]: I0122 09:39:35.773320 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5eca884a-dbab-4163-919b-15af29e6740c-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:39:35 crc kubenswrapper[4811]: I0122 09:39:35.776763 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5eca884a-dbab-4163-919b-15af29e6740c-kube-api-access-n925p" (OuterVolumeSpecName: "kube-api-access-n925p") pod "5eca884a-dbab-4163-919b-15af29e6740c" (UID: "5eca884a-dbab-4163-919b-15af29e6740c"). InnerVolumeSpecName "kube-api-access-n925p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:39:35 crc kubenswrapper[4811]: I0122 09:39:35.790840 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5eca884a-dbab-4163-919b-15af29e6740c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5eca884a-dbab-4163-919b-15af29e6740c" (UID: "5eca884a-dbab-4163-919b-15af29e6740c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:39:35 crc kubenswrapper[4811]: I0122 09:39:35.875120 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5eca884a-dbab-4163-919b-15af29e6740c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:39:35 crc kubenswrapper[4811]: I0122 09:39:35.875363 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n925p\" (UniqueName: \"kubernetes.io/projected/5eca884a-dbab-4163-919b-15af29e6740c-kube-api-access-n925p\") on node \"crc\" DevicePath \"\"" Jan 22 09:39:35 crc kubenswrapper[4811]: I0122 09:39:35.933696 4811 generic.go:334] "Generic (PLEG): container finished" podID="5eca884a-dbab-4163-919b-15af29e6740c" containerID="083d94bbd3b5a53495aa7612f3cfbd25c1c8a8ab7d391fad0c1abe48960a6afe" exitCode=0 Jan 22 09:39:35 crc kubenswrapper[4811]: I0122 09:39:35.933748 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lvlp6" event={"ID":"5eca884a-dbab-4163-919b-15af29e6740c","Type":"ContainerDied","Data":"083d94bbd3b5a53495aa7612f3cfbd25c1c8a8ab7d391fad0c1abe48960a6afe"} Jan 22 09:39:35 crc kubenswrapper[4811]: I0122 09:39:35.933788 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lvlp6" event={"ID":"5eca884a-dbab-4163-919b-15af29e6740c","Type":"ContainerDied","Data":"a10623d798126df12f859b648741ad3606e1bcefbdd0ccde0b1365386c677637"} Jan 22 09:39:35 crc kubenswrapper[4811]: I0122 09:39:35.933807 4811 scope.go:117] "RemoveContainer" containerID="083d94bbd3b5a53495aa7612f3cfbd25c1c8a8ab7d391fad0c1abe48960a6afe" Jan 22 09:39:35 crc kubenswrapper[4811]: I0122 09:39:35.933907 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lvlp6" Jan 22 09:39:35 crc kubenswrapper[4811]: I0122 09:39:35.942906 4811 generic.go:334] "Generic (PLEG): container finished" podID="e6bbf2a1-0ee5-47db-ae6e-485cadd0b802" containerID="7ad4ee4e29e4160acce9b43bf51a049c4db8639af097152a47f48c1c59ceba2c" exitCode=0 Jan 22 09:39:35 crc kubenswrapper[4811]: I0122 09:39:35.942943 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2tmd" event={"ID":"e6bbf2a1-0ee5-47db-ae6e-485cadd0b802","Type":"ContainerDied","Data":"7ad4ee4e29e4160acce9b43bf51a049c4db8639af097152a47f48c1c59ceba2c"} Jan 22 09:39:35 crc kubenswrapper[4811]: I0122 09:39:35.942978 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2tmd" event={"ID":"e6bbf2a1-0ee5-47db-ae6e-485cadd0b802","Type":"ContainerDied","Data":"8dc06d3b52f65f2a78a5ba22ad9f9222104c687bef3c377f01c91007ba06e1b5"} Jan 22 09:39:35 crc kubenswrapper[4811]: I0122 09:39:35.943073 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d2tmd" Jan 22 09:39:35 crc kubenswrapper[4811]: I0122 09:39:35.965343 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lvlp6"] Jan 22 09:39:35 crc kubenswrapper[4811]: I0122 09:39:35.968944 4811 scope.go:117] "RemoveContainer" containerID="40654ce34427dd5aa99d6c5ffc964b6baf4c5a7cece7731537f8a5ae6fd88275" Jan 22 09:39:35 crc kubenswrapper[4811]: I0122 09:39:35.970653 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lvlp6"] Jan 22 09:39:35 crc kubenswrapper[4811]: I0122 09:39:35.987256 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d2tmd"] Jan 22 09:39:35 crc kubenswrapper[4811]: I0122 09:39:35.990387 4811 scope.go:117] "RemoveContainer" containerID="ced32301e5ef1d8b3552438c38496f7e13f260331f8d9552ec85b135f617881c" Jan 22 09:39:36 crc kubenswrapper[4811]: I0122 09:39:36.002207 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5eca884a-dbab-4163-919b-15af29e6740c" path="/var/lib/kubelet/pods/5eca884a-dbab-4163-919b-15af29e6740c/volumes" Jan 22 09:39:36 crc kubenswrapper[4811]: I0122 09:39:36.003266 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d2tmd"] Jan 22 09:39:36 crc kubenswrapper[4811]: I0122 09:39:36.022337 4811 scope.go:117] "RemoveContainer" containerID="083d94bbd3b5a53495aa7612f3cfbd25c1c8a8ab7d391fad0c1abe48960a6afe" Jan 22 09:39:36 crc kubenswrapper[4811]: E0122 09:39:36.022658 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"083d94bbd3b5a53495aa7612f3cfbd25c1c8a8ab7d391fad0c1abe48960a6afe\": container with ID starting with 083d94bbd3b5a53495aa7612f3cfbd25c1c8a8ab7d391fad0c1abe48960a6afe not found: ID does not exist" containerID="083d94bbd3b5a53495aa7612f3cfbd25c1c8a8ab7d391fad0c1abe48960a6afe" Jan 22 09:39:36 crc kubenswrapper[4811]: I0122 09:39:36.022699 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"083d94bbd3b5a53495aa7612f3cfbd25c1c8a8ab7d391fad0c1abe48960a6afe"} err="failed to get container status \"083d94bbd3b5a53495aa7612f3cfbd25c1c8a8ab7d391fad0c1abe48960a6afe\": rpc error: code = NotFound desc = could not find container \"083d94bbd3b5a53495aa7612f3cfbd25c1c8a8ab7d391fad0c1abe48960a6afe\": container with ID starting with 083d94bbd3b5a53495aa7612f3cfbd25c1c8a8ab7d391fad0c1abe48960a6afe not found: ID does not exist" Jan 22 09:39:36 crc kubenswrapper[4811]: I0122 09:39:36.022724 4811 scope.go:117] "RemoveContainer" containerID="40654ce34427dd5aa99d6c5ffc964b6baf4c5a7cece7731537f8a5ae6fd88275" Jan 22 09:39:36 crc kubenswrapper[4811]: E0122 09:39:36.022984 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40654ce34427dd5aa99d6c5ffc964b6baf4c5a7cece7731537f8a5ae6fd88275\": container with ID starting with 40654ce34427dd5aa99d6c5ffc964b6baf4c5a7cece7731537f8a5ae6fd88275 not found: ID does not exist" containerID="40654ce34427dd5aa99d6c5ffc964b6baf4c5a7cece7731537f8a5ae6fd88275" Jan 22 09:39:36 crc kubenswrapper[4811]: I0122 09:39:36.023005 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40654ce34427dd5aa99d6c5ffc964b6baf4c5a7cece7731537f8a5ae6fd88275"} err="failed to get container status 
\"40654ce34427dd5aa99d6c5ffc964b6baf4c5a7cece7731537f8a5ae6fd88275\": rpc error: code = NotFound desc = could not find container \"40654ce34427dd5aa99d6c5ffc964b6baf4c5a7cece7731537f8a5ae6fd88275\": container with ID starting with 40654ce34427dd5aa99d6c5ffc964b6baf4c5a7cece7731537f8a5ae6fd88275 not found: ID does not exist" Jan 22 09:39:36 crc kubenswrapper[4811]: I0122 09:39:36.023018 4811 scope.go:117] "RemoveContainer" containerID="ced32301e5ef1d8b3552438c38496f7e13f260331f8d9552ec85b135f617881c" Jan 22 09:39:36 crc kubenswrapper[4811]: E0122 09:39:36.023511 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ced32301e5ef1d8b3552438c38496f7e13f260331f8d9552ec85b135f617881c\": container with ID starting with ced32301e5ef1d8b3552438c38496f7e13f260331f8d9552ec85b135f617881c not found: ID does not exist" containerID="ced32301e5ef1d8b3552438c38496f7e13f260331f8d9552ec85b135f617881c" Jan 22 09:39:36 crc kubenswrapper[4811]: I0122 09:39:36.023532 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ced32301e5ef1d8b3552438c38496f7e13f260331f8d9552ec85b135f617881c"} err="failed to get container status \"ced32301e5ef1d8b3552438c38496f7e13f260331f8d9552ec85b135f617881c\": rpc error: code = NotFound desc = could not find container \"ced32301e5ef1d8b3552438c38496f7e13f260331f8d9552ec85b135f617881c\": container with ID starting with ced32301e5ef1d8b3552438c38496f7e13f260331f8d9552ec85b135f617881c not found: ID does not exist" Jan 22 09:39:36 crc kubenswrapper[4811]: I0122 09:39:36.023566 4811 scope.go:117] "RemoveContainer" containerID="7ad4ee4e29e4160acce9b43bf51a049c4db8639af097152a47f48c1c59ceba2c" Jan 22 09:39:36 crc kubenswrapper[4811]: I0122 09:39:36.043331 4811 scope.go:117] "RemoveContainer" containerID="bd0d2030f7c4ba91c502922f1ee696ab6d91cf42d54d0711b67cdfb8efed2d8e" Jan 22 09:39:36 crc kubenswrapper[4811]: I0122 09:39:36.080419 4811 scope.go:117] "RemoveContainer" containerID="092638de032eb71c2242a535dc11d186a59d6327951ff345ea1ab322aa482372" Jan 22 09:39:36 crc kubenswrapper[4811]: I0122 09:39:36.108416 4811 scope.go:117] "RemoveContainer" containerID="7ad4ee4e29e4160acce9b43bf51a049c4db8639af097152a47f48c1c59ceba2c" Jan 22 09:39:36 crc kubenswrapper[4811]: E0122 09:39:36.108845 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ad4ee4e29e4160acce9b43bf51a049c4db8639af097152a47f48c1c59ceba2c\": container with ID starting with 7ad4ee4e29e4160acce9b43bf51a049c4db8639af097152a47f48c1c59ceba2c not found: ID does not exist" containerID="7ad4ee4e29e4160acce9b43bf51a049c4db8639af097152a47f48c1c59ceba2c" Jan 22 09:39:36 crc kubenswrapper[4811]: I0122 09:39:36.108890 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ad4ee4e29e4160acce9b43bf51a049c4db8639af097152a47f48c1c59ceba2c"} err="failed to get container status \"7ad4ee4e29e4160acce9b43bf51a049c4db8639af097152a47f48c1c59ceba2c\": rpc error: code = NotFound desc = could not find container \"7ad4ee4e29e4160acce9b43bf51a049c4db8639af097152a47f48c1c59ceba2c\": container with ID starting with 7ad4ee4e29e4160acce9b43bf51a049c4db8639af097152a47f48c1c59ceba2c not found: ID does not exist" Jan 22 09:39:36 crc kubenswrapper[4811]: I0122 09:39:36.108907 4811 scope.go:117] "RemoveContainer" containerID="bd0d2030f7c4ba91c502922f1ee696ab6d91cf42d54d0711b67cdfb8efed2d8e" Jan 22 09:39:36 crc 
kubenswrapper[4811]: E0122 09:39:36.109184 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd0d2030f7c4ba91c502922f1ee696ab6d91cf42d54d0711b67cdfb8efed2d8e\": container with ID starting with bd0d2030f7c4ba91c502922f1ee696ab6d91cf42d54d0711b67cdfb8efed2d8e not found: ID does not exist" containerID="bd0d2030f7c4ba91c502922f1ee696ab6d91cf42d54d0711b67cdfb8efed2d8e" Jan 22 09:39:36 crc kubenswrapper[4811]: I0122 09:39:36.109234 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd0d2030f7c4ba91c502922f1ee696ab6d91cf42d54d0711b67cdfb8efed2d8e"} err="failed to get container status \"bd0d2030f7c4ba91c502922f1ee696ab6d91cf42d54d0711b67cdfb8efed2d8e\": rpc error: code = NotFound desc = could not find container \"bd0d2030f7c4ba91c502922f1ee696ab6d91cf42d54d0711b67cdfb8efed2d8e\": container with ID starting with bd0d2030f7c4ba91c502922f1ee696ab6d91cf42d54d0711b67cdfb8efed2d8e not found: ID does not exist" Jan 22 09:39:36 crc kubenswrapper[4811]: I0122 09:39:36.109305 4811 scope.go:117] "RemoveContainer" containerID="092638de032eb71c2242a535dc11d186a59d6327951ff345ea1ab322aa482372" Jan 22 09:39:36 crc kubenswrapper[4811]: E0122 09:39:36.109665 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"092638de032eb71c2242a535dc11d186a59d6327951ff345ea1ab322aa482372\": container with ID starting with 092638de032eb71c2242a535dc11d186a59d6327951ff345ea1ab322aa482372 not found: ID does not exist" containerID="092638de032eb71c2242a535dc11d186a59d6327951ff345ea1ab322aa482372" Jan 22 09:39:36 crc kubenswrapper[4811]: I0122 09:39:36.109686 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"092638de032eb71c2242a535dc11d186a59d6327951ff345ea1ab322aa482372"} err="failed to get container status \"092638de032eb71c2242a535dc11d186a59d6327951ff345ea1ab322aa482372\": rpc error: code = NotFound desc = could not find container \"092638de032eb71c2242a535dc11d186a59d6327951ff345ea1ab322aa482372\": container with ID starting with 092638de032eb71c2242a535dc11d186a59d6327951ff345ea1ab322aa482372 not found: ID does not exist" Jan 22 09:39:38 crc kubenswrapper[4811]: I0122 09:39:38.000404 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6bbf2a1-0ee5-47db-ae6e-485cadd0b802" path="/var/lib/kubelet/pods/e6bbf2a1-0ee5-47db-ae6e-485cadd0b802/volumes" Jan 22 09:39:50 crc kubenswrapper[4811]: I0122 09:39:50.058415 4811 generic.go:334] "Generic (PLEG): container finished" podID="4baf2862-a8ca-4314-a70a-67e087e5c897" containerID="e37a19f108eb179e9d83dcf91d5ddf85b4c988f0b9da107386a4b4ed8fc07c47" exitCode=0 Jan 22 09:39:50 crc kubenswrapper[4811]: I0122 09:39:50.058493 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hnjt7" event={"ID":"4baf2862-a8ca-4314-a70a-67e087e5c897","Type":"ContainerDied","Data":"e37a19f108eb179e9d83dcf91d5ddf85b4c988f0b9da107386a4b4ed8fc07c47"} Jan 22 09:39:51 crc kubenswrapper[4811]: I0122 09:39:51.441648 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hnjt7" Jan 22 09:39:51 crc kubenswrapper[4811]: I0122 09:39:51.472827 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4baf2862-a8ca-4314-a70a-67e087e5c897-ssh-key-openstack-edpm-ipam\") pod \"4baf2862-a8ca-4314-a70a-67e087e5c897\" (UID: \"4baf2862-a8ca-4314-a70a-67e087e5c897\") " Jan 22 09:39:51 crc kubenswrapper[4811]: I0122 09:39:51.472933 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4baf2862-a8ca-4314-a70a-67e087e5c897-ceph\") pod \"4baf2862-a8ca-4314-a70a-67e087e5c897\" (UID: \"4baf2862-a8ca-4314-a70a-67e087e5c897\") " Jan 22 09:39:51 crc kubenswrapper[4811]: I0122 09:39:51.472958 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7frn\" (UniqueName: \"kubernetes.io/projected/4baf2862-a8ca-4314-a70a-67e087e5c897-kube-api-access-d7frn\") pod \"4baf2862-a8ca-4314-a70a-67e087e5c897\" (UID: \"4baf2862-a8ca-4314-a70a-67e087e5c897\") " Jan 22 09:39:51 crc kubenswrapper[4811]: I0122 09:39:51.472986 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4baf2862-a8ca-4314-a70a-67e087e5c897-inventory\") pod \"4baf2862-a8ca-4314-a70a-67e087e5c897\" (UID: \"4baf2862-a8ca-4314-a70a-67e087e5c897\") " Jan 22 09:39:51 crc kubenswrapper[4811]: I0122 09:39:51.480406 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4baf2862-a8ca-4314-a70a-67e087e5c897-kube-api-access-d7frn" (OuterVolumeSpecName: "kube-api-access-d7frn") pod "4baf2862-a8ca-4314-a70a-67e087e5c897" (UID: "4baf2862-a8ca-4314-a70a-67e087e5c897"). InnerVolumeSpecName "kube-api-access-d7frn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:39:51 crc kubenswrapper[4811]: I0122 09:39:51.484134 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4baf2862-a8ca-4314-a70a-67e087e5c897-ceph" (OuterVolumeSpecName: "ceph") pod "4baf2862-a8ca-4314-a70a-67e087e5c897" (UID: "4baf2862-a8ca-4314-a70a-67e087e5c897"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:39:51 crc kubenswrapper[4811]: I0122 09:39:51.493410 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4baf2862-a8ca-4314-a70a-67e087e5c897-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4baf2862-a8ca-4314-a70a-67e087e5c897" (UID: "4baf2862-a8ca-4314-a70a-67e087e5c897"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:39:51 crc kubenswrapper[4811]: I0122 09:39:51.495314 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4baf2862-a8ca-4314-a70a-67e087e5c897-inventory" (OuterVolumeSpecName: "inventory") pod "4baf2862-a8ca-4314-a70a-67e087e5c897" (UID: "4baf2862-a8ca-4314-a70a-67e087e5c897"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:39:51 crc kubenswrapper[4811]: I0122 09:39:51.577314 4811 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4baf2862-a8ca-4314-a70a-67e087e5c897-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 09:39:51 crc kubenswrapper[4811]: I0122 09:39:51.577360 4811 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4baf2862-a8ca-4314-a70a-67e087e5c897-ceph\") on node \"crc\" DevicePath \"\"" Jan 22 09:39:51 crc kubenswrapper[4811]: I0122 09:39:51.577374 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7frn\" (UniqueName: \"kubernetes.io/projected/4baf2862-a8ca-4314-a70a-67e087e5c897-kube-api-access-d7frn\") on node \"crc\" DevicePath \"\"" Jan 22 09:39:51 crc kubenswrapper[4811]: I0122 09:39:51.577385 4811 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4baf2862-a8ca-4314-a70a-67e087e5c897-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 09:39:52 crc kubenswrapper[4811]: I0122 09:39:52.077243 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hnjt7" event={"ID":"4baf2862-a8ca-4314-a70a-67e087e5c897","Type":"ContainerDied","Data":"57bd85074bd4964c04e254ece0c7542e369c6a6b7e2bd2bde514b040bb7151db"} Jan 22 09:39:52 crc kubenswrapper[4811]: I0122 09:39:52.077284 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57bd85074bd4964c04e254ece0c7542e369c6a6b7e2bd2bde514b040bb7151db" Jan 22 09:39:52 crc kubenswrapper[4811]: I0122 09:39:52.077283 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hnjt7" Jan 22 09:39:52 crc kubenswrapper[4811]: I0122 09:39:52.149398 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-pn5nw"] Jan 22 09:39:52 crc kubenswrapper[4811]: E0122 09:39:52.149761 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5eca884a-dbab-4163-919b-15af29e6740c" containerName="extract-utilities" Jan 22 09:39:52 crc kubenswrapper[4811]: I0122 09:39:52.149779 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="5eca884a-dbab-4163-919b-15af29e6740c" containerName="extract-utilities" Jan 22 09:39:52 crc kubenswrapper[4811]: E0122 09:39:52.149797 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4baf2862-a8ca-4314-a70a-67e087e5c897" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 22 09:39:52 crc kubenswrapper[4811]: I0122 09:39:52.149813 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="4baf2862-a8ca-4314-a70a-67e087e5c897" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 22 09:39:52 crc kubenswrapper[4811]: E0122 09:39:52.149824 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5eca884a-dbab-4163-919b-15af29e6740c" containerName="extract-content" Jan 22 09:39:52 crc kubenswrapper[4811]: I0122 09:39:52.149829 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="5eca884a-dbab-4163-919b-15af29e6740c" containerName="extract-content" Jan 22 09:39:52 crc kubenswrapper[4811]: E0122 09:39:52.149850 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6bbf2a1-0ee5-47db-ae6e-485cadd0b802" containerName="extract-utilities" Jan 22 09:39:52 crc kubenswrapper[4811]: I0122 09:39:52.149855 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6bbf2a1-0ee5-47db-ae6e-485cadd0b802" containerName="extract-utilities" Jan 22 09:39:52 crc kubenswrapper[4811]: E0122 09:39:52.149866 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5eca884a-dbab-4163-919b-15af29e6740c" containerName="registry-server" Jan 22 09:39:52 crc kubenswrapper[4811]: I0122 09:39:52.149871 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="5eca884a-dbab-4163-919b-15af29e6740c" containerName="registry-server" Jan 22 09:39:52 crc kubenswrapper[4811]: E0122 09:39:52.149877 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6bbf2a1-0ee5-47db-ae6e-485cadd0b802" containerName="registry-server" Jan 22 09:39:52 crc kubenswrapper[4811]: I0122 09:39:52.149883 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6bbf2a1-0ee5-47db-ae6e-485cadd0b802" containerName="registry-server" Jan 22 09:39:52 crc kubenswrapper[4811]: E0122 09:39:52.149895 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6bbf2a1-0ee5-47db-ae6e-485cadd0b802" containerName="extract-content" Jan 22 09:39:52 crc kubenswrapper[4811]: I0122 09:39:52.149901 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6bbf2a1-0ee5-47db-ae6e-485cadd0b802" containerName="extract-content" Jan 22 09:39:52 crc kubenswrapper[4811]: I0122 09:39:52.150051 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6bbf2a1-0ee5-47db-ae6e-485cadd0b802" containerName="registry-server" Jan 22 09:39:52 crc kubenswrapper[4811]: I0122 09:39:52.150072 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="4baf2862-a8ca-4314-a70a-67e087e5c897" 
containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 22 09:39:52 crc kubenswrapper[4811]: I0122 09:39:52.150083 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="5eca884a-dbab-4163-919b-15af29e6740c" containerName="registry-server" Jan 22 09:39:52 crc kubenswrapper[4811]: I0122 09:39:52.150576 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pn5nw" Jan 22 09:39:52 crc kubenswrapper[4811]: I0122 09:39:52.153902 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 09:39:52 crc kubenswrapper[4811]: I0122 09:39:52.154006 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 22 09:39:52 crc kubenswrapper[4811]: I0122 09:39:52.154293 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7cbs6" Jan 22 09:39:52 crc kubenswrapper[4811]: I0122 09:39:52.154671 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 09:39:52 crc kubenswrapper[4811]: I0122 09:39:52.154892 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 09:39:52 crc kubenswrapper[4811]: I0122 09:39:52.168223 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-pn5nw"] Jan 22 09:39:52 crc kubenswrapper[4811]: I0122 09:39:52.190150 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmqrq\" (UniqueName: \"kubernetes.io/projected/b3144d30-a0bb-4788-bf66-089587cabbf5-kube-api-access-vmqrq\") pod \"ssh-known-hosts-edpm-deployment-pn5nw\" (UID: \"b3144d30-a0bb-4788-bf66-089587cabbf5\") " pod="openstack/ssh-known-hosts-edpm-deployment-pn5nw" Jan 22 09:39:52 crc kubenswrapper[4811]: I0122 09:39:52.190361 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b3144d30-a0bb-4788-bf66-089587cabbf5-ceph\") pod \"ssh-known-hosts-edpm-deployment-pn5nw\" (UID: \"b3144d30-a0bb-4788-bf66-089587cabbf5\") " pod="openstack/ssh-known-hosts-edpm-deployment-pn5nw" Jan 22 09:39:52 crc kubenswrapper[4811]: I0122 09:39:52.190590 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b3144d30-a0bb-4788-bf66-089587cabbf5-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-pn5nw\" (UID: \"b3144d30-a0bb-4788-bf66-089587cabbf5\") " pod="openstack/ssh-known-hosts-edpm-deployment-pn5nw" Jan 22 09:39:52 crc kubenswrapper[4811]: I0122 09:39:52.190755 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3144d30-a0bb-4788-bf66-089587cabbf5-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-pn5nw\" (UID: \"b3144d30-a0bb-4788-bf66-089587cabbf5\") " pod="openstack/ssh-known-hosts-edpm-deployment-pn5nw" Jan 22 09:39:52 crc kubenswrapper[4811]: I0122 09:39:52.292912 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b3144d30-a0bb-4788-bf66-089587cabbf5-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-pn5nw\" (UID: 
\"b3144d30-a0bb-4788-bf66-089587cabbf5\") " pod="openstack/ssh-known-hosts-edpm-deployment-pn5nw" Jan 22 09:39:52 crc kubenswrapper[4811]: I0122 09:39:52.293033 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3144d30-a0bb-4788-bf66-089587cabbf5-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-pn5nw\" (UID: \"b3144d30-a0bb-4788-bf66-089587cabbf5\") " pod="openstack/ssh-known-hosts-edpm-deployment-pn5nw" Jan 22 09:39:52 crc kubenswrapper[4811]: I0122 09:39:52.293256 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmqrq\" (UniqueName: \"kubernetes.io/projected/b3144d30-a0bb-4788-bf66-089587cabbf5-kube-api-access-vmqrq\") pod \"ssh-known-hosts-edpm-deployment-pn5nw\" (UID: \"b3144d30-a0bb-4788-bf66-089587cabbf5\") " pod="openstack/ssh-known-hosts-edpm-deployment-pn5nw" Jan 22 09:39:52 crc kubenswrapper[4811]: I0122 09:39:52.293361 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b3144d30-a0bb-4788-bf66-089587cabbf5-ceph\") pod \"ssh-known-hosts-edpm-deployment-pn5nw\" (UID: \"b3144d30-a0bb-4788-bf66-089587cabbf5\") " pod="openstack/ssh-known-hosts-edpm-deployment-pn5nw" Jan 22 09:39:52 crc kubenswrapper[4811]: I0122 09:39:52.298155 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b3144d30-a0bb-4788-bf66-089587cabbf5-ceph\") pod \"ssh-known-hosts-edpm-deployment-pn5nw\" (UID: \"b3144d30-a0bb-4788-bf66-089587cabbf5\") " pod="openstack/ssh-known-hosts-edpm-deployment-pn5nw" Jan 22 09:39:52 crc kubenswrapper[4811]: I0122 09:39:52.298515 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3144d30-a0bb-4788-bf66-089587cabbf5-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-pn5nw\" (UID: \"b3144d30-a0bb-4788-bf66-089587cabbf5\") " pod="openstack/ssh-known-hosts-edpm-deployment-pn5nw" Jan 22 09:39:52 crc kubenswrapper[4811]: I0122 09:39:52.313012 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b3144d30-a0bb-4788-bf66-089587cabbf5-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-pn5nw\" (UID: \"b3144d30-a0bb-4788-bf66-089587cabbf5\") " pod="openstack/ssh-known-hosts-edpm-deployment-pn5nw" Jan 22 09:39:52 crc kubenswrapper[4811]: I0122 09:39:52.315246 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmqrq\" (UniqueName: \"kubernetes.io/projected/b3144d30-a0bb-4788-bf66-089587cabbf5-kube-api-access-vmqrq\") pod \"ssh-known-hosts-edpm-deployment-pn5nw\" (UID: \"b3144d30-a0bb-4788-bf66-089587cabbf5\") " pod="openstack/ssh-known-hosts-edpm-deployment-pn5nw" Jan 22 09:39:52 crc kubenswrapper[4811]: I0122 09:39:52.465225 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pn5nw" Jan 22 09:39:52 crc kubenswrapper[4811]: I0122 09:39:52.945665 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-pn5nw"] Jan 22 09:39:53 crc kubenswrapper[4811]: I0122 09:39:53.085321 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pn5nw" event={"ID":"b3144d30-a0bb-4788-bf66-089587cabbf5","Type":"ContainerStarted","Data":"1f2d4f2f45963540ed2ebb9cd82300a77fd8b97c72ceb08c2803c527117eabc9"} Jan 22 09:39:54 crc kubenswrapper[4811]: I0122 09:39:54.095592 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pn5nw" event={"ID":"b3144d30-a0bb-4788-bf66-089587cabbf5","Type":"ContainerStarted","Data":"42dcdefa80534327c7bb4502597778aec33159a56c66bb9347a7ccf8102b3b0a"} Jan 22 09:39:54 crc kubenswrapper[4811]: I0122 09:39:54.113037 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-pn5nw" podStartSLOduration=1.6052682759999999 podStartE2EDuration="2.113021451s" podCreationTimestamp="2026-01-22 09:39:52 +0000 UTC" firstStartedPulling="2026-01-22 09:39:52.951281297 +0000 UTC m=+2037.273468420" lastFinishedPulling="2026-01-22 09:39:53.459034472 +0000 UTC m=+2037.781221595" observedRunningTime="2026-01-22 09:39:54.109729121 +0000 UTC m=+2038.431916244" watchObservedRunningTime="2026-01-22 09:39:54.113021451 +0000 UTC m=+2038.435208574" Jan 22 09:40:01 crc kubenswrapper[4811]: I0122 09:40:01.140528 4811 generic.go:334] "Generic (PLEG): container finished" podID="b3144d30-a0bb-4788-bf66-089587cabbf5" containerID="42dcdefa80534327c7bb4502597778aec33159a56c66bb9347a7ccf8102b3b0a" exitCode=0 Jan 22 09:40:01 crc kubenswrapper[4811]: I0122 09:40:01.140644 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pn5nw" event={"ID":"b3144d30-a0bb-4788-bf66-089587cabbf5","Type":"ContainerDied","Data":"42dcdefa80534327c7bb4502597778aec33159a56c66bb9347a7ccf8102b3b0a"} Jan 22 09:40:02 crc kubenswrapper[4811]: I0122 09:40:02.565169 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pn5nw" Jan 22 09:40:02 crc kubenswrapper[4811]: I0122 09:40:02.725975 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3144d30-a0bb-4788-bf66-089587cabbf5-ssh-key-openstack-edpm-ipam\") pod \"b3144d30-a0bb-4788-bf66-089587cabbf5\" (UID: \"b3144d30-a0bb-4788-bf66-089587cabbf5\") " Jan 22 09:40:02 crc kubenswrapper[4811]: I0122 09:40:02.726731 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmqrq\" (UniqueName: \"kubernetes.io/projected/b3144d30-a0bb-4788-bf66-089587cabbf5-kube-api-access-vmqrq\") pod \"b3144d30-a0bb-4788-bf66-089587cabbf5\" (UID: \"b3144d30-a0bb-4788-bf66-089587cabbf5\") " Jan 22 09:40:02 crc kubenswrapper[4811]: I0122 09:40:02.726779 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b3144d30-a0bb-4788-bf66-089587cabbf5-inventory-0\") pod \"b3144d30-a0bb-4788-bf66-089587cabbf5\" (UID: \"b3144d30-a0bb-4788-bf66-089587cabbf5\") " Jan 22 09:40:02 crc kubenswrapper[4811]: I0122 09:40:02.726844 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b3144d30-a0bb-4788-bf66-089587cabbf5-ceph\") pod \"b3144d30-a0bb-4788-bf66-089587cabbf5\" (UID: \"b3144d30-a0bb-4788-bf66-089587cabbf5\") " Jan 22 09:40:02 crc kubenswrapper[4811]: I0122 09:40:02.733453 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3144d30-a0bb-4788-bf66-089587cabbf5-kube-api-access-vmqrq" (OuterVolumeSpecName: "kube-api-access-vmqrq") pod "b3144d30-a0bb-4788-bf66-089587cabbf5" (UID: "b3144d30-a0bb-4788-bf66-089587cabbf5"). InnerVolumeSpecName "kube-api-access-vmqrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:40:02 crc kubenswrapper[4811]: I0122 09:40:02.733573 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3144d30-a0bb-4788-bf66-089587cabbf5-ceph" (OuterVolumeSpecName: "ceph") pod "b3144d30-a0bb-4788-bf66-089587cabbf5" (UID: "b3144d30-a0bb-4788-bf66-089587cabbf5"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:40:02 crc kubenswrapper[4811]: I0122 09:40:02.749810 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3144d30-a0bb-4788-bf66-089587cabbf5-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "b3144d30-a0bb-4788-bf66-089587cabbf5" (UID: "b3144d30-a0bb-4788-bf66-089587cabbf5"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:40:02 crc kubenswrapper[4811]: I0122 09:40:02.755557 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3144d30-a0bb-4788-bf66-089587cabbf5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b3144d30-a0bb-4788-bf66-089587cabbf5" (UID: "b3144d30-a0bb-4788-bf66-089587cabbf5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:40:02 crc kubenswrapper[4811]: I0122 09:40:02.829068 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmqrq\" (UniqueName: \"kubernetes.io/projected/b3144d30-a0bb-4788-bf66-089587cabbf5-kube-api-access-vmqrq\") on node \"crc\" DevicePath \"\"" Jan 22 09:40:02 crc kubenswrapper[4811]: I0122 09:40:02.829095 4811 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b3144d30-a0bb-4788-bf66-089587cabbf5-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 22 09:40:02 crc kubenswrapper[4811]: I0122 09:40:02.829106 4811 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b3144d30-a0bb-4788-bf66-089587cabbf5-ceph\") on node \"crc\" DevicePath \"\"" Jan 22 09:40:02 crc kubenswrapper[4811]: I0122 09:40:02.829114 4811 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3144d30-a0bb-4788-bf66-089587cabbf5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 09:40:03 crc kubenswrapper[4811]: I0122 09:40:03.157496 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pn5nw" event={"ID":"b3144d30-a0bb-4788-bf66-089587cabbf5","Type":"ContainerDied","Data":"1f2d4f2f45963540ed2ebb9cd82300a77fd8b97c72ceb08c2803c527117eabc9"} Jan 22 09:40:03 crc kubenswrapper[4811]: I0122 09:40:03.157962 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f2d4f2f45963540ed2ebb9cd82300a77fd8b97c72ceb08c2803c527117eabc9" Jan 22 09:40:03 crc kubenswrapper[4811]: I0122 09:40:03.157557 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pn5nw" Jan 22 09:40:03 crc kubenswrapper[4811]: I0122 09:40:03.238334 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-5n2sj"] Jan 22 09:40:03 crc kubenswrapper[4811]: E0122 09:40:03.238718 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3144d30-a0bb-4788-bf66-089587cabbf5" containerName="ssh-known-hosts-edpm-deployment" Jan 22 09:40:03 crc kubenswrapper[4811]: I0122 09:40:03.238739 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3144d30-a0bb-4788-bf66-089587cabbf5" containerName="ssh-known-hosts-edpm-deployment" Jan 22 09:40:03 crc kubenswrapper[4811]: I0122 09:40:03.238918 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3144d30-a0bb-4788-bf66-089587cabbf5" containerName="ssh-known-hosts-edpm-deployment" Jan 22 09:40:03 crc kubenswrapper[4811]: I0122 09:40:03.239467 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5n2sj" Jan 22 09:40:03 crc kubenswrapper[4811]: I0122 09:40:03.241844 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 09:40:03 crc kubenswrapper[4811]: I0122 09:40:03.242298 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 09:40:03 crc kubenswrapper[4811]: I0122 09:40:03.242519 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 09:40:03 crc kubenswrapper[4811]: I0122 09:40:03.243331 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7cbs6" Jan 22 09:40:03 crc kubenswrapper[4811]: I0122 09:40:03.246047 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 22 09:40:03 crc kubenswrapper[4811]: I0122 09:40:03.262790 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-5n2sj"] Jan 22 09:40:03 crc kubenswrapper[4811]: I0122 09:40:03.339835 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/247204c2-fb25-45be-a1ec-8bc4b64e41d6-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5n2sj\" (UID: \"247204c2-fb25-45be-a1ec-8bc4b64e41d6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5n2sj" Jan 22 09:40:03 crc kubenswrapper[4811]: I0122 09:40:03.340137 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9nj4\" (UniqueName: \"kubernetes.io/projected/247204c2-fb25-45be-a1ec-8bc4b64e41d6-kube-api-access-k9nj4\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5n2sj\" (UID: \"247204c2-fb25-45be-a1ec-8bc4b64e41d6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5n2sj" Jan 22 09:40:03 crc kubenswrapper[4811]: I0122 09:40:03.340406 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/247204c2-fb25-45be-a1ec-8bc4b64e41d6-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5n2sj\" (UID: \"247204c2-fb25-45be-a1ec-8bc4b64e41d6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5n2sj" Jan 22 09:40:03 crc kubenswrapper[4811]: I0122 09:40:03.340522 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/247204c2-fb25-45be-a1ec-8bc4b64e41d6-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5n2sj\" (UID: \"247204c2-fb25-45be-a1ec-8bc4b64e41d6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5n2sj" Jan 22 09:40:03 crc kubenswrapper[4811]: I0122 09:40:03.442397 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/247204c2-fb25-45be-a1ec-8bc4b64e41d6-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5n2sj\" (UID: \"247204c2-fb25-45be-a1ec-8bc4b64e41d6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5n2sj" Jan 22 09:40:03 crc kubenswrapper[4811]: I0122 09:40:03.442592 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-k9nj4\" (UniqueName: \"kubernetes.io/projected/247204c2-fb25-45be-a1ec-8bc4b64e41d6-kube-api-access-k9nj4\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5n2sj\" (UID: \"247204c2-fb25-45be-a1ec-8bc4b64e41d6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5n2sj" Jan 22 09:40:03 crc kubenswrapper[4811]: I0122 09:40:03.442751 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/247204c2-fb25-45be-a1ec-8bc4b64e41d6-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5n2sj\" (UID: \"247204c2-fb25-45be-a1ec-8bc4b64e41d6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5n2sj" Jan 22 09:40:03 crc kubenswrapper[4811]: I0122 09:40:03.442867 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/247204c2-fb25-45be-a1ec-8bc4b64e41d6-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5n2sj\" (UID: \"247204c2-fb25-45be-a1ec-8bc4b64e41d6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5n2sj" Jan 22 09:40:03 crc kubenswrapper[4811]: I0122 09:40:03.446814 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/247204c2-fb25-45be-a1ec-8bc4b64e41d6-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5n2sj\" (UID: \"247204c2-fb25-45be-a1ec-8bc4b64e41d6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5n2sj" Jan 22 09:40:03 crc kubenswrapper[4811]: I0122 09:40:03.447394 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/247204c2-fb25-45be-a1ec-8bc4b64e41d6-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5n2sj\" (UID: \"247204c2-fb25-45be-a1ec-8bc4b64e41d6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5n2sj" Jan 22 09:40:03 crc kubenswrapper[4811]: I0122 09:40:03.449498 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/247204c2-fb25-45be-a1ec-8bc4b64e41d6-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5n2sj\" (UID: \"247204c2-fb25-45be-a1ec-8bc4b64e41d6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5n2sj" Jan 22 09:40:03 crc kubenswrapper[4811]: I0122 09:40:03.462391 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9nj4\" (UniqueName: \"kubernetes.io/projected/247204c2-fb25-45be-a1ec-8bc4b64e41d6-kube-api-access-k9nj4\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5n2sj\" (UID: \"247204c2-fb25-45be-a1ec-8bc4b64e41d6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5n2sj" Jan 22 09:40:03 crc kubenswrapper[4811]: I0122 09:40:03.554857 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5n2sj" Jan 22 09:40:04 crc kubenswrapper[4811]: I0122 09:40:04.039610 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-5n2sj"] Jan 22 09:40:04 crc kubenswrapper[4811]: I0122 09:40:04.165430 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5n2sj" event={"ID":"247204c2-fb25-45be-a1ec-8bc4b64e41d6","Type":"ContainerStarted","Data":"45cdb59fa10e7e898c4342f12bbf54ab533cf679dcd2e0158bd87723f737e519"} Jan 22 09:40:05 crc kubenswrapper[4811]: I0122 09:40:05.173909 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5n2sj" event={"ID":"247204c2-fb25-45be-a1ec-8bc4b64e41d6","Type":"ContainerStarted","Data":"7bdc327c6ceaf776fe66e6b95e405f3e035f266a16c69ce9b49cb41aa2565bbc"} Jan 22 09:40:05 crc kubenswrapper[4811]: I0122 09:40:05.192492 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5n2sj" podStartSLOduration=1.6737969430000001 podStartE2EDuration="2.19247882s" podCreationTimestamp="2026-01-22 09:40:03 +0000 UTC" firstStartedPulling="2026-01-22 09:40:04.054883612 +0000 UTC m=+2048.377070736" lastFinishedPulling="2026-01-22 09:40:04.57356549 +0000 UTC m=+2048.895752613" observedRunningTime="2026-01-22 09:40:05.185773914 +0000 UTC m=+2049.507961038" watchObservedRunningTime="2026-01-22 09:40:05.19247882 +0000 UTC m=+2049.514665943" Jan 22 09:40:11 crc kubenswrapper[4811]: I0122 09:40:11.216759 4811 generic.go:334] "Generic (PLEG): container finished" podID="247204c2-fb25-45be-a1ec-8bc4b64e41d6" containerID="7bdc327c6ceaf776fe66e6b95e405f3e035f266a16c69ce9b49cb41aa2565bbc" exitCode=0 Jan 22 09:40:11 crc kubenswrapper[4811]: I0122 09:40:11.216951 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5n2sj" event={"ID":"247204c2-fb25-45be-a1ec-8bc4b64e41d6","Type":"ContainerDied","Data":"7bdc327c6ceaf776fe66e6b95e405f3e035f266a16c69ce9b49cb41aa2565bbc"} Jan 22 09:40:12 crc kubenswrapper[4811]: I0122 09:40:12.598512 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5n2sj" Jan 22 09:40:12 crc kubenswrapper[4811]: I0122 09:40:12.752526 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/247204c2-fb25-45be-a1ec-8bc4b64e41d6-inventory\") pod \"247204c2-fb25-45be-a1ec-8bc4b64e41d6\" (UID: \"247204c2-fb25-45be-a1ec-8bc4b64e41d6\") " Jan 22 09:40:12 crc kubenswrapper[4811]: I0122 09:40:12.752576 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9nj4\" (UniqueName: \"kubernetes.io/projected/247204c2-fb25-45be-a1ec-8bc4b64e41d6-kube-api-access-k9nj4\") pod \"247204c2-fb25-45be-a1ec-8bc4b64e41d6\" (UID: \"247204c2-fb25-45be-a1ec-8bc4b64e41d6\") " Jan 22 09:40:12 crc kubenswrapper[4811]: I0122 09:40:12.752668 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/247204c2-fb25-45be-a1ec-8bc4b64e41d6-ceph\") pod \"247204c2-fb25-45be-a1ec-8bc4b64e41d6\" (UID: \"247204c2-fb25-45be-a1ec-8bc4b64e41d6\") " Jan 22 09:40:12 crc kubenswrapper[4811]: I0122 09:40:12.752751 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/247204c2-fb25-45be-a1ec-8bc4b64e41d6-ssh-key-openstack-edpm-ipam\") pod \"247204c2-fb25-45be-a1ec-8bc4b64e41d6\" (UID: \"247204c2-fb25-45be-a1ec-8bc4b64e41d6\") " Jan 22 09:40:12 crc kubenswrapper[4811]: I0122 09:40:12.757670 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/247204c2-fb25-45be-a1ec-8bc4b64e41d6-ceph" (OuterVolumeSpecName: "ceph") pod "247204c2-fb25-45be-a1ec-8bc4b64e41d6" (UID: "247204c2-fb25-45be-a1ec-8bc4b64e41d6"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:40:12 crc kubenswrapper[4811]: I0122 09:40:12.759547 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/247204c2-fb25-45be-a1ec-8bc4b64e41d6-kube-api-access-k9nj4" (OuterVolumeSpecName: "kube-api-access-k9nj4") pod "247204c2-fb25-45be-a1ec-8bc4b64e41d6" (UID: "247204c2-fb25-45be-a1ec-8bc4b64e41d6"). InnerVolumeSpecName "kube-api-access-k9nj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:40:12 crc kubenswrapper[4811]: I0122 09:40:12.772819 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/247204c2-fb25-45be-a1ec-8bc4b64e41d6-inventory" (OuterVolumeSpecName: "inventory") pod "247204c2-fb25-45be-a1ec-8bc4b64e41d6" (UID: "247204c2-fb25-45be-a1ec-8bc4b64e41d6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:40:12 crc kubenswrapper[4811]: I0122 09:40:12.774299 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/247204c2-fb25-45be-a1ec-8bc4b64e41d6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "247204c2-fb25-45be-a1ec-8bc4b64e41d6" (UID: "247204c2-fb25-45be-a1ec-8bc4b64e41d6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:40:12 crc kubenswrapper[4811]: I0122 09:40:12.854865 4811 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/247204c2-fb25-45be-a1ec-8bc4b64e41d6-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 09:40:12 crc kubenswrapper[4811]: I0122 09:40:12.854890 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9nj4\" (UniqueName: \"kubernetes.io/projected/247204c2-fb25-45be-a1ec-8bc4b64e41d6-kube-api-access-k9nj4\") on node \"crc\" DevicePath \"\"" Jan 22 09:40:12 crc kubenswrapper[4811]: I0122 09:40:12.854902 4811 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/247204c2-fb25-45be-a1ec-8bc4b64e41d6-ceph\") on node \"crc\" DevicePath \"\"" Jan 22 09:40:12 crc kubenswrapper[4811]: I0122 09:40:12.854922 4811 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/247204c2-fb25-45be-a1ec-8bc4b64e41d6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 09:40:13 crc kubenswrapper[4811]: I0122 09:40:13.230130 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5n2sj" event={"ID":"247204c2-fb25-45be-a1ec-8bc4b64e41d6","Type":"ContainerDied","Data":"45cdb59fa10e7e898c4342f12bbf54ab533cf679dcd2e0158bd87723f737e519"} Jan 22 09:40:13 crc kubenswrapper[4811]: I0122 09:40:13.230356 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45cdb59fa10e7e898c4342f12bbf54ab533cf679dcd2e0158bd87723f737e519" Jan 22 09:40:13 crc kubenswrapper[4811]: I0122 09:40:13.230179 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5n2sj" Jan 22 09:40:13 crc kubenswrapper[4811]: I0122 09:40:13.321252 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rgn6q"] Jan 22 09:40:13 crc kubenswrapper[4811]: E0122 09:40:13.321552 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="247204c2-fb25-45be-a1ec-8bc4b64e41d6" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 22 09:40:13 crc kubenswrapper[4811]: I0122 09:40:13.321568 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="247204c2-fb25-45be-a1ec-8bc4b64e41d6" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 22 09:40:13 crc kubenswrapper[4811]: I0122 09:40:13.321753 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="247204c2-fb25-45be-a1ec-8bc4b64e41d6" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 22 09:40:13 crc kubenswrapper[4811]: I0122 09:40:13.322267 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rgn6q" Jan 22 09:40:13 crc kubenswrapper[4811]: I0122 09:40:13.324379 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 09:40:13 crc kubenswrapper[4811]: I0122 09:40:13.324670 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 22 09:40:13 crc kubenswrapper[4811]: I0122 09:40:13.324781 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7cbs6" Jan 22 09:40:13 crc kubenswrapper[4811]: I0122 09:40:13.325444 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 09:40:13 crc kubenswrapper[4811]: I0122 09:40:13.328497 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 09:40:13 crc kubenswrapper[4811]: I0122 09:40:13.338792 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rgn6q"] Jan 22 09:40:13 crc kubenswrapper[4811]: I0122 09:40:13.462936 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpmkx\" (UniqueName: \"kubernetes.io/projected/84f96cc8-d392-47a2-baab-998459b83025-kube-api-access-hpmkx\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rgn6q\" (UID: \"84f96cc8-d392-47a2-baab-998459b83025\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rgn6q" Jan 22 09:40:13 crc kubenswrapper[4811]: I0122 09:40:13.462978 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/84f96cc8-d392-47a2-baab-998459b83025-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rgn6q\" (UID: \"84f96cc8-d392-47a2-baab-998459b83025\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rgn6q" Jan 22 09:40:13 crc kubenswrapper[4811]: I0122 09:40:13.463566 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84f96cc8-d392-47a2-baab-998459b83025-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rgn6q\" (UID: \"84f96cc8-d392-47a2-baab-998459b83025\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rgn6q" Jan 22 09:40:13 crc kubenswrapper[4811]: I0122 09:40:13.463597 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/84f96cc8-d392-47a2-baab-998459b83025-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rgn6q\" (UID: \"84f96cc8-d392-47a2-baab-998459b83025\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rgn6q" Jan 22 09:40:13 crc kubenswrapper[4811]: I0122 09:40:13.565066 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84f96cc8-d392-47a2-baab-998459b83025-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rgn6q\" (UID: \"84f96cc8-d392-47a2-baab-998459b83025\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rgn6q" Jan 22 09:40:13 crc kubenswrapper[4811]: I0122 09:40:13.565111 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" 
(UniqueName: \"kubernetes.io/secret/84f96cc8-d392-47a2-baab-998459b83025-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rgn6q\" (UID: \"84f96cc8-d392-47a2-baab-998459b83025\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rgn6q" Jan 22 09:40:13 crc kubenswrapper[4811]: I0122 09:40:13.565273 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpmkx\" (UniqueName: \"kubernetes.io/projected/84f96cc8-d392-47a2-baab-998459b83025-kube-api-access-hpmkx\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rgn6q\" (UID: \"84f96cc8-d392-47a2-baab-998459b83025\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rgn6q" Jan 22 09:40:13 crc kubenswrapper[4811]: I0122 09:40:13.565306 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/84f96cc8-d392-47a2-baab-998459b83025-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rgn6q\" (UID: \"84f96cc8-d392-47a2-baab-998459b83025\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rgn6q" Jan 22 09:40:13 crc kubenswrapper[4811]: I0122 09:40:13.568736 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/84f96cc8-d392-47a2-baab-998459b83025-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rgn6q\" (UID: \"84f96cc8-d392-47a2-baab-998459b83025\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rgn6q" Jan 22 09:40:13 crc kubenswrapper[4811]: I0122 09:40:13.571320 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84f96cc8-d392-47a2-baab-998459b83025-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rgn6q\" (UID: \"84f96cc8-d392-47a2-baab-998459b83025\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rgn6q" Jan 22 09:40:13 crc kubenswrapper[4811]: I0122 09:40:13.571660 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/84f96cc8-d392-47a2-baab-998459b83025-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rgn6q\" (UID: \"84f96cc8-d392-47a2-baab-998459b83025\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rgn6q" Jan 22 09:40:13 crc kubenswrapper[4811]: I0122 09:40:13.584607 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpmkx\" (UniqueName: \"kubernetes.io/projected/84f96cc8-d392-47a2-baab-998459b83025-kube-api-access-hpmkx\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rgn6q\" (UID: \"84f96cc8-d392-47a2-baab-998459b83025\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rgn6q" Jan 22 09:40:13 crc kubenswrapper[4811]: I0122 09:40:13.635426 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rgn6q" Jan 22 09:40:14 crc kubenswrapper[4811]: I0122 09:40:14.085724 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rgn6q"] Jan 22 09:40:14 crc kubenswrapper[4811]: I0122 09:40:14.239213 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rgn6q" event={"ID":"84f96cc8-d392-47a2-baab-998459b83025","Type":"ContainerStarted","Data":"d11229b9f58633d2905574b96deb636fad37ecd94c26449581324c2214a23328"} Jan 22 09:40:15 crc kubenswrapper[4811]: I0122 09:40:15.246077 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rgn6q" event={"ID":"84f96cc8-d392-47a2-baab-998459b83025","Type":"ContainerStarted","Data":"55b2973d253d13285ed7f16ff4ca9b26f5d0f195af4b6f963eb8652f9e4a68c7"} Jan 22 09:40:15 crc kubenswrapper[4811]: I0122 09:40:15.257910 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rgn6q" podStartSLOduration=1.745095929 podStartE2EDuration="2.257897495s" podCreationTimestamp="2026-01-22 09:40:13 +0000 UTC" firstStartedPulling="2026-01-22 09:40:14.086927455 +0000 UTC m=+2058.409114578" lastFinishedPulling="2026-01-22 09:40:14.599729021 +0000 UTC m=+2058.921916144" observedRunningTime="2026-01-22 09:40:15.256300863 +0000 UTC m=+2059.578487987" watchObservedRunningTime="2026-01-22 09:40:15.257897495 +0000 UTC m=+2059.580084618" Jan 22 09:40:22 crc kubenswrapper[4811]: I0122 09:40:22.286256 4811 generic.go:334] "Generic (PLEG): container finished" podID="84f96cc8-d392-47a2-baab-998459b83025" containerID="55b2973d253d13285ed7f16ff4ca9b26f5d0f195af4b6f963eb8652f9e4a68c7" exitCode=0 Jan 22 09:40:22 crc kubenswrapper[4811]: I0122 09:40:22.286331 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rgn6q" event={"ID":"84f96cc8-d392-47a2-baab-998459b83025","Type":"ContainerDied","Data":"55b2973d253d13285ed7f16ff4ca9b26f5d0f195af4b6f963eb8652f9e4a68c7"} Jan 22 09:40:23 crc kubenswrapper[4811]: I0122 09:40:23.663400 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rgn6q" Jan 22 09:40:23 crc kubenswrapper[4811]: I0122 09:40:23.737574 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/84f96cc8-d392-47a2-baab-998459b83025-ceph\") pod \"84f96cc8-d392-47a2-baab-998459b83025\" (UID: \"84f96cc8-d392-47a2-baab-998459b83025\") " Jan 22 09:40:23 crc kubenswrapper[4811]: I0122 09:40:23.737668 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84f96cc8-d392-47a2-baab-998459b83025-inventory\") pod \"84f96cc8-d392-47a2-baab-998459b83025\" (UID: \"84f96cc8-d392-47a2-baab-998459b83025\") " Jan 22 09:40:23 crc kubenswrapper[4811]: I0122 09:40:23.737704 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/84f96cc8-d392-47a2-baab-998459b83025-ssh-key-openstack-edpm-ipam\") pod \"84f96cc8-d392-47a2-baab-998459b83025\" (UID: \"84f96cc8-d392-47a2-baab-998459b83025\") " Jan 22 09:40:23 crc kubenswrapper[4811]: I0122 09:40:23.737817 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpmkx\" (UniqueName: \"kubernetes.io/projected/84f96cc8-d392-47a2-baab-998459b83025-kube-api-access-hpmkx\") pod \"84f96cc8-d392-47a2-baab-998459b83025\" (UID: \"84f96cc8-d392-47a2-baab-998459b83025\") " Jan 22 09:40:23 crc kubenswrapper[4811]: I0122 09:40:23.743025 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84f96cc8-d392-47a2-baab-998459b83025-ceph" (OuterVolumeSpecName: "ceph") pod "84f96cc8-d392-47a2-baab-998459b83025" (UID: "84f96cc8-d392-47a2-baab-998459b83025"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:40:23 crc kubenswrapper[4811]: I0122 09:40:23.749080 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84f96cc8-d392-47a2-baab-998459b83025-kube-api-access-hpmkx" (OuterVolumeSpecName: "kube-api-access-hpmkx") pod "84f96cc8-d392-47a2-baab-998459b83025" (UID: "84f96cc8-d392-47a2-baab-998459b83025"). InnerVolumeSpecName "kube-api-access-hpmkx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:40:23 crc kubenswrapper[4811]: I0122 09:40:23.756009 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84f96cc8-d392-47a2-baab-998459b83025-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "84f96cc8-d392-47a2-baab-998459b83025" (UID: "84f96cc8-d392-47a2-baab-998459b83025"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:40:23 crc kubenswrapper[4811]: I0122 09:40:23.757362 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84f96cc8-d392-47a2-baab-998459b83025-inventory" (OuterVolumeSpecName: "inventory") pod "84f96cc8-d392-47a2-baab-998459b83025" (UID: "84f96cc8-d392-47a2-baab-998459b83025"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:40:23 crc kubenswrapper[4811]: I0122 09:40:23.839356 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpmkx\" (UniqueName: \"kubernetes.io/projected/84f96cc8-d392-47a2-baab-998459b83025-kube-api-access-hpmkx\") on node \"crc\" DevicePath \"\"" Jan 22 09:40:23 crc kubenswrapper[4811]: I0122 09:40:23.839498 4811 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/84f96cc8-d392-47a2-baab-998459b83025-ceph\") on node \"crc\" DevicePath \"\"" Jan 22 09:40:23 crc kubenswrapper[4811]: I0122 09:40:23.839507 4811 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84f96cc8-d392-47a2-baab-998459b83025-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 09:40:23 crc kubenswrapper[4811]: I0122 09:40:23.839514 4811 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/84f96cc8-d392-47a2-baab-998459b83025-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.299408 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rgn6q" event={"ID":"84f96cc8-d392-47a2-baab-998459b83025","Type":"ContainerDied","Data":"d11229b9f58633d2905574b96deb636fad37ecd94c26449581324c2214a23328"} Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.299436 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rgn6q" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.299447 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d11229b9f58633d2905574b96deb636fad37ecd94c26449581324c2214a23328" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.366748 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f"] Jan 22 09:40:24 crc kubenswrapper[4811]: E0122 09:40:24.367259 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84f96cc8-d392-47a2-baab-998459b83025" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.367345 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="84f96cc8-d392-47a2-baab-998459b83025" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.367564 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="84f96cc8-d392-47a2-baab-998459b83025" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.368152 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.370505 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7cbs6" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.370585 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.370596 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.370514 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.370514 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.371414 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.373415 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.373582 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.380816 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f"] Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.447195 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d2d7b6d9-f9f5-4548-a6c3-01248c076247-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5m56f\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.447239 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5m56f\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.447276 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5m56f\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.447332 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-bootstrap-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-5m56f\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.447413 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5m56f\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.447448 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5m56f\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.447596 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwh99\" (UniqueName: \"kubernetes.io/projected/d2d7b6d9-f9f5-4548-a6c3-01248c076247-kube-api-access-lwh99\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5m56f\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.447683 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d2d7b6d9-f9f5-4548-a6c3-01248c076247-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5m56f\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.447722 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d2d7b6d9-f9f5-4548-a6c3-01248c076247-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5m56f\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.447809 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5m56f\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.447836 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5m56f\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.447916 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5m56f\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.448063 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5m56f\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.549724 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwh99\" (UniqueName: \"kubernetes.io/projected/d2d7b6d9-f9f5-4548-a6c3-01248c076247-kube-api-access-lwh99\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5m56f\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.549768 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d2d7b6d9-f9f5-4548-a6c3-01248c076247-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5m56f\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.549797 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d2d7b6d9-f9f5-4548-a6c3-01248c076247-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5m56f\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.549847 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5m56f\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.549873 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5m56f\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.549906 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5m56f\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.549991 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5m56f\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.550018 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d2d7b6d9-f9f5-4548-a6c3-01248c076247-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5m56f\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.550034 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5m56f\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.550058 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5m56f\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.550094 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5m56f\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.550126 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5m56f\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.550154 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5m56f\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f" Jan 22 
09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.554312 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5m56f\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.554348 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5m56f\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.554613 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5m56f\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.555242 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5m56f\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.555332 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d2d7b6d9-f9f5-4548-a6c3-01248c076247-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5m56f\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.555438 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d2d7b6d9-f9f5-4548-a6c3-01248c076247-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5m56f\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.555808 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5m56f\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.557342 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5m56f\" 
(UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.557452 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5m56f\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.557614 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5m56f\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.557826 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5m56f\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.557845 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d2d7b6d9-f9f5-4548-a6c3-01248c076247-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5m56f\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.564876 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwh99\" (UniqueName: \"kubernetes.io/projected/d2d7b6d9-f9f5-4548-a6c3-01248c076247-kube-api-access-lwh99\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5m56f\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f" Jan 22 09:40:24 crc kubenswrapper[4811]: I0122 09:40:24.685611 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f" Jan 22 09:40:25 crc kubenswrapper[4811]: I0122 09:40:25.111058 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f"] Jan 22 09:40:25 crc kubenswrapper[4811]: I0122 09:40:25.306814 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f" event={"ID":"d2d7b6d9-f9f5-4548-a6c3-01248c076247","Type":"ContainerStarted","Data":"68654280d9bbd4eaf3fcc78fce16574ff7346cfeaf394937b7e4be8646a38ced"} Jan 22 09:40:26 crc kubenswrapper[4811]: I0122 09:40:26.314125 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f" event={"ID":"d2d7b6d9-f9f5-4548-a6c3-01248c076247","Type":"ContainerStarted","Data":"532b8e6314cba47965d71b56e974764127d7aea09ab3bd26abdced6550ee8857"} Jan 22 09:40:26 crc kubenswrapper[4811]: I0122 09:40:26.336161 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f" podStartSLOduration=1.834626284 podStartE2EDuration="2.336147246s" podCreationTimestamp="2026-01-22 09:40:24 +0000 UTC" firstStartedPulling="2026-01-22 09:40:25.119175548 +0000 UTC m=+2069.441362671" lastFinishedPulling="2026-01-22 09:40:25.62069651 +0000 UTC m=+2069.942883633" observedRunningTime="2026-01-22 09:40:26.330922662 +0000 UTC m=+2070.653109786" watchObservedRunningTime="2026-01-22 09:40:26.336147246 +0000 UTC m=+2070.658334359" Jan 22 09:40:49 crc kubenswrapper[4811]: I0122 09:40:49.471531 4811 generic.go:334] "Generic (PLEG): container finished" podID="d2d7b6d9-f9f5-4548-a6c3-01248c076247" containerID="532b8e6314cba47965d71b56e974764127d7aea09ab3bd26abdced6550ee8857" exitCode=0 Jan 22 09:40:49 crc kubenswrapper[4811]: I0122 09:40:49.471604 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f" event={"ID":"d2d7b6d9-f9f5-4548-a6c3-01248c076247","Type":"ContainerDied","Data":"532b8e6314cba47965d71b56e974764127d7aea09ab3bd26abdced6550ee8857"} Jan 22 09:40:50 crc kubenswrapper[4811]: I0122 09:40:50.867021 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f" Jan 22 09:40:50 crc kubenswrapper[4811]: I0122 09:40:50.942309 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-repo-setup-combined-ca-bundle\") pod \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " Jan 22 09:40:50 crc kubenswrapper[4811]: I0122 09:40:50.942369 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d2d7b6d9-f9f5-4548-a6c3-01248c076247-openstack-edpm-ipam-ovn-default-certs-0\") pod \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " Jan 22 09:40:50 crc kubenswrapper[4811]: I0122 09:40:50.942435 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-inventory\") pod \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " Jan 22 09:40:50 crc kubenswrapper[4811]: I0122 09:40:50.942492 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-libvirt-combined-ca-bundle\") pod \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " Jan 22 09:40:50 crc kubenswrapper[4811]: I0122 09:40:50.942512 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-ssh-key-openstack-edpm-ipam\") pod \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " Jan 22 09:40:50 crc kubenswrapper[4811]: I0122 09:40:50.942542 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-ceph\") pod \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " Jan 22 09:40:50 crc kubenswrapper[4811]: I0122 09:40:50.942568 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-nova-combined-ca-bundle\") pod \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " Jan 22 09:40:50 crc kubenswrapper[4811]: I0122 09:40:50.942596 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d2d7b6d9-f9f5-4548-a6c3-01248c076247-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " Jan 22 09:40:50 crc kubenswrapper[4811]: I0122 09:40:50.942668 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-neutron-metadata-combined-ca-bundle\") pod \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " Jan 22 09:40:50 crc kubenswrapper[4811]: I0122 09:40:50.942691 4811 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-ovn-combined-ca-bundle\") pod \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " Jan 22 09:40:50 crc kubenswrapper[4811]: I0122 09:40:50.942718 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwh99\" (UniqueName: \"kubernetes.io/projected/d2d7b6d9-f9f5-4548-a6c3-01248c076247-kube-api-access-lwh99\") pod \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " Jan 22 09:40:50 crc kubenswrapper[4811]: I0122 09:40:50.942749 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-bootstrap-combined-ca-bundle\") pod \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " Jan 22 09:40:50 crc kubenswrapper[4811]: I0122 09:40:50.942769 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d2d7b6d9-f9f5-4548-a6c3-01248c076247-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " Jan 22 09:40:50 crc kubenswrapper[4811]: I0122 09:40:50.949991 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-ceph" (OuterVolumeSpecName: "ceph") pod "d2d7b6d9-f9f5-4548-a6c3-01248c076247" (UID: "d2d7b6d9-f9f5-4548-a6c3-01248c076247"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:40:50 crc kubenswrapper[4811]: I0122 09:40:50.950082 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2d7b6d9-f9f5-4548-a6c3-01248c076247-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "d2d7b6d9-f9f5-4548-a6c3-01248c076247" (UID: "d2d7b6d9-f9f5-4548-a6c3-01248c076247"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:40:50 crc kubenswrapper[4811]: I0122 09:40:50.950109 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2d7b6d9-f9f5-4548-a6c3-01248c076247-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "d2d7b6d9-f9f5-4548-a6c3-01248c076247" (UID: "d2d7b6d9-f9f5-4548-a6c3-01248c076247"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:40:50 crc kubenswrapper[4811]: I0122 09:40:50.950612 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "d2d7b6d9-f9f5-4548-a6c3-01248c076247" (UID: "d2d7b6d9-f9f5-4548-a6c3-01248c076247"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:40:50 crc kubenswrapper[4811]: I0122 09:40:50.951500 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2d7b6d9-f9f5-4548-a6c3-01248c076247-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "d2d7b6d9-f9f5-4548-a6c3-01248c076247" (UID: "d2d7b6d9-f9f5-4548-a6c3-01248c076247"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:40:50 crc kubenswrapper[4811]: I0122 09:40:50.952525 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "d2d7b6d9-f9f5-4548-a6c3-01248c076247" (UID: "d2d7b6d9-f9f5-4548-a6c3-01248c076247"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:40:50 crc kubenswrapper[4811]: I0122 09:40:50.953018 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2d7b6d9-f9f5-4548-a6c3-01248c076247-kube-api-access-lwh99" (OuterVolumeSpecName: "kube-api-access-lwh99") pod "d2d7b6d9-f9f5-4548-a6c3-01248c076247" (UID: "d2d7b6d9-f9f5-4548-a6c3-01248c076247"). InnerVolumeSpecName "kube-api-access-lwh99". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:40:50 crc kubenswrapper[4811]: I0122 09:40:50.953260 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "d2d7b6d9-f9f5-4548-a6c3-01248c076247" (UID: "d2d7b6d9-f9f5-4548-a6c3-01248c076247"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:40:50 crc kubenswrapper[4811]: I0122 09:40:50.953304 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "d2d7b6d9-f9f5-4548-a6c3-01248c076247" (UID: "d2d7b6d9-f9f5-4548-a6c3-01248c076247"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:40:50 crc kubenswrapper[4811]: I0122 09:40:50.954271 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "d2d7b6d9-f9f5-4548-a6c3-01248c076247" (UID: "d2d7b6d9-f9f5-4548-a6c3-01248c076247"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:40:50 crc kubenswrapper[4811]: I0122 09:40:50.956576 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "d2d7b6d9-f9f5-4548-a6c3-01248c076247" (UID: "d2d7b6d9-f9f5-4548-a6c3-01248c076247"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:40:50 crc kubenswrapper[4811]: E0122 09:40:50.968899 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-inventory podName:d2d7b6d9-f9f5-4548-a6c3-01248c076247 nodeName:}" failed. No retries permitted until 2026-01-22 09:40:51.468854937 +0000 UTC m=+2095.791042059 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "inventory" (UniqueName: "kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-inventory") pod "d2d7b6d9-f9f5-4548-a6c3-01248c076247" (UID: "d2d7b6d9-f9f5-4548-a6c3-01248c076247") : error deleting /var/lib/kubelet/pods/d2d7b6d9-f9f5-4548-a6c3-01248c076247/volume-subpaths: remove /var/lib/kubelet/pods/d2d7b6d9-f9f5-4548-a6c3-01248c076247/volume-subpaths: no such file or directory Jan 22 09:40:50 crc kubenswrapper[4811]: I0122 09:40:50.970773 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d2d7b6d9-f9f5-4548-a6c3-01248c076247" (UID: "d2d7b6d9-f9f5-4548-a6c3-01248c076247"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:40:51 crc kubenswrapper[4811]: I0122 09:40:51.044499 4811 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:40:51 crc kubenswrapper[4811]: I0122 09:40:51.044609 4811 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d2d7b6d9-f9f5-4548-a6c3-01248c076247-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 22 09:40:51 crc kubenswrapper[4811]: I0122 09:40:51.044696 4811 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:40:51 crc kubenswrapper[4811]: I0122 09:40:51.044759 4811 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 09:40:51 crc kubenswrapper[4811]: I0122 09:40:51.044825 4811 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-ceph\") on node \"crc\" DevicePath \"\"" Jan 22 09:40:51 crc kubenswrapper[4811]: I0122 09:40:51.044879 4811 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:40:51 crc kubenswrapper[4811]: I0122 09:40:51.044942 4811 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d2d7b6d9-f9f5-4548-a6c3-01248c076247-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 22 09:40:51 crc kubenswrapper[4811]: I0122 09:40:51.045005 4811 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:40:51 crc kubenswrapper[4811]: I0122 09:40:51.045057 4811 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:40:51 crc kubenswrapper[4811]: I0122 09:40:51.045116 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwh99\" (UniqueName: \"kubernetes.io/projected/d2d7b6d9-f9f5-4548-a6c3-01248c076247-kube-api-access-lwh99\") on node \"crc\" DevicePath \"\"" Jan 22 09:40:51 crc kubenswrapper[4811]: I0122 09:40:51.045172 4811 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:40:51 crc kubenswrapper[4811]: I0122 09:40:51.045221 4811 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d2d7b6d9-f9f5-4548-a6c3-01248c076247-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 22 09:40:51 crc kubenswrapper[4811]: I0122 09:40:51.489542 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f" event={"ID":"d2d7b6d9-f9f5-4548-a6c3-01248c076247","Type":"ContainerDied","Data":"68654280d9bbd4eaf3fcc78fce16574ff7346cfeaf394937b7e4be8646a38ced"} Jan 22 09:40:51 crc kubenswrapper[4811]: I0122 09:40:51.489875 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68654280d9bbd4eaf3fcc78fce16574ff7346cfeaf394937b7e4be8646a38ced" Jan 22 09:40:51 crc kubenswrapper[4811]: I0122 09:40:51.489599 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5m56f" Jan 22 09:40:51 crc kubenswrapper[4811]: I0122 09:40:51.552584 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-inventory\") pod \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\" (UID: \"d2d7b6d9-f9f5-4548-a6c3-01248c076247\") " Jan 22 09:40:51 crc kubenswrapper[4811]: I0122 09:40:51.557204 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-inventory" (OuterVolumeSpecName: "inventory") pod "d2d7b6d9-f9f5-4548-a6c3-01248c076247" (UID: "d2d7b6d9-f9f5-4548-a6c3-01248c076247"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:40:51 crc kubenswrapper[4811]: I0122 09:40:51.588940 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-q2556"] Jan 22 09:40:51 crc kubenswrapper[4811]: E0122 09:40:51.589279 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2d7b6d9-f9f5-4548-a6c3-01248c076247" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 22 09:40:51 crc kubenswrapper[4811]: I0122 09:40:51.589298 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2d7b6d9-f9f5-4548-a6c3-01248c076247" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 22 09:40:51 crc kubenswrapper[4811]: I0122 09:40:51.589481 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2d7b6d9-f9f5-4548-a6c3-01248c076247" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 22 09:40:51 crc kubenswrapper[4811]: I0122 09:40:51.590793 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-q2556" Jan 22 09:40:51 crc kubenswrapper[4811]: I0122 09:40:51.604697 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-q2556"] Jan 22 09:40:51 crc kubenswrapper[4811]: I0122 09:40:51.654773 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfnmz\" (UniqueName: \"kubernetes.io/projected/c61e1813-0266-4558-9a3d-5895a166d67f-kube-api-access-kfnmz\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-q2556\" (UID: \"c61e1813-0266-4558-9a3d-5895a166d67f\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-q2556" Jan 22 09:40:51 crc kubenswrapper[4811]: I0122 09:40:51.655145 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c61e1813-0266-4558-9a3d-5895a166d67f-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-q2556\" (UID: \"c61e1813-0266-4558-9a3d-5895a166d67f\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-q2556" Jan 22 09:40:51 crc kubenswrapper[4811]: I0122 09:40:51.655299 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c61e1813-0266-4558-9a3d-5895a166d67f-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-q2556\" (UID: \"c61e1813-0266-4558-9a3d-5895a166d67f\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-q2556" Jan 22 09:40:51 crc kubenswrapper[4811]: I0122 09:40:51.655439 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c61e1813-0266-4558-9a3d-5895a166d67f-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-q2556\" (UID: \"c61e1813-0266-4558-9a3d-5895a166d67f\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-q2556" Jan 22 09:40:51 crc kubenswrapper[4811]: I0122 09:40:51.655598 4811 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2d7b6d9-f9f5-4548-a6c3-01248c076247-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 09:40:51 crc kubenswrapper[4811]: I0122 09:40:51.757041 4811 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c61e1813-0266-4558-9a3d-5895a166d67f-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-q2556\" (UID: \"c61e1813-0266-4558-9a3d-5895a166d67f\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-q2556" Jan 22 09:40:51 crc kubenswrapper[4811]: I0122 09:40:51.757124 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfnmz\" (UniqueName: \"kubernetes.io/projected/c61e1813-0266-4558-9a3d-5895a166d67f-kube-api-access-kfnmz\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-q2556\" (UID: \"c61e1813-0266-4558-9a3d-5895a166d67f\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-q2556" Jan 22 09:40:51 crc kubenswrapper[4811]: I0122 09:40:51.757257 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c61e1813-0266-4558-9a3d-5895a166d67f-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-q2556\" (UID: \"c61e1813-0266-4558-9a3d-5895a166d67f\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-q2556" Jan 22 09:40:51 crc kubenswrapper[4811]: I0122 09:40:51.757313 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c61e1813-0266-4558-9a3d-5895a166d67f-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-q2556\" (UID: \"c61e1813-0266-4558-9a3d-5895a166d67f\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-q2556" Jan 22 09:40:51 crc kubenswrapper[4811]: I0122 09:40:51.760335 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c61e1813-0266-4558-9a3d-5895a166d67f-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-q2556\" (UID: \"c61e1813-0266-4558-9a3d-5895a166d67f\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-q2556" Jan 22 09:40:51 crc kubenswrapper[4811]: I0122 09:40:51.760347 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c61e1813-0266-4558-9a3d-5895a166d67f-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-q2556\" (UID: \"c61e1813-0266-4558-9a3d-5895a166d67f\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-q2556" Jan 22 09:40:51 crc kubenswrapper[4811]: I0122 09:40:51.760742 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c61e1813-0266-4558-9a3d-5895a166d67f-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-q2556\" (UID: \"c61e1813-0266-4558-9a3d-5895a166d67f\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-q2556" Jan 22 09:40:51 crc kubenswrapper[4811]: I0122 09:40:51.770104 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfnmz\" (UniqueName: \"kubernetes.io/projected/c61e1813-0266-4558-9a3d-5895a166d67f-kube-api-access-kfnmz\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-q2556\" (UID: \"c61e1813-0266-4558-9a3d-5895a166d67f\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-q2556" Jan 22 09:40:51 crc kubenswrapper[4811]: I0122 09:40:51.906320 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-q2556" Jan 22 09:40:52 crc kubenswrapper[4811]: I0122 09:40:52.370684 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-q2556"] Jan 22 09:40:52 crc kubenswrapper[4811]: I0122 09:40:52.501209 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-q2556" event={"ID":"c61e1813-0266-4558-9a3d-5895a166d67f","Type":"ContainerStarted","Data":"edc616202ff94413397f2c7a88993632efc25baa6c6d2a30f383f402e5f0bc9b"} Jan 22 09:40:53 crc kubenswrapper[4811]: I0122 09:40:53.517827 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-q2556" event={"ID":"c61e1813-0266-4558-9a3d-5895a166d67f","Type":"ContainerStarted","Data":"fa006659671373ed0acb2b0285a6dff7dfdc20cc3dcd2272bb5f75c955774c21"} Jan 22 09:40:53 crc kubenswrapper[4811]: I0122 09:40:53.538143 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-q2556" podStartSLOduration=2.042431012 podStartE2EDuration="2.538124162s" podCreationTimestamp="2026-01-22 09:40:51 +0000 UTC" firstStartedPulling="2026-01-22 09:40:52.373315572 +0000 UTC m=+2096.695502695" lastFinishedPulling="2026-01-22 09:40:52.869008723 +0000 UTC m=+2097.191195845" observedRunningTime="2026-01-22 09:40:53.532646912 +0000 UTC m=+2097.854834026" watchObservedRunningTime="2026-01-22 09:40:53.538124162 +0000 UTC m=+2097.860311285" Jan 22 09:40:57 crc kubenswrapper[4811]: I0122 09:40:57.546971 4811 generic.go:334] "Generic (PLEG): container finished" podID="c61e1813-0266-4558-9a3d-5895a166d67f" containerID="fa006659671373ed0acb2b0285a6dff7dfdc20cc3dcd2272bb5f75c955774c21" exitCode=0 Jan 22 09:40:57 crc kubenswrapper[4811]: I0122 09:40:57.547067 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-q2556" event={"ID":"c61e1813-0266-4558-9a3d-5895a166d67f","Type":"ContainerDied","Data":"fa006659671373ed0acb2b0285a6dff7dfdc20cc3dcd2272bb5f75c955774c21"} Jan 22 09:40:58 crc kubenswrapper[4811]: I0122 09:40:58.979300 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-q2556" Jan 22 09:40:59 crc kubenswrapper[4811]: I0122 09:40:59.100278 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c61e1813-0266-4558-9a3d-5895a166d67f-inventory\") pod \"c61e1813-0266-4558-9a3d-5895a166d67f\" (UID: \"c61e1813-0266-4558-9a3d-5895a166d67f\") " Jan 22 09:40:59 crc kubenswrapper[4811]: I0122 09:40:59.100695 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfnmz\" (UniqueName: \"kubernetes.io/projected/c61e1813-0266-4558-9a3d-5895a166d67f-kube-api-access-kfnmz\") pod \"c61e1813-0266-4558-9a3d-5895a166d67f\" (UID: \"c61e1813-0266-4558-9a3d-5895a166d67f\") " Jan 22 09:40:59 crc kubenswrapper[4811]: I0122 09:40:59.100880 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c61e1813-0266-4558-9a3d-5895a166d67f-ssh-key-openstack-edpm-ipam\") pod \"c61e1813-0266-4558-9a3d-5895a166d67f\" (UID: \"c61e1813-0266-4558-9a3d-5895a166d67f\") " Jan 22 09:40:59 crc kubenswrapper[4811]: I0122 09:40:59.101111 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c61e1813-0266-4558-9a3d-5895a166d67f-ceph\") pod \"c61e1813-0266-4558-9a3d-5895a166d67f\" (UID: \"c61e1813-0266-4558-9a3d-5895a166d67f\") " Jan 22 09:40:59 crc kubenswrapper[4811]: I0122 09:40:59.107710 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c61e1813-0266-4558-9a3d-5895a166d67f-ceph" (OuterVolumeSpecName: "ceph") pod "c61e1813-0266-4558-9a3d-5895a166d67f" (UID: "c61e1813-0266-4558-9a3d-5895a166d67f"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:40:59 crc kubenswrapper[4811]: I0122 09:40:59.107906 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c61e1813-0266-4558-9a3d-5895a166d67f-kube-api-access-kfnmz" (OuterVolumeSpecName: "kube-api-access-kfnmz") pod "c61e1813-0266-4558-9a3d-5895a166d67f" (UID: "c61e1813-0266-4558-9a3d-5895a166d67f"). InnerVolumeSpecName "kube-api-access-kfnmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:40:59 crc kubenswrapper[4811]: I0122 09:40:59.122702 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c61e1813-0266-4558-9a3d-5895a166d67f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c61e1813-0266-4558-9a3d-5895a166d67f" (UID: "c61e1813-0266-4558-9a3d-5895a166d67f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:40:59 crc kubenswrapper[4811]: I0122 09:40:59.122710 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c61e1813-0266-4558-9a3d-5895a166d67f-inventory" (OuterVolumeSpecName: "inventory") pod "c61e1813-0266-4558-9a3d-5895a166d67f" (UID: "c61e1813-0266-4558-9a3d-5895a166d67f"). InnerVolumeSpecName "inventory". 
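
The pod_startup_latency_tracker record a few lines up for the ceph-client pod encodes a small calculation: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling). Reproducing it with the logged timestamps, truncated to microseconds since strptime's %f stops there, so the result agrees to within a nanosecond:

```python
from datetime import datetime

# Timestamps copied from the pod_startup_latency_tracker line for
# openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-q2556.
fmt = "%Y-%m-%d %H:%M:%S.%f %z"
created  = datetime.strptime("2026-01-22 09:40:51.000000 +0000", fmt)
pull_beg = datetime.strptime("2026-01-22 09:40:52.373315 +0000", fmt)
pull_end = datetime.strptime("2026-01-22 09:40:52.869008 +0000", fmt)
observed = datetime.strptime("2026-01-22 09:40:53.538124 +0000", fmt)

e2e = (observed - created).total_seconds()           # podStartE2EDuration ~ 2.538124s
slo = e2e - (pull_end - pull_beg).total_seconds()    # podStartSLOduration ~ 2.042431s
print(f"E2E={e2e:.6f}s  SLO(excluding image pull)={slo:.6f}s")
```
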
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:40:59 crc kubenswrapper[4811]: I0122 09:40:59.204855 4811 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c61e1813-0266-4558-9a3d-5895a166d67f-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 09:40:59 crc kubenswrapper[4811]: I0122 09:40:59.204893 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfnmz\" (UniqueName: \"kubernetes.io/projected/c61e1813-0266-4558-9a3d-5895a166d67f-kube-api-access-kfnmz\") on node \"crc\" DevicePath \"\"" Jan 22 09:40:59 crc kubenswrapper[4811]: I0122 09:40:59.204909 4811 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c61e1813-0266-4558-9a3d-5895a166d67f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 09:40:59 crc kubenswrapper[4811]: I0122 09:40:59.205126 4811 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c61e1813-0266-4558-9a3d-5895a166d67f-ceph\") on node \"crc\" DevicePath \"\"" Jan 22 09:40:59 crc kubenswrapper[4811]: I0122 09:40:59.569045 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-q2556" event={"ID":"c61e1813-0266-4558-9a3d-5895a166d67f","Type":"ContainerDied","Data":"edc616202ff94413397f2c7a88993632efc25baa6c6d2a30f383f402e5f0bc9b"} Jan 22 09:40:59 crc kubenswrapper[4811]: I0122 09:40:59.569091 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edc616202ff94413397f2c7a88993632efc25baa6c6d2a30f383f402e5f0bc9b" Jan 22 09:40:59 crc kubenswrapper[4811]: I0122 09:40:59.569181 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-q2556" Jan 22 09:40:59 crc kubenswrapper[4811]: I0122 09:40:59.644496 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q54d"] Jan 22 09:40:59 crc kubenswrapper[4811]: E0122 09:40:59.644936 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c61e1813-0266-4558-9a3d-5895a166d67f" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Jan 22 09:40:59 crc kubenswrapper[4811]: I0122 09:40:59.644958 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="c61e1813-0266-4558-9a3d-5895a166d67f" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Jan 22 09:40:59 crc kubenswrapper[4811]: I0122 09:40:59.645129 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="c61e1813-0266-4558-9a3d-5895a166d67f" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Jan 22 09:40:59 crc kubenswrapper[4811]: I0122 09:40:59.645706 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q54d" Jan 22 09:40:59 crc kubenswrapper[4811]: I0122 09:40:59.647200 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 22 09:40:59 crc kubenswrapper[4811]: I0122 09:40:59.647759 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 09:40:59 crc kubenswrapper[4811]: I0122 09:40:59.648420 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 09:40:59 crc kubenswrapper[4811]: I0122 09:40:59.649960 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7cbs6" Jan 22 09:40:59 crc kubenswrapper[4811]: I0122 09:40:59.658510 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q54d"] Jan 22 09:40:59 crc kubenswrapper[4811]: I0122 09:40:59.659375 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 09:40:59 crc kubenswrapper[4811]: I0122 09:40:59.659554 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 22 09:40:59 crc kubenswrapper[4811]: I0122 09:40:59.714367 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8fafd202-523c-44b0-b229-527193721bb1-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5q54d\" (UID: \"8fafd202-523c-44b0-b229-527193721bb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q54d" Jan 22 09:40:59 crc kubenswrapper[4811]: I0122 09:40:59.714554 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fafd202-523c-44b0-b229-527193721bb1-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5q54d\" (UID: \"8fafd202-523c-44b0-b229-527193721bb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q54d" Jan 22 09:40:59 crc kubenswrapper[4811]: I0122 09:40:59.714596 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8fafd202-523c-44b0-b229-527193721bb1-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5q54d\" (UID: \"8fafd202-523c-44b0-b229-527193721bb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q54d" Jan 22 09:40:59 crc kubenswrapper[4811]: I0122 09:40:59.714620 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm82p\" (UniqueName: \"kubernetes.io/projected/8fafd202-523c-44b0-b229-527193721bb1-kube-api-access-lm82p\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5q54d\" (UID: \"8fafd202-523c-44b0-b229-527193721bb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q54d" Jan 22 09:40:59 crc kubenswrapper[4811]: I0122 09:40:59.714675 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/8fafd202-523c-44b0-b229-527193721bb1-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5q54d\" (UID: \"8fafd202-523c-44b0-b229-527193721bb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q54d" Jan 22 09:40:59 crc 
kubenswrapper[4811]: I0122 09:40:59.714731 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8fafd202-523c-44b0-b229-527193721bb1-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5q54d\" (UID: \"8fafd202-523c-44b0-b229-527193721bb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q54d" Jan 22 09:40:59 crc kubenswrapper[4811]: I0122 09:40:59.817202 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8fafd202-523c-44b0-b229-527193721bb1-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5q54d\" (UID: \"8fafd202-523c-44b0-b229-527193721bb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q54d" Jan 22 09:40:59 crc kubenswrapper[4811]: I0122 09:40:59.817455 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fafd202-523c-44b0-b229-527193721bb1-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5q54d\" (UID: \"8fafd202-523c-44b0-b229-527193721bb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q54d" Jan 22 09:40:59 crc kubenswrapper[4811]: I0122 09:40:59.817492 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8fafd202-523c-44b0-b229-527193721bb1-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5q54d\" (UID: \"8fafd202-523c-44b0-b229-527193721bb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q54d" Jan 22 09:40:59 crc kubenswrapper[4811]: I0122 09:40:59.817518 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm82p\" (UniqueName: \"kubernetes.io/projected/8fafd202-523c-44b0-b229-527193721bb1-kube-api-access-lm82p\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5q54d\" (UID: \"8fafd202-523c-44b0-b229-527193721bb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q54d" Jan 22 09:40:59 crc kubenswrapper[4811]: I0122 09:40:59.817563 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/8fafd202-523c-44b0-b229-527193721bb1-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5q54d\" (UID: \"8fafd202-523c-44b0-b229-527193721bb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q54d" Jan 22 09:40:59 crc kubenswrapper[4811]: I0122 09:40:59.817607 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8fafd202-523c-44b0-b229-527193721bb1-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5q54d\" (UID: \"8fafd202-523c-44b0-b229-527193721bb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q54d" Jan 22 09:40:59 crc kubenswrapper[4811]: I0122 09:40:59.819271 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/8fafd202-523c-44b0-b229-527193721bb1-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5q54d\" (UID: \"8fafd202-523c-44b0-b229-527193721bb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q54d" Jan 22 09:40:59 crc kubenswrapper[4811]: I0122 09:40:59.820965 4811 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8fafd202-523c-44b0-b229-527193721bb1-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5q54d\" (UID: \"8fafd202-523c-44b0-b229-527193721bb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q54d" Jan 22 09:40:59 crc kubenswrapper[4811]: I0122 09:40:59.820981 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8fafd202-523c-44b0-b229-527193721bb1-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5q54d\" (UID: \"8fafd202-523c-44b0-b229-527193721bb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q54d" Jan 22 09:40:59 crc kubenswrapper[4811]: I0122 09:40:59.822695 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fafd202-523c-44b0-b229-527193721bb1-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5q54d\" (UID: \"8fafd202-523c-44b0-b229-527193721bb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q54d" Jan 22 09:40:59 crc kubenswrapper[4811]: I0122 09:40:59.823220 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8fafd202-523c-44b0-b229-527193721bb1-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5q54d\" (UID: \"8fafd202-523c-44b0-b229-527193721bb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q54d" Jan 22 09:40:59 crc kubenswrapper[4811]: I0122 09:40:59.834288 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm82p\" (UniqueName: \"kubernetes.io/projected/8fafd202-523c-44b0-b229-527193721bb1-kube-api-access-lm82p\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5q54d\" (UID: \"8fafd202-523c-44b0-b229-527193721bb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q54d" Jan 22 09:40:59 crc kubenswrapper[4811]: I0122 09:40:59.962546 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q54d" Jan 22 09:41:00 crc kubenswrapper[4811]: I0122 09:41:00.420554 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q54d"] Jan 22 09:41:00 crc kubenswrapper[4811]: I0122 09:41:00.580304 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q54d" event={"ID":"8fafd202-523c-44b0-b229-527193721bb1","Type":"ContainerStarted","Data":"0beae6b38581211b9a9ff7ede36e6308eeb3265009b125cba99668d411b6f565"} Jan 22 09:41:01 crc kubenswrapper[4811]: I0122 09:41:01.587834 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q54d" event={"ID":"8fafd202-523c-44b0-b229-527193721bb1","Type":"ContainerStarted","Data":"84ce8b7c8f9ad6c11dee3b03cc7eb24d6bc5b9186b052a7da72900e1702595cc"} Jan 22 09:41:01 crc kubenswrapper[4811]: I0122 09:41:01.613803 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q54d" podStartSLOduration=2.113131295 podStartE2EDuration="2.61378964s" podCreationTimestamp="2026-01-22 09:40:59 +0000 UTC" firstStartedPulling="2026-01-22 09:41:00.435601666 +0000 UTC m=+2104.757788789" lastFinishedPulling="2026-01-22 09:41:00.936260011 +0000 UTC m=+2105.258447134" observedRunningTime="2026-01-22 09:41:01.609743368 +0000 UTC m=+2105.931930491" watchObservedRunningTime="2026-01-22 09:41:01.61378964 +0000 UTC m=+2105.935976763" Jan 22 09:41:05 crc kubenswrapper[4811]: I0122 09:41:05.501686 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:41:05 crc kubenswrapper[4811]: I0122 09:41:05.503784 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:41:35 crc kubenswrapper[4811]: I0122 09:41:35.501416 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:41:35 crc kubenswrapper[4811]: I0122 09:41:35.502136 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:41:59 crc kubenswrapper[4811]: I0122 09:41:59.051869 4811 generic.go:334] "Generic (PLEG): container finished" podID="8fafd202-523c-44b0-b229-527193721bb1" containerID="84ce8b7c8f9ad6c11dee3b03cc7eb24d6bc5b9186b052a7da72900e1702595cc" exitCode=0 Jan 22 09:41:59 crc kubenswrapper[4811]: I0122 09:41:59.051954 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q54d" 
event={"ID":"8fafd202-523c-44b0-b229-527193721bb1","Type":"ContainerDied","Data":"84ce8b7c8f9ad6c11dee3b03cc7eb24d6bc5b9186b052a7da72900e1702595cc"} Jan 22 09:42:00 crc kubenswrapper[4811]: I0122 09:42:00.474503 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q54d" Jan 22 09:42:00 crc kubenswrapper[4811]: I0122 09:42:00.525686 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8fafd202-523c-44b0-b229-527193721bb1-ceph\") pod \"8fafd202-523c-44b0-b229-527193721bb1\" (UID: \"8fafd202-523c-44b0-b229-527193721bb1\") " Jan 22 09:42:00 crc kubenswrapper[4811]: I0122 09:42:00.525757 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fafd202-523c-44b0-b229-527193721bb1-ovn-combined-ca-bundle\") pod \"8fafd202-523c-44b0-b229-527193721bb1\" (UID: \"8fafd202-523c-44b0-b229-527193721bb1\") " Jan 22 09:42:00 crc kubenswrapper[4811]: I0122 09:42:00.525876 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lm82p\" (UniqueName: \"kubernetes.io/projected/8fafd202-523c-44b0-b229-527193721bb1-kube-api-access-lm82p\") pod \"8fafd202-523c-44b0-b229-527193721bb1\" (UID: \"8fafd202-523c-44b0-b229-527193721bb1\") " Jan 22 09:42:00 crc kubenswrapper[4811]: I0122 09:42:00.526022 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8fafd202-523c-44b0-b229-527193721bb1-inventory\") pod \"8fafd202-523c-44b0-b229-527193721bb1\" (UID: \"8fafd202-523c-44b0-b229-527193721bb1\") " Jan 22 09:42:00 crc kubenswrapper[4811]: I0122 09:42:00.526320 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/8fafd202-523c-44b0-b229-527193721bb1-ovncontroller-config-0\") pod \"8fafd202-523c-44b0-b229-527193721bb1\" (UID: \"8fafd202-523c-44b0-b229-527193721bb1\") " Jan 22 09:42:00 crc kubenswrapper[4811]: I0122 09:42:00.526894 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8fafd202-523c-44b0-b229-527193721bb1-ssh-key-openstack-edpm-ipam\") pod \"8fafd202-523c-44b0-b229-527193721bb1\" (UID: \"8fafd202-523c-44b0-b229-527193721bb1\") " Jan 22 09:42:00 crc kubenswrapper[4811]: I0122 09:42:00.533872 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fafd202-523c-44b0-b229-527193721bb1-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "8fafd202-523c-44b0-b229-527193721bb1" (UID: "8fafd202-523c-44b0-b229-527193721bb1"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:42:00 crc kubenswrapper[4811]: I0122 09:42:00.539785 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fafd202-523c-44b0-b229-527193721bb1-ceph" (OuterVolumeSpecName: "ceph") pod "8fafd202-523c-44b0-b229-527193721bb1" (UID: "8fafd202-523c-44b0-b229-527193721bb1"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:42:00 crc kubenswrapper[4811]: I0122 09:42:00.539954 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fafd202-523c-44b0-b229-527193721bb1-kube-api-access-lm82p" (OuterVolumeSpecName: "kube-api-access-lm82p") pod "8fafd202-523c-44b0-b229-527193721bb1" (UID: "8fafd202-523c-44b0-b229-527193721bb1"). InnerVolumeSpecName "kube-api-access-lm82p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:42:00 crc kubenswrapper[4811]: I0122 09:42:00.549575 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fafd202-523c-44b0-b229-527193721bb1-inventory" (OuterVolumeSpecName: "inventory") pod "8fafd202-523c-44b0-b229-527193721bb1" (UID: "8fafd202-523c-44b0-b229-527193721bb1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:42:00 crc kubenswrapper[4811]: I0122 09:42:00.550690 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fafd202-523c-44b0-b229-527193721bb1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8fafd202-523c-44b0-b229-527193721bb1" (UID: "8fafd202-523c-44b0-b229-527193721bb1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:42:00 crc kubenswrapper[4811]: I0122 09:42:00.560367 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fafd202-523c-44b0-b229-527193721bb1-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "8fafd202-523c-44b0-b229-527193721bb1" (UID: "8fafd202-523c-44b0-b229-527193721bb1"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:42:00 crc kubenswrapper[4811]: I0122 09:42:00.630553 4811 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8fafd202-523c-44b0-b229-527193721bb1-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 09:42:00 crc kubenswrapper[4811]: I0122 09:42:00.630671 4811 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/8fafd202-523c-44b0-b229-527193721bb1-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 22 09:42:00 crc kubenswrapper[4811]: I0122 09:42:00.630751 4811 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8fafd202-523c-44b0-b229-527193721bb1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 09:42:00 crc kubenswrapper[4811]: I0122 09:42:00.630807 4811 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8fafd202-523c-44b0-b229-527193721bb1-ceph\") on node \"crc\" DevicePath \"\"" Jan 22 09:42:00 crc kubenswrapper[4811]: I0122 09:42:00.630865 4811 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fafd202-523c-44b0-b229-527193721bb1-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:42:00 crc kubenswrapper[4811]: I0122 09:42:00.630928 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lm82p\" (UniqueName: \"kubernetes.io/projected/8fafd202-523c-44b0-b229-527193721bb1-kube-api-access-lm82p\") on node \"crc\" DevicePath \"\"" Jan 22 09:42:01 crc kubenswrapper[4811]: I0122 09:42:01.070457 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q54d" event={"ID":"8fafd202-523c-44b0-b229-527193721bb1","Type":"ContainerDied","Data":"0beae6b38581211b9a9ff7ede36e6308eeb3265009b125cba99668d411b6f565"} Jan 22 09:42:01 crc kubenswrapper[4811]: I0122 09:42:01.070509 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0beae6b38581211b9a9ff7ede36e6308eeb3265009b125cba99668d411b6f565" Jan 22 09:42:01 crc kubenswrapper[4811]: I0122 09:42:01.070580 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q54d" Jan 22 09:42:01 crc kubenswrapper[4811]: I0122 09:42:01.155904 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d"] Jan 22 09:42:01 crc kubenswrapper[4811]: E0122 09:42:01.156578 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fafd202-523c-44b0-b229-527193721bb1" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 22 09:42:01 crc kubenswrapper[4811]: I0122 09:42:01.156598 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fafd202-523c-44b0-b229-527193721bb1" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 22 09:42:01 crc kubenswrapper[4811]: I0122 09:42:01.156809 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fafd202-523c-44b0-b229-527193721bb1" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 22 09:42:01 crc kubenswrapper[4811]: I0122 09:42:01.157453 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d" Jan 22 09:42:01 crc kubenswrapper[4811]: I0122 09:42:01.159384 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 09:42:01 crc kubenswrapper[4811]: I0122 09:42:01.159472 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 22 09:42:01 crc kubenswrapper[4811]: I0122 09:42:01.163343 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 09:42:01 crc kubenswrapper[4811]: I0122 09:42:01.163438 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 09:42:01 crc kubenswrapper[4811]: I0122 09:42:01.163463 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 22 09:42:01 crc kubenswrapper[4811]: I0122 09:42:01.163509 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7cbs6" Jan 22 09:42:01 crc kubenswrapper[4811]: I0122 09:42:01.163804 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 22 09:42:01 crc kubenswrapper[4811]: I0122 09:42:01.178269 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d"] Jan 22 09:42:01 crc kubenswrapper[4811]: I0122 09:42:01.250087 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3c16de7c-e366-4871-b006-d63a565fb17e-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d\" (UID: \"3c16de7c-e366-4871-b006-d63a565fb17e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d" Jan 22 09:42:01 crc kubenswrapper[4811]: I0122 09:42:01.250203 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3c16de7c-e366-4871-b006-d63a565fb17e-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d\" (UID: \"3c16de7c-e366-4871-b006-d63a565fb17e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d" Jan 22 09:42:01 crc kubenswrapper[4811]: I0122 09:42:01.250250 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2xx5\" (UniqueName: \"kubernetes.io/projected/3c16de7c-e366-4871-b006-d63a565fb17e-kube-api-access-x2xx5\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d\" (UID: \"3c16de7c-e366-4871-b006-d63a565fb17e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d" Jan 22 09:42:01 crc kubenswrapper[4811]: I0122 09:42:01.250297 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3c16de7c-e366-4871-b006-d63a565fb17e-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d\" (UID: \"3c16de7c-e366-4871-b006-d63a565fb17e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d" Jan 22 09:42:01 crc kubenswrapper[4811]: I0122 09:42:01.250340 4811 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c16de7c-e366-4871-b006-d63a565fb17e-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d\" (UID: \"3c16de7c-e366-4871-b006-d63a565fb17e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d" Jan 22 09:42:01 crc kubenswrapper[4811]: I0122 09:42:01.250520 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3c16de7c-e366-4871-b006-d63a565fb17e-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d\" (UID: \"3c16de7c-e366-4871-b006-d63a565fb17e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d" Jan 22 09:42:01 crc kubenswrapper[4811]: I0122 09:42:01.250726 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c16de7c-e366-4871-b006-d63a565fb17e-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d\" (UID: \"3c16de7c-e366-4871-b006-d63a565fb17e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d" Jan 22 09:42:01 crc kubenswrapper[4811]: I0122 09:42:01.354818 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3c16de7c-e366-4871-b006-d63a565fb17e-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d\" (UID: \"3c16de7c-e366-4871-b006-d63a565fb17e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d" Jan 22 09:42:01 crc kubenswrapper[4811]: I0122 09:42:01.355053 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c16de7c-e366-4871-b006-d63a565fb17e-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d\" (UID: \"3c16de7c-e366-4871-b006-d63a565fb17e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d" Jan 22 09:42:01 crc kubenswrapper[4811]: I0122 09:42:01.355195 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3c16de7c-e366-4871-b006-d63a565fb17e-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d\" (UID: \"3c16de7c-e366-4871-b006-d63a565fb17e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d" Jan 22 09:42:01 crc kubenswrapper[4811]: I0122 09:42:01.355313 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3c16de7c-e366-4871-b006-d63a565fb17e-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d\" (UID: \"3c16de7c-e366-4871-b006-d63a565fb17e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d" Jan 22 09:42:01 crc kubenswrapper[4811]: I0122 09:42:01.355376 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2xx5\" (UniqueName: \"kubernetes.io/projected/3c16de7c-e366-4871-b006-d63a565fb17e-kube-api-access-x2xx5\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d\" (UID: \"3c16de7c-e366-4871-b006-d63a565fb17e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d" Jan 22 09:42:01 crc kubenswrapper[4811]: I0122 09:42:01.355435 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3c16de7c-e366-4871-b006-d63a565fb17e-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d\" (UID: \"3c16de7c-e366-4871-b006-d63a565fb17e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d" Jan 22 09:42:01 crc kubenswrapper[4811]: I0122 09:42:01.355478 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c16de7c-e366-4871-b006-d63a565fb17e-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d\" (UID: \"3c16de7c-e366-4871-b006-d63a565fb17e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d" Jan 22 09:42:01 crc kubenswrapper[4811]: I0122 09:42:01.359791 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c16de7c-e366-4871-b006-d63a565fb17e-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d\" (UID: \"3c16de7c-e366-4871-b006-d63a565fb17e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d" Jan 22 09:42:01 crc kubenswrapper[4811]: I0122 09:42:01.361216 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3c16de7c-e366-4871-b006-d63a565fb17e-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d\" (UID: \"3c16de7c-e366-4871-b006-d63a565fb17e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d" Jan 22 09:42:01 crc kubenswrapper[4811]: I0122 09:42:01.361264 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3c16de7c-e366-4871-b006-d63a565fb17e-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d\" (UID: \"3c16de7c-e366-4871-b006-d63a565fb17e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d" Jan 22 09:42:01 crc kubenswrapper[4811]: I0122 09:42:01.361435 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3c16de7c-e366-4871-b006-d63a565fb17e-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d\" (UID: \"3c16de7c-e366-4871-b006-d63a565fb17e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d" Jan 22 09:42:01 crc kubenswrapper[4811]: I0122 09:42:01.361596 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3c16de7c-e366-4871-b006-d63a565fb17e-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d\" (UID: \"3c16de7c-e366-4871-b006-d63a565fb17e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d" Jan 22 09:42:01 crc kubenswrapper[4811]: I0122 09:42:01.361893 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c16de7c-e366-4871-b006-d63a565fb17e-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d\" (UID: \"3c16de7c-e366-4871-b006-d63a565fb17e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d" Jan 22 09:42:01 crc kubenswrapper[4811]: I0122 09:42:01.370598 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2xx5\" (UniqueName: \"kubernetes.io/projected/3c16de7c-e366-4871-b006-d63a565fb17e-kube-api-access-x2xx5\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d\" (UID: \"3c16de7c-e366-4871-b006-d63a565fb17e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d" Jan 22 09:42:01 crc kubenswrapper[4811]: I0122 09:42:01.475554 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d" Jan 22 09:42:01 crc kubenswrapper[4811]: I0122 09:42:01.935948 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d"] Jan 22 09:42:01 crc kubenswrapper[4811]: W0122 09:42:01.937432 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c16de7c_e366_4871_b006_d63a565fb17e.slice/crio-e91b7813bd8138247fe968c74bc051f1e4a865525c8dade314afc6ff76791a1c WatchSource:0}: Error finding container e91b7813bd8138247fe968c74bc051f1e4a865525c8dade314afc6ff76791a1c: Status 404 returned error can't find the container with id e91b7813bd8138247fe968c74bc051f1e4a865525c8dade314afc6ff76791a1c Jan 22 09:42:01 crc kubenswrapper[4811]: I0122 09:42:01.939675 4811 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 09:42:02 crc kubenswrapper[4811]: I0122 09:42:02.080580 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d" event={"ID":"3c16de7c-e366-4871-b006-d63a565fb17e","Type":"ContainerStarted","Data":"e91b7813bd8138247fe968c74bc051f1e4a865525c8dade314afc6ff76791a1c"} Jan 22 09:42:03 crc kubenswrapper[4811]: I0122 09:42:03.093421 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d" event={"ID":"3c16de7c-e366-4871-b006-d63a565fb17e","Type":"ContainerStarted","Data":"6e6513dec2f8faa11cc3a773e39eee32c767df7aff686776db91df74b364b0ef"} Jan 22 09:42:03 crc kubenswrapper[4811]: I0122 09:42:03.116892 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d" podStartSLOduration=1.602525046 podStartE2EDuration="2.116870106s" podCreationTimestamp="2026-01-22 09:42:01 +0000 UTC" firstStartedPulling="2026-01-22 09:42:01.939453607 +0000 UTC m=+2166.261640730" lastFinishedPulling="2026-01-22 09:42:02.453798667 +0000 UTC m=+2166.775985790" observedRunningTime="2026-01-22 09:42:03.109114067 +0000 UTC m=+2167.431301190" watchObservedRunningTime="2026-01-22 09:42:03.116870106 +0000 UTC m=+2167.439057229" Jan 22 09:42:05 crc kubenswrapper[4811]: I0122 09:42:05.500954 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:42:05 crc kubenswrapper[4811]: I0122 09:42:05.501469 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:42:05 crc kubenswrapper[4811]: I0122 09:42:05.501516 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" Jan 22 09:42:05 crc kubenswrapper[4811]: I0122 09:42:05.502078 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a3eaf3fca355ac62b01468032411a18e7dabb820be02232f4becb3d78ad2b686"} pod="openshift-machine-config-operator/machine-config-daemon-txvcq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 09:42:05 crc kubenswrapper[4811]: I0122 09:42:05.502135 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" containerID="cri-o://a3eaf3fca355ac62b01468032411a18e7dabb820be02232f4becb3d78ad2b686" gracePeriod=600 Jan 22 09:42:05 crc kubenswrapper[4811]: E0122 09:42:05.619526 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:42:06 crc kubenswrapper[4811]: I0122 09:42:06.116534 4811 generic.go:334] "Generic (PLEG): container finished" podID="84068a6b-e189-419b-87f5-f31428f6eafe" containerID="a3eaf3fca355ac62b01468032411a18e7dabb820be02232f4becb3d78ad2b686" exitCode=0 Jan 22 09:42:06 crc kubenswrapper[4811]: I0122 09:42:06.116573 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" event={"ID":"84068a6b-e189-419b-87f5-f31428f6eafe","Type":"ContainerDied","Data":"a3eaf3fca355ac62b01468032411a18e7dabb820be02232f4becb3d78ad2b686"} Jan 22 09:42:06 crc kubenswrapper[4811]: I0122 09:42:06.116607 4811 scope.go:117] "RemoveContainer" containerID="9d25b46cfa8348bdd18bcf253f004ae0938dcf272be4899b8026857ee7fbaf9f" Jan 22 09:42:06 crc kubenswrapper[4811]: I0122 09:42:06.117491 4811 scope.go:117] "RemoveContainer" containerID="a3eaf3fca355ac62b01468032411a18e7dabb820be02232f4becb3d78ad2b686" Jan 22 09:42:06 crc kubenswrapper[4811]: E0122 09:42:06.117894 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:42:19 crc kubenswrapper[4811]: I0122 09:42:19.992521 4811 scope.go:117] "RemoveContainer" 
containerID="a3eaf3fca355ac62b01468032411a18e7dabb820be02232f4becb3d78ad2b686" Jan 22 09:42:19 crc kubenswrapper[4811]: E0122 09:42:19.993366 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:42:31 crc kubenswrapper[4811]: I0122 09:42:31.992473 4811 scope.go:117] "RemoveContainer" containerID="a3eaf3fca355ac62b01468032411a18e7dabb820be02232f4becb3d78ad2b686" Jan 22 09:42:31 crc kubenswrapper[4811]: E0122 09:42:31.993054 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:42:38 crc kubenswrapper[4811]: I0122 09:42:38.901042 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zvxsg"] Jan 22 09:42:38 crc kubenswrapper[4811]: I0122 09:42:38.903185 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zvxsg" Jan 22 09:42:38 crc kubenswrapper[4811]: I0122 09:42:38.914041 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zvxsg"] Jan 22 09:42:38 crc kubenswrapper[4811]: I0122 09:42:38.984813 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c35fa6c-cc25-4ad6-ba73-fc1893155a66-utilities\") pod \"redhat-operators-zvxsg\" (UID: \"0c35fa6c-cc25-4ad6-ba73-fc1893155a66\") " pod="openshift-marketplace/redhat-operators-zvxsg" Jan 22 09:42:38 crc kubenswrapper[4811]: I0122 09:42:38.984866 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggkdr\" (UniqueName: \"kubernetes.io/projected/0c35fa6c-cc25-4ad6-ba73-fc1893155a66-kube-api-access-ggkdr\") pod \"redhat-operators-zvxsg\" (UID: \"0c35fa6c-cc25-4ad6-ba73-fc1893155a66\") " pod="openshift-marketplace/redhat-operators-zvxsg" Jan 22 09:42:38 crc kubenswrapper[4811]: I0122 09:42:38.984970 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c35fa6c-cc25-4ad6-ba73-fc1893155a66-catalog-content\") pod \"redhat-operators-zvxsg\" (UID: \"0c35fa6c-cc25-4ad6-ba73-fc1893155a66\") " pod="openshift-marketplace/redhat-operators-zvxsg" Jan 22 09:42:39 crc kubenswrapper[4811]: I0122 09:42:39.086437 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c35fa6c-cc25-4ad6-ba73-fc1893155a66-utilities\") pod \"redhat-operators-zvxsg\" (UID: \"0c35fa6c-cc25-4ad6-ba73-fc1893155a66\") " pod="openshift-marketplace/redhat-operators-zvxsg" Jan 22 09:42:39 crc kubenswrapper[4811]: I0122 09:42:39.086511 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggkdr\" 
(UniqueName: \"kubernetes.io/projected/0c35fa6c-cc25-4ad6-ba73-fc1893155a66-kube-api-access-ggkdr\") pod \"redhat-operators-zvxsg\" (UID: \"0c35fa6c-cc25-4ad6-ba73-fc1893155a66\") " pod="openshift-marketplace/redhat-operators-zvxsg" Jan 22 09:42:39 crc kubenswrapper[4811]: I0122 09:42:39.086593 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c35fa6c-cc25-4ad6-ba73-fc1893155a66-catalog-content\") pod \"redhat-operators-zvxsg\" (UID: \"0c35fa6c-cc25-4ad6-ba73-fc1893155a66\") " pod="openshift-marketplace/redhat-operators-zvxsg" Jan 22 09:42:39 crc kubenswrapper[4811]: I0122 09:42:39.087069 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c35fa6c-cc25-4ad6-ba73-fc1893155a66-utilities\") pod \"redhat-operators-zvxsg\" (UID: \"0c35fa6c-cc25-4ad6-ba73-fc1893155a66\") " pod="openshift-marketplace/redhat-operators-zvxsg" Jan 22 09:42:39 crc kubenswrapper[4811]: I0122 09:42:39.087248 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c35fa6c-cc25-4ad6-ba73-fc1893155a66-catalog-content\") pod \"redhat-operators-zvxsg\" (UID: \"0c35fa6c-cc25-4ad6-ba73-fc1893155a66\") " pod="openshift-marketplace/redhat-operators-zvxsg" Jan 22 09:42:39 crc kubenswrapper[4811]: I0122 09:42:39.104146 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggkdr\" (UniqueName: \"kubernetes.io/projected/0c35fa6c-cc25-4ad6-ba73-fc1893155a66-kube-api-access-ggkdr\") pod \"redhat-operators-zvxsg\" (UID: \"0c35fa6c-cc25-4ad6-ba73-fc1893155a66\") " pod="openshift-marketplace/redhat-operators-zvxsg" Jan 22 09:42:39 crc kubenswrapper[4811]: I0122 09:42:39.220017 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zvxsg" Jan 22 09:42:39 crc kubenswrapper[4811]: I0122 09:42:39.617892 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zvxsg"] Jan 22 09:42:39 crc kubenswrapper[4811]: W0122 09:42:39.618414 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c35fa6c_cc25_4ad6_ba73_fc1893155a66.slice/crio-54eaccc55a268b6611335f6ad340d2221b045ea5f7681f03c44b345f2c0bfb8d WatchSource:0}: Error finding container 54eaccc55a268b6611335f6ad340d2221b045ea5f7681f03c44b345f2c0bfb8d: Status 404 returned error can't find the container with id 54eaccc55a268b6611335f6ad340d2221b045ea5f7681f03c44b345f2c0bfb8d Jan 22 09:42:40 crc kubenswrapper[4811]: I0122 09:42:40.342349 4811 generic.go:334] "Generic (PLEG): container finished" podID="0c35fa6c-cc25-4ad6-ba73-fc1893155a66" containerID="286951c3505a97514ecc0a8f4a9b36844af28aa840a21fd50570706555591cd3" exitCode=0 Jan 22 09:42:40 crc kubenswrapper[4811]: I0122 09:42:40.342693 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zvxsg" event={"ID":"0c35fa6c-cc25-4ad6-ba73-fc1893155a66","Type":"ContainerDied","Data":"286951c3505a97514ecc0a8f4a9b36844af28aa840a21fd50570706555591cd3"} Jan 22 09:42:40 crc kubenswrapper[4811]: I0122 09:42:40.342720 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zvxsg" event={"ID":"0c35fa6c-cc25-4ad6-ba73-fc1893155a66","Type":"ContainerStarted","Data":"54eaccc55a268b6611335f6ad340d2221b045ea5f7681f03c44b345f2c0bfb8d"} Jan 22 09:42:41 crc kubenswrapper[4811]: I0122 09:42:41.350359 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zvxsg" event={"ID":"0c35fa6c-cc25-4ad6-ba73-fc1893155a66","Type":"ContainerStarted","Data":"b7d820790c88aec15d226d171e3d7b840afb74b475a9ada3f504a1a634a6576b"} Jan 22 09:42:43 crc kubenswrapper[4811]: I0122 09:42:43.992508 4811 scope.go:117] "RemoveContainer" containerID="a3eaf3fca355ac62b01468032411a18e7dabb820be02232f4becb3d78ad2b686" Jan 22 09:42:43 crc kubenswrapper[4811]: E0122 09:42:43.993335 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:42:44 crc kubenswrapper[4811]: I0122 09:42:44.375040 4811 generic.go:334] "Generic (PLEG): container finished" podID="0c35fa6c-cc25-4ad6-ba73-fc1893155a66" containerID="b7d820790c88aec15d226d171e3d7b840afb74b475a9ada3f504a1a634a6576b" exitCode=0 Jan 22 09:42:44 crc kubenswrapper[4811]: I0122 09:42:44.375091 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zvxsg" event={"ID":"0c35fa6c-cc25-4ad6-ba73-fc1893155a66","Type":"ContainerDied","Data":"b7d820790c88aec15d226d171e3d7b840afb74b475a9ada3f504a1a634a6576b"} Jan 22 09:42:45 crc kubenswrapper[4811]: I0122 09:42:45.387151 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zvxsg" 
event={"ID":"0c35fa6c-cc25-4ad6-ba73-fc1893155a66","Type":"ContainerStarted","Data":"d5f7074d24cd0d04e84c7cc4089d721761893665d9be07d5466b4abf1ef1471b"} Jan 22 09:42:45 crc kubenswrapper[4811]: I0122 09:42:45.408170 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zvxsg" podStartSLOduration=2.91866828 podStartE2EDuration="7.40815547s" podCreationTimestamp="2026-01-22 09:42:38 +0000 UTC" firstStartedPulling="2026-01-22 09:42:40.34409124 +0000 UTC m=+2204.666278363" lastFinishedPulling="2026-01-22 09:42:44.83357843 +0000 UTC m=+2209.155765553" observedRunningTime="2026-01-22 09:42:45.40613045 +0000 UTC m=+2209.728317573" watchObservedRunningTime="2026-01-22 09:42:45.40815547 +0000 UTC m=+2209.730342593" Jan 22 09:42:49 crc kubenswrapper[4811]: I0122 09:42:49.220711 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zvxsg" Jan 22 09:42:49 crc kubenswrapper[4811]: I0122 09:42:49.221052 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zvxsg" Jan 22 09:42:49 crc kubenswrapper[4811]: I0122 09:42:49.421111 4811 generic.go:334] "Generic (PLEG): container finished" podID="3c16de7c-e366-4871-b006-d63a565fb17e" containerID="6e6513dec2f8faa11cc3a773e39eee32c767df7aff686776db91df74b364b0ef" exitCode=0 Jan 22 09:42:49 crc kubenswrapper[4811]: I0122 09:42:49.421198 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d" event={"ID":"3c16de7c-e366-4871-b006-d63a565fb17e","Type":"ContainerDied","Data":"6e6513dec2f8faa11cc3a773e39eee32c767df7aff686776db91df74b364b0ef"} Jan 22 09:42:50 crc kubenswrapper[4811]: I0122 09:42:50.253489 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zvxsg" podUID="0c35fa6c-cc25-4ad6-ba73-fc1893155a66" containerName="registry-server" probeResult="failure" output=< Jan 22 09:42:50 crc kubenswrapper[4811]: timeout: failed to connect service ":50051" within 1s Jan 22 09:42:50 crc kubenswrapper[4811]: > Jan 22 09:42:50 crc kubenswrapper[4811]: I0122 09:42:50.785555 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d" Jan 22 09:42:50 crc kubenswrapper[4811]: I0122 09:42:50.908756 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c16de7c-e366-4871-b006-d63a565fb17e-neutron-metadata-combined-ca-bundle\") pod \"3c16de7c-e366-4871-b006-d63a565fb17e\" (UID: \"3c16de7c-e366-4871-b006-d63a565fb17e\") " Jan 22 09:42:50 crc kubenswrapper[4811]: I0122 09:42:50.908974 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3c16de7c-e366-4871-b006-d63a565fb17e-ceph\") pod \"3c16de7c-e366-4871-b006-d63a565fb17e\" (UID: \"3c16de7c-e366-4871-b006-d63a565fb17e\") " Jan 22 09:42:50 crc kubenswrapper[4811]: I0122 09:42:50.909008 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c16de7c-e366-4871-b006-d63a565fb17e-inventory\") pod \"3c16de7c-e366-4871-b006-d63a565fb17e\" (UID: \"3c16de7c-e366-4871-b006-d63a565fb17e\") " Jan 22 09:42:50 crc kubenswrapper[4811]: I0122 09:42:50.909023 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3c16de7c-e366-4871-b006-d63a565fb17e-ssh-key-openstack-edpm-ipam\") pod \"3c16de7c-e366-4871-b006-d63a565fb17e\" (UID: \"3c16de7c-e366-4871-b006-d63a565fb17e\") " Jan 22 09:42:50 crc kubenswrapper[4811]: I0122 09:42:50.909054 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3c16de7c-e366-4871-b006-d63a565fb17e-nova-metadata-neutron-config-0\") pod \"3c16de7c-e366-4871-b006-d63a565fb17e\" (UID: \"3c16de7c-e366-4871-b006-d63a565fb17e\") " Jan 22 09:42:50 crc kubenswrapper[4811]: I0122 09:42:50.909069 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2xx5\" (UniqueName: \"kubernetes.io/projected/3c16de7c-e366-4871-b006-d63a565fb17e-kube-api-access-x2xx5\") pod \"3c16de7c-e366-4871-b006-d63a565fb17e\" (UID: \"3c16de7c-e366-4871-b006-d63a565fb17e\") " Jan 22 09:42:50 crc kubenswrapper[4811]: I0122 09:42:50.909134 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3c16de7c-e366-4871-b006-d63a565fb17e-neutron-ovn-metadata-agent-neutron-config-0\") pod \"3c16de7c-e366-4871-b006-d63a565fb17e\" (UID: \"3c16de7c-e366-4871-b006-d63a565fb17e\") " Jan 22 09:42:50 crc kubenswrapper[4811]: I0122 09:42:50.913460 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c16de7c-e366-4871-b006-d63a565fb17e-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "3c16de7c-e366-4871-b006-d63a565fb17e" (UID: "3c16de7c-e366-4871-b006-d63a565fb17e"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:42:50 crc kubenswrapper[4811]: I0122 09:42:50.913489 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c16de7c-e366-4871-b006-d63a565fb17e-ceph" (OuterVolumeSpecName: "ceph") pod "3c16de7c-e366-4871-b006-d63a565fb17e" (UID: "3c16de7c-e366-4871-b006-d63a565fb17e"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:42:50 crc kubenswrapper[4811]: I0122 09:42:50.915895 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c16de7c-e366-4871-b006-d63a565fb17e-kube-api-access-x2xx5" (OuterVolumeSpecName: "kube-api-access-x2xx5") pod "3c16de7c-e366-4871-b006-d63a565fb17e" (UID: "3c16de7c-e366-4871-b006-d63a565fb17e"). InnerVolumeSpecName "kube-api-access-x2xx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:42:50 crc kubenswrapper[4811]: I0122 09:42:50.930093 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c16de7c-e366-4871-b006-d63a565fb17e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3c16de7c-e366-4871-b006-d63a565fb17e" (UID: "3c16de7c-e366-4871-b006-d63a565fb17e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:42:50 crc kubenswrapper[4811]: I0122 09:42:50.930406 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c16de7c-e366-4871-b006-d63a565fb17e-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "3c16de7c-e366-4871-b006-d63a565fb17e" (UID: "3c16de7c-e366-4871-b006-d63a565fb17e"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:42:50 crc kubenswrapper[4811]: I0122 09:42:50.930841 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c16de7c-e366-4871-b006-d63a565fb17e-inventory" (OuterVolumeSpecName: "inventory") pod "3c16de7c-e366-4871-b006-d63a565fb17e" (UID: "3c16de7c-e366-4871-b006-d63a565fb17e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:42:50 crc kubenswrapper[4811]: I0122 09:42:50.931301 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c16de7c-e366-4871-b006-d63a565fb17e-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "3c16de7c-e366-4871-b006-d63a565fb17e" (UID: "3c16de7c-e366-4871-b006-d63a565fb17e"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:42:51 crc kubenswrapper[4811]: I0122 09:42:51.011182 4811 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c16de7c-e366-4871-b006-d63a565fb17e-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 09:42:51 crc kubenswrapper[4811]: I0122 09:42:51.011210 4811 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3c16de7c-e366-4871-b006-d63a565fb17e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 09:42:51 crc kubenswrapper[4811]: I0122 09:42:51.011222 4811 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3c16de7c-e366-4871-b006-d63a565fb17e-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 22 09:42:51 crc kubenswrapper[4811]: I0122 09:42:51.011232 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2xx5\" (UniqueName: \"kubernetes.io/projected/3c16de7c-e366-4871-b006-d63a565fb17e-kube-api-access-x2xx5\") on node \"crc\" DevicePath \"\"" Jan 22 09:42:51 crc kubenswrapper[4811]: I0122 09:42:51.011243 4811 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3c16de7c-e366-4871-b006-d63a565fb17e-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 22 09:42:51 crc kubenswrapper[4811]: I0122 09:42:51.011254 4811 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c16de7c-e366-4871-b006-d63a565fb17e-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:42:51 crc kubenswrapper[4811]: I0122 09:42:51.011265 4811 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3c16de7c-e366-4871-b006-d63a565fb17e-ceph\") on node \"crc\" DevicePath \"\"" Jan 22 09:42:51 crc kubenswrapper[4811]: I0122 09:42:51.438215 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d" event={"ID":"3c16de7c-e366-4871-b006-d63a565fb17e","Type":"ContainerDied","Data":"e91b7813bd8138247fe968c74bc051f1e4a865525c8dade314afc6ff76791a1c"} Jan 22 09:42:51 crc kubenswrapper[4811]: I0122 09:42:51.438259 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e91b7813bd8138247fe968c74bc051f1e4a865525c8dade314afc6ff76791a1c" Jan 22 09:42:51 crc kubenswrapper[4811]: I0122 09:42:51.438814 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d" Jan 22 09:42:51 crc kubenswrapper[4811]: I0122 09:42:51.521941 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hklz6"] Jan 22 09:42:51 crc kubenswrapper[4811]: E0122 09:42:51.522433 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c16de7c-e366-4871-b006-d63a565fb17e" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 22 09:42:51 crc kubenswrapper[4811]: I0122 09:42:51.522506 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c16de7c-e366-4871-b006-d63a565fb17e" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 22 09:42:51 crc kubenswrapper[4811]: I0122 09:42:51.522766 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c16de7c-e366-4871-b006-d63a565fb17e" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 22 09:42:51 crc kubenswrapper[4811]: I0122 09:42:51.523439 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hklz6" Jan 22 09:42:51 crc kubenswrapper[4811]: I0122 09:42:51.526655 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 09:42:51 crc kubenswrapper[4811]: I0122 09:42:51.526797 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 09:42:51 crc kubenswrapper[4811]: I0122 09:42:51.526896 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 22 09:42:51 crc kubenswrapper[4811]: I0122 09:42:51.526810 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 22 09:42:51 crc kubenswrapper[4811]: I0122 09:42:51.526843 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7cbs6" Jan 22 09:42:51 crc kubenswrapper[4811]: I0122 09:42:51.527061 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 09:42:51 crc kubenswrapper[4811]: I0122 09:42:51.538106 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hklz6"] Jan 22 09:42:51 crc kubenswrapper[4811]: I0122 09:42:51.621162 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0f4688b1-29e2-475b-80c0-63afbc3b1afa-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hklz6\" (UID: \"0f4688b1-29e2-475b-80c0-63afbc3b1afa\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hklz6" Jan 22 09:42:51 crc kubenswrapper[4811]: I0122 09:42:51.621210 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/0f4688b1-29e2-475b-80c0-63afbc3b1afa-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hklz6\" (UID: \"0f4688b1-29e2-475b-80c0-63afbc3b1afa\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hklz6" Jan 22 09:42:51 crc kubenswrapper[4811]: I0122 09:42:51.621408 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/0f4688b1-29e2-475b-80c0-63afbc3b1afa-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hklz6\" (UID: \"0f4688b1-29e2-475b-80c0-63afbc3b1afa\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hklz6" Jan 22 09:42:51 crc kubenswrapper[4811]: I0122 09:42:51.621575 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f4688b1-29e2-475b-80c0-63afbc3b1afa-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hklz6\" (UID: \"0f4688b1-29e2-475b-80c0-63afbc3b1afa\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hklz6" Jan 22 09:42:51 crc kubenswrapper[4811]: I0122 09:42:51.621734 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f4688b1-29e2-475b-80c0-63afbc3b1afa-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hklz6\" (UID: \"0f4688b1-29e2-475b-80c0-63afbc3b1afa\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hklz6" Jan 22 09:42:51 crc kubenswrapper[4811]: I0122 09:42:51.621762 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrcqq\" (UniqueName: \"kubernetes.io/projected/0f4688b1-29e2-475b-80c0-63afbc3b1afa-kube-api-access-vrcqq\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hklz6\" (UID: \"0f4688b1-29e2-475b-80c0-63afbc3b1afa\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hklz6" Jan 22 09:42:51 crc kubenswrapper[4811]: I0122 09:42:51.723473 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f4688b1-29e2-475b-80c0-63afbc3b1afa-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hklz6\" (UID: \"0f4688b1-29e2-475b-80c0-63afbc3b1afa\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hklz6" Jan 22 09:42:51 crc kubenswrapper[4811]: I0122 09:42:51.723516 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrcqq\" (UniqueName: \"kubernetes.io/projected/0f4688b1-29e2-475b-80c0-63afbc3b1afa-kube-api-access-vrcqq\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hklz6\" (UID: \"0f4688b1-29e2-475b-80c0-63afbc3b1afa\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hklz6" Jan 22 09:42:51 crc kubenswrapper[4811]: I0122 09:42:51.723634 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0f4688b1-29e2-475b-80c0-63afbc3b1afa-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hklz6\" (UID: \"0f4688b1-29e2-475b-80c0-63afbc3b1afa\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hklz6" Jan 22 09:42:51 crc kubenswrapper[4811]: I0122 09:42:51.723662 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/0f4688b1-29e2-475b-80c0-63afbc3b1afa-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hklz6\" (UID: \"0f4688b1-29e2-475b-80c0-63afbc3b1afa\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hklz6" Jan 22 09:42:51 crc kubenswrapper[4811]: I0122 09:42:51.723722 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/0f4688b1-29e2-475b-80c0-63afbc3b1afa-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hklz6\" (UID: \"0f4688b1-29e2-475b-80c0-63afbc3b1afa\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hklz6" Jan 22 09:42:51 crc kubenswrapper[4811]: I0122 09:42:51.723806 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f4688b1-29e2-475b-80c0-63afbc3b1afa-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hklz6\" (UID: \"0f4688b1-29e2-475b-80c0-63afbc3b1afa\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hklz6" Jan 22 09:42:51 crc kubenswrapper[4811]: I0122 09:42:51.728208 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f4688b1-29e2-475b-80c0-63afbc3b1afa-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hklz6\" (UID: \"0f4688b1-29e2-475b-80c0-63afbc3b1afa\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hklz6" Jan 22 09:42:51 crc kubenswrapper[4811]: I0122 09:42:51.728462 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f4688b1-29e2-475b-80c0-63afbc3b1afa-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hklz6\" (UID: \"0f4688b1-29e2-475b-80c0-63afbc3b1afa\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hklz6" Jan 22 09:42:51 crc kubenswrapper[4811]: I0122 09:42:51.728588 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f4688b1-29e2-475b-80c0-63afbc3b1afa-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hklz6\" (UID: \"0f4688b1-29e2-475b-80c0-63afbc3b1afa\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hklz6" Jan 22 09:42:51 crc kubenswrapper[4811]: I0122 09:42:51.728707 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0f4688b1-29e2-475b-80c0-63afbc3b1afa-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hklz6\" (UID: \"0f4688b1-29e2-475b-80c0-63afbc3b1afa\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hklz6" Jan 22 09:42:51 crc kubenswrapper[4811]: I0122 09:42:51.729309 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/0f4688b1-29e2-475b-80c0-63afbc3b1afa-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hklz6\" (UID: \"0f4688b1-29e2-475b-80c0-63afbc3b1afa\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hklz6" Jan 22 09:42:51 crc kubenswrapper[4811]: I0122 09:42:51.738013 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrcqq\" (UniqueName: \"kubernetes.io/projected/0f4688b1-29e2-475b-80c0-63afbc3b1afa-kube-api-access-vrcqq\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hklz6\" (UID: \"0f4688b1-29e2-475b-80c0-63afbc3b1afa\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hklz6" Jan 22 09:42:51 crc kubenswrapper[4811]: I0122 09:42:51.839683 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hklz6" Jan 22 09:42:52 crc kubenswrapper[4811]: I0122 09:42:52.273808 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hklz6"] Jan 22 09:42:52 crc kubenswrapper[4811]: I0122 09:42:52.445120 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hklz6" event={"ID":"0f4688b1-29e2-475b-80c0-63afbc3b1afa","Type":"ContainerStarted","Data":"c194f5e0976dbda241b3d188afedd77dd354345686938df7dcfb979d99bd61ab"} Jan 22 09:42:53 crc kubenswrapper[4811]: I0122 09:42:53.453794 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hklz6" event={"ID":"0f4688b1-29e2-475b-80c0-63afbc3b1afa","Type":"ContainerStarted","Data":"164ca8c08987f033cddda9c2fade3bc3e4ed7cf3f9b6f72799b73aed53786439"} Jan 22 09:42:53 crc kubenswrapper[4811]: I0122 09:42:53.474495 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hklz6" podStartSLOduration=1.995166792 podStartE2EDuration="2.474477772s" podCreationTimestamp="2026-01-22 09:42:51 +0000 UTC" firstStartedPulling="2026-01-22 09:42:52.278272644 +0000 UTC m=+2216.600459767" lastFinishedPulling="2026-01-22 09:42:52.757583624 +0000 UTC m=+2217.079770747" observedRunningTime="2026-01-22 09:42:53.469378343 +0000 UTC m=+2217.791565467" watchObservedRunningTime="2026-01-22 09:42:53.474477772 +0000 UTC m=+2217.796664895" Jan 22 09:42:58 crc kubenswrapper[4811]: I0122 09:42:58.992400 4811 scope.go:117] "RemoveContainer" containerID="a3eaf3fca355ac62b01468032411a18e7dabb820be02232f4becb3d78ad2b686" Jan 22 09:42:58 crc kubenswrapper[4811]: E0122 09:42:58.992861 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:42:59 crc kubenswrapper[4811]: I0122 09:42:59.255487 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zvxsg" Jan 22 09:42:59 crc kubenswrapper[4811]: I0122 09:42:59.294519 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zvxsg" Jan 22 09:42:59 crc kubenswrapper[4811]: I0122 09:42:59.485797 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zvxsg"] Jan 22 09:43:00 crc kubenswrapper[4811]: I0122 09:43:00.501055 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zvxsg" podUID="0c35fa6c-cc25-4ad6-ba73-fc1893155a66" containerName="registry-server" containerID="cri-o://d5f7074d24cd0d04e84c7cc4089d721761893665d9be07d5466b4abf1ef1471b" gracePeriod=2 Jan 22 09:43:00 crc kubenswrapper[4811]: I0122 09:43:00.914756 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zvxsg" Jan 22 09:43:00 crc kubenswrapper[4811]: I0122 09:43:00.998269 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c35fa6c-cc25-4ad6-ba73-fc1893155a66-utilities\") pod \"0c35fa6c-cc25-4ad6-ba73-fc1893155a66\" (UID: \"0c35fa6c-cc25-4ad6-ba73-fc1893155a66\") " Jan 22 09:43:00 crc kubenswrapper[4811]: I0122 09:43:00.998305 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggkdr\" (UniqueName: \"kubernetes.io/projected/0c35fa6c-cc25-4ad6-ba73-fc1893155a66-kube-api-access-ggkdr\") pod \"0c35fa6c-cc25-4ad6-ba73-fc1893155a66\" (UID: \"0c35fa6c-cc25-4ad6-ba73-fc1893155a66\") " Jan 22 09:43:00 crc kubenswrapper[4811]: I0122 09:43:00.998432 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c35fa6c-cc25-4ad6-ba73-fc1893155a66-catalog-content\") pod \"0c35fa6c-cc25-4ad6-ba73-fc1893155a66\" (UID: \"0c35fa6c-cc25-4ad6-ba73-fc1893155a66\") " Jan 22 09:43:00 crc kubenswrapper[4811]: I0122 09:43:00.998930 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c35fa6c-cc25-4ad6-ba73-fc1893155a66-utilities" (OuterVolumeSpecName: "utilities") pod "0c35fa6c-cc25-4ad6-ba73-fc1893155a66" (UID: "0c35fa6c-cc25-4ad6-ba73-fc1893155a66"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:43:00 crc kubenswrapper[4811]: I0122 09:43:00.999022 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c35fa6c-cc25-4ad6-ba73-fc1893155a66-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:43:01 crc kubenswrapper[4811]: I0122 09:43:01.002987 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c35fa6c-cc25-4ad6-ba73-fc1893155a66-kube-api-access-ggkdr" (OuterVolumeSpecName: "kube-api-access-ggkdr") pod "0c35fa6c-cc25-4ad6-ba73-fc1893155a66" (UID: "0c35fa6c-cc25-4ad6-ba73-fc1893155a66"). InnerVolumeSpecName "kube-api-access-ggkdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:43:01 crc kubenswrapper[4811]: I0122 09:43:01.090307 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c35fa6c-cc25-4ad6-ba73-fc1893155a66-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0c35fa6c-cc25-4ad6-ba73-fc1893155a66" (UID: "0c35fa6c-cc25-4ad6-ba73-fc1893155a66"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:43:01 crc kubenswrapper[4811]: I0122 09:43:01.101195 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggkdr\" (UniqueName: \"kubernetes.io/projected/0c35fa6c-cc25-4ad6-ba73-fc1893155a66-kube-api-access-ggkdr\") on node \"crc\" DevicePath \"\"" Jan 22 09:43:01 crc kubenswrapper[4811]: I0122 09:43:01.101219 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c35fa6c-cc25-4ad6-ba73-fc1893155a66-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:43:01 crc kubenswrapper[4811]: I0122 09:43:01.509306 4811 generic.go:334] "Generic (PLEG): container finished" podID="0c35fa6c-cc25-4ad6-ba73-fc1893155a66" containerID="d5f7074d24cd0d04e84c7cc4089d721761893665d9be07d5466b4abf1ef1471b" exitCode=0 Jan 22 09:43:01 crc kubenswrapper[4811]: I0122 09:43:01.509348 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zvxsg" event={"ID":"0c35fa6c-cc25-4ad6-ba73-fc1893155a66","Type":"ContainerDied","Data":"d5f7074d24cd0d04e84c7cc4089d721761893665d9be07d5466b4abf1ef1471b"} Jan 22 09:43:01 crc kubenswrapper[4811]: I0122 09:43:01.509376 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zvxsg" event={"ID":"0c35fa6c-cc25-4ad6-ba73-fc1893155a66","Type":"ContainerDied","Data":"54eaccc55a268b6611335f6ad340d2221b045ea5f7681f03c44b345f2c0bfb8d"} Jan 22 09:43:01 crc kubenswrapper[4811]: I0122 09:43:01.509392 4811 scope.go:117] "RemoveContainer" containerID="d5f7074d24cd0d04e84c7cc4089d721761893665d9be07d5466b4abf1ef1471b" Jan 22 09:43:01 crc kubenswrapper[4811]: I0122 09:43:01.509374 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zvxsg" Jan 22 09:43:01 crc kubenswrapper[4811]: I0122 09:43:01.533653 4811 scope.go:117] "RemoveContainer" containerID="b7d820790c88aec15d226d171e3d7b840afb74b475a9ada3f504a1a634a6576b" Jan 22 09:43:01 crc kubenswrapper[4811]: I0122 09:43:01.535534 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zvxsg"] Jan 22 09:43:01 crc kubenswrapper[4811]: I0122 09:43:01.543698 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zvxsg"] Jan 22 09:43:01 crc kubenswrapper[4811]: I0122 09:43:01.552061 4811 scope.go:117] "RemoveContainer" containerID="286951c3505a97514ecc0a8f4a9b36844af28aa840a21fd50570706555591cd3" Jan 22 09:43:01 crc kubenswrapper[4811]: I0122 09:43:01.579855 4811 scope.go:117] "RemoveContainer" containerID="d5f7074d24cd0d04e84c7cc4089d721761893665d9be07d5466b4abf1ef1471b" Jan 22 09:43:01 crc kubenswrapper[4811]: E0122 09:43:01.580429 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5f7074d24cd0d04e84c7cc4089d721761893665d9be07d5466b4abf1ef1471b\": container with ID starting with d5f7074d24cd0d04e84c7cc4089d721761893665d9be07d5466b4abf1ef1471b not found: ID does not exist" containerID="d5f7074d24cd0d04e84c7cc4089d721761893665d9be07d5466b4abf1ef1471b" Jan 22 09:43:01 crc kubenswrapper[4811]: I0122 09:43:01.580471 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5f7074d24cd0d04e84c7cc4089d721761893665d9be07d5466b4abf1ef1471b"} err="failed to get container status \"d5f7074d24cd0d04e84c7cc4089d721761893665d9be07d5466b4abf1ef1471b\": rpc error: code = NotFound desc = could not find container \"d5f7074d24cd0d04e84c7cc4089d721761893665d9be07d5466b4abf1ef1471b\": container with ID starting with d5f7074d24cd0d04e84c7cc4089d721761893665d9be07d5466b4abf1ef1471b not found: ID does not exist" Jan 22 09:43:01 crc kubenswrapper[4811]: I0122 09:43:01.580495 4811 scope.go:117] "RemoveContainer" containerID="b7d820790c88aec15d226d171e3d7b840afb74b475a9ada3f504a1a634a6576b" Jan 22 09:43:01 crc kubenswrapper[4811]: E0122 09:43:01.580963 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7d820790c88aec15d226d171e3d7b840afb74b475a9ada3f504a1a634a6576b\": container with ID starting with b7d820790c88aec15d226d171e3d7b840afb74b475a9ada3f504a1a634a6576b not found: ID does not exist" containerID="b7d820790c88aec15d226d171e3d7b840afb74b475a9ada3f504a1a634a6576b" Jan 22 09:43:01 crc kubenswrapper[4811]: I0122 09:43:01.580997 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7d820790c88aec15d226d171e3d7b840afb74b475a9ada3f504a1a634a6576b"} err="failed to get container status \"b7d820790c88aec15d226d171e3d7b840afb74b475a9ada3f504a1a634a6576b\": rpc error: code = NotFound desc = could not find container \"b7d820790c88aec15d226d171e3d7b840afb74b475a9ada3f504a1a634a6576b\": container with ID starting with b7d820790c88aec15d226d171e3d7b840afb74b475a9ada3f504a1a634a6576b not found: ID does not exist" Jan 22 09:43:01 crc kubenswrapper[4811]: I0122 09:43:01.581019 4811 scope.go:117] "RemoveContainer" containerID="286951c3505a97514ecc0a8f4a9b36844af28aa840a21fd50570706555591cd3" Jan 22 09:43:01 crc kubenswrapper[4811]: E0122 09:43:01.581329 4811 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"286951c3505a97514ecc0a8f4a9b36844af28aa840a21fd50570706555591cd3\": container with ID starting with 286951c3505a97514ecc0a8f4a9b36844af28aa840a21fd50570706555591cd3 not found: ID does not exist" containerID="286951c3505a97514ecc0a8f4a9b36844af28aa840a21fd50570706555591cd3" Jan 22 09:43:01 crc kubenswrapper[4811]: I0122 09:43:01.581376 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"286951c3505a97514ecc0a8f4a9b36844af28aa840a21fd50570706555591cd3"} err="failed to get container status \"286951c3505a97514ecc0a8f4a9b36844af28aa840a21fd50570706555591cd3\": rpc error: code = NotFound desc = could not find container \"286951c3505a97514ecc0a8f4a9b36844af28aa840a21fd50570706555591cd3\": container with ID starting with 286951c3505a97514ecc0a8f4a9b36844af28aa840a21fd50570706555591cd3 not found: ID does not exist" Jan 22 09:43:01 crc kubenswrapper[4811]: I0122 09:43:01.999809 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c35fa6c-cc25-4ad6-ba73-fc1893155a66" path="/var/lib/kubelet/pods/0c35fa6c-cc25-4ad6-ba73-fc1893155a66/volumes" Jan 22 09:43:12 crc kubenswrapper[4811]: I0122 09:43:12.992735 4811 scope.go:117] "RemoveContainer" containerID="a3eaf3fca355ac62b01468032411a18e7dabb820be02232f4becb3d78ad2b686" Jan 22 09:43:12 crc kubenswrapper[4811]: E0122 09:43:12.993185 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:43:27 crc kubenswrapper[4811]: I0122 09:43:27.991989 4811 scope.go:117] "RemoveContainer" containerID="a3eaf3fca355ac62b01468032411a18e7dabb820be02232f4becb3d78ad2b686" Jan 22 09:43:27 crc kubenswrapper[4811]: E0122 09:43:27.992762 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:43:42 crc kubenswrapper[4811]: I0122 09:43:42.992482 4811 scope.go:117] "RemoveContainer" containerID="a3eaf3fca355ac62b01468032411a18e7dabb820be02232f4becb3d78ad2b686" Jan 22 09:43:42 crc kubenswrapper[4811]: E0122 09:43:42.993108 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:43:56 crc kubenswrapper[4811]: I0122 09:43:56.992278 4811 scope.go:117] "RemoveContainer" containerID="a3eaf3fca355ac62b01468032411a18e7dabb820be02232f4becb3d78ad2b686" Jan 22 09:43:56 crc kubenswrapper[4811]: E0122 09:43:56.992874 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:44:09 crc kubenswrapper[4811]: I0122 09:44:09.992544 4811 scope.go:117] "RemoveContainer" containerID="a3eaf3fca355ac62b01468032411a18e7dabb820be02232f4becb3d78ad2b686" Jan 22 09:44:09 crc kubenswrapper[4811]: E0122 09:44:09.993262 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:44:23 crc kubenswrapper[4811]: I0122 09:44:23.992251 4811 scope.go:117] "RemoveContainer" containerID="a3eaf3fca355ac62b01468032411a18e7dabb820be02232f4becb3d78ad2b686" Jan 22 09:44:23 crc kubenswrapper[4811]: E0122 09:44:23.993335 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:44:37 crc kubenswrapper[4811]: I0122 09:44:37.992321 4811 scope.go:117] "RemoveContainer" containerID="a3eaf3fca355ac62b01468032411a18e7dabb820be02232f4becb3d78ad2b686" Jan 22 09:44:37 crc kubenswrapper[4811]: E0122 09:44:37.993182 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:44:51 crc kubenswrapper[4811]: I0122 09:44:51.992034 4811 scope.go:117] "RemoveContainer" containerID="a3eaf3fca355ac62b01468032411a18e7dabb820be02232f4becb3d78ad2b686" Jan 22 09:44:51 crc kubenswrapper[4811]: E0122 09:44:51.992592 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:45:00 crc kubenswrapper[4811]: I0122 09:45:00.137841 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484585-vvzxm"] Jan 22 09:45:00 crc kubenswrapper[4811]: E0122 09:45:00.139204 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c35fa6c-cc25-4ad6-ba73-fc1893155a66" containerName="extract-utilities" Jan 22 09:45:00 crc kubenswrapper[4811]: I0122 09:45:00.139284 4811 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0c35fa6c-cc25-4ad6-ba73-fc1893155a66" containerName="extract-utilities" Jan 22 09:45:00 crc kubenswrapper[4811]: E0122 09:45:00.139360 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c35fa6c-cc25-4ad6-ba73-fc1893155a66" containerName="extract-content" Jan 22 09:45:00 crc kubenswrapper[4811]: I0122 09:45:00.139424 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c35fa6c-cc25-4ad6-ba73-fc1893155a66" containerName="extract-content" Jan 22 09:45:00 crc kubenswrapper[4811]: E0122 09:45:00.139491 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c35fa6c-cc25-4ad6-ba73-fc1893155a66" containerName="registry-server" Jan 22 09:45:00 crc kubenswrapper[4811]: I0122 09:45:00.139539 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c35fa6c-cc25-4ad6-ba73-fc1893155a66" containerName="registry-server" Jan 22 09:45:00 crc kubenswrapper[4811]: I0122 09:45:00.139807 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c35fa6c-cc25-4ad6-ba73-fc1893155a66" containerName="registry-server" Jan 22 09:45:00 crc kubenswrapper[4811]: I0122 09:45:00.140501 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-vvzxm" Jan 22 09:45:00 crc kubenswrapper[4811]: I0122 09:45:00.143077 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 22 09:45:00 crc kubenswrapper[4811]: I0122 09:45:00.143366 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 22 09:45:00 crc kubenswrapper[4811]: I0122 09:45:00.153470 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484585-vvzxm"] Jan 22 09:45:00 crc kubenswrapper[4811]: I0122 09:45:00.218072 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6765a365-6078-4673-a562-ae290f624f9b-config-volume\") pod \"collect-profiles-29484585-vvzxm\" (UID: \"6765a365-6078-4673-a562-ae290f624f9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-vvzxm" Jan 22 09:45:00 crc kubenswrapper[4811]: I0122 09:45:00.218318 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w67qh\" (UniqueName: \"kubernetes.io/projected/6765a365-6078-4673-a562-ae290f624f9b-kube-api-access-w67qh\") pod \"collect-profiles-29484585-vvzxm\" (UID: \"6765a365-6078-4673-a562-ae290f624f9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-vvzxm" Jan 22 09:45:00 crc kubenswrapper[4811]: I0122 09:45:00.218439 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6765a365-6078-4673-a562-ae290f624f9b-secret-volume\") pod \"collect-profiles-29484585-vvzxm\" (UID: \"6765a365-6078-4673-a562-ae290f624f9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-vvzxm" Jan 22 09:45:00 crc kubenswrapper[4811]: I0122 09:45:00.321055 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6765a365-6078-4673-a562-ae290f624f9b-config-volume\") pod \"collect-profiles-29484585-vvzxm\" (UID: 
\"6765a365-6078-4673-a562-ae290f624f9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-vvzxm" Jan 22 09:45:00 crc kubenswrapper[4811]: I0122 09:45:00.321394 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w67qh\" (UniqueName: \"kubernetes.io/projected/6765a365-6078-4673-a562-ae290f624f9b-kube-api-access-w67qh\") pod \"collect-profiles-29484585-vvzxm\" (UID: \"6765a365-6078-4673-a562-ae290f624f9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-vvzxm" Jan 22 09:45:00 crc kubenswrapper[4811]: I0122 09:45:00.321529 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6765a365-6078-4673-a562-ae290f624f9b-secret-volume\") pod \"collect-profiles-29484585-vvzxm\" (UID: \"6765a365-6078-4673-a562-ae290f624f9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-vvzxm" Jan 22 09:45:00 crc kubenswrapper[4811]: I0122 09:45:00.321973 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6765a365-6078-4673-a562-ae290f624f9b-config-volume\") pod \"collect-profiles-29484585-vvzxm\" (UID: \"6765a365-6078-4673-a562-ae290f624f9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-vvzxm" Jan 22 09:45:00 crc kubenswrapper[4811]: I0122 09:45:00.331501 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6765a365-6078-4673-a562-ae290f624f9b-secret-volume\") pod \"collect-profiles-29484585-vvzxm\" (UID: \"6765a365-6078-4673-a562-ae290f624f9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-vvzxm" Jan 22 09:45:00 crc kubenswrapper[4811]: I0122 09:45:00.340729 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w67qh\" (UniqueName: \"kubernetes.io/projected/6765a365-6078-4673-a562-ae290f624f9b-kube-api-access-w67qh\") pod \"collect-profiles-29484585-vvzxm\" (UID: \"6765a365-6078-4673-a562-ae290f624f9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-vvzxm" Jan 22 09:45:00 crc kubenswrapper[4811]: I0122 09:45:00.459145 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-vvzxm" Jan 22 09:45:00 crc kubenswrapper[4811]: I0122 09:45:00.875831 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484585-vvzxm"] Jan 22 09:45:01 crc kubenswrapper[4811]: I0122 09:45:01.283422 4811 generic.go:334] "Generic (PLEG): container finished" podID="6765a365-6078-4673-a562-ae290f624f9b" containerID="d4a3a9e8ac181d2c8a9d6f4f54bde1b6b4297a0cc7439178031d18dfa5576151" exitCode=0 Jan 22 09:45:01 crc kubenswrapper[4811]: I0122 09:45:01.283725 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-vvzxm" event={"ID":"6765a365-6078-4673-a562-ae290f624f9b","Type":"ContainerDied","Data":"d4a3a9e8ac181d2c8a9d6f4f54bde1b6b4297a0cc7439178031d18dfa5576151"} Jan 22 09:45:01 crc kubenswrapper[4811]: I0122 09:45:01.283756 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-vvzxm" event={"ID":"6765a365-6078-4673-a562-ae290f624f9b","Type":"ContainerStarted","Data":"f5c3be5fe178f1d5436f6c624397eec94652b4f48f491182ded9396447150d27"} Jan 22 09:45:02 crc kubenswrapper[4811]: I0122 09:45:02.615296 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-vvzxm" Jan 22 09:45:02 crc kubenswrapper[4811]: I0122 09:45:02.674210 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6765a365-6078-4673-a562-ae290f624f9b-config-volume\") pod \"6765a365-6078-4673-a562-ae290f624f9b\" (UID: \"6765a365-6078-4673-a562-ae290f624f9b\") " Jan 22 09:45:02 crc kubenswrapper[4811]: I0122 09:45:02.674850 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6765a365-6078-4673-a562-ae290f624f9b-secret-volume\") pod \"6765a365-6078-4673-a562-ae290f624f9b\" (UID: \"6765a365-6078-4673-a562-ae290f624f9b\") " Jan 22 09:45:02 crc kubenswrapper[4811]: I0122 09:45:02.675050 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w67qh\" (UniqueName: \"kubernetes.io/projected/6765a365-6078-4673-a562-ae290f624f9b-kube-api-access-w67qh\") pod \"6765a365-6078-4673-a562-ae290f624f9b\" (UID: \"6765a365-6078-4673-a562-ae290f624f9b\") " Jan 22 09:45:02 crc kubenswrapper[4811]: I0122 09:45:02.675048 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6765a365-6078-4673-a562-ae290f624f9b-config-volume" (OuterVolumeSpecName: "config-volume") pod "6765a365-6078-4673-a562-ae290f624f9b" (UID: "6765a365-6078-4673-a562-ae290f624f9b"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:45:02 crc kubenswrapper[4811]: I0122 09:45:02.676187 4811 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6765a365-6078-4673-a562-ae290f624f9b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 09:45:02 crc kubenswrapper[4811]: I0122 09:45:02.681042 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6765a365-6078-4673-a562-ae290f624f9b-kube-api-access-w67qh" (OuterVolumeSpecName: "kube-api-access-w67qh") pod "6765a365-6078-4673-a562-ae290f624f9b" (UID: "6765a365-6078-4673-a562-ae290f624f9b"). InnerVolumeSpecName "kube-api-access-w67qh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:45:02 crc kubenswrapper[4811]: I0122 09:45:02.684742 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6765a365-6078-4673-a562-ae290f624f9b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6765a365-6078-4673-a562-ae290f624f9b" (UID: "6765a365-6078-4673-a562-ae290f624f9b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:45:02 crc kubenswrapper[4811]: I0122 09:45:02.777981 4811 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6765a365-6078-4673-a562-ae290f624f9b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 09:45:02 crc kubenswrapper[4811]: I0122 09:45:02.778119 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w67qh\" (UniqueName: \"kubernetes.io/projected/6765a365-6078-4673-a562-ae290f624f9b-kube-api-access-w67qh\") on node \"crc\" DevicePath \"\"" Jan 22 09:45:02 crc kubenswrapper[4811]: I0122 09:45:02.993376 4811 scope.go:117] "RemoveContainer" containerID="a3eaf3fca355ac62b01468032411a18e7dabb820be02232f4becb3d78ad2b686" Jan 22 09:45:02 crc kubenswrapper[4811]: E0122 09:45:02.994544 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:45:03 crc kubenswrapper[4811]: I0122 09:45:03.301303 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-vvzxm" event={"ID":"6765a365-6078-4673-a562-ae290f624f9b","Type":"ContainerDied","Data":"f5c3be5fe178f1d5436f6c624397eec94652b4f48f491182ded9396447150d27"} Jan 22 09:45:03 crc kubenswrapper[4811]: I0122 09:45:03.301338 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484585-vvzxm" Jan 22 09:45:03 crc kubenswrapper[4811]: I0122 09:45:03.301351 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5c3be5fe178f1d5436f6c624397eec94652b4f48f491182ded9396447150d27" Jan 22 09:45:03 crc kubenswrapper[4811]: I0122 09:45:03.687254 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484540-kbb7d"] Jan 22 09:45:03 crc kubenswrapper[4811]: I0122 09:45:03.695837 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484540-kbb7d"] Jan 22 09:45:04 crc kubenswrapper[4811]: I0122 09:45:04.001485 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80663d92-2281-4a3d-9232-f1fc19873d88" path="/var/lib/kubelet/pods/80663d92-2281-4a3d-9232-f1fc19873d88/volumes" Jan 22 09:45:13 crc kubenswrapper[4811]: I0122 09:45:13.992136 4811 scope.go:117] "RemoveContainer" containerID="a3eaf3fca355ac62b01468032411a18e7dabb820be02232f4becb3d78ad2b686" Jan 22 09:45:13 crc kubenswrapper[4811]: E0122 09:45:13.992895 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:45:23 crc kubenswrapper[4811]: I0122 09:45:23.306055 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-l86kt"] Jan 22 09:45:23 crc kubenswrapper[4811]: E0122 09:45:23.306831 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6765a365-6078-4673-a562-ae290f624f9b" containerName="collect-profiles" Jan 22 09:45:23 crc kubenswrapper[4811]: I0122 09:45:23.306846 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="6765a365-6078-4673-a562-ae290f624f9b" containerName="collect-profiles" Jan 22 09:45:23 crc kubenswrapper[4811]: I0122 09:45:23.307029 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="6765a365-6078-4673-a562-ae290f624f9b" containerName="collect-profiles" Jan 22 09:45:23 crc kubenswrapper[4811]: I0122 09:45:23.308203 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l86kt" Jan 22 09:45:23 crc kubenswrapper[4811]: I0122 09:45:23.320952 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l86kt"] Jan 22 09:45:23 crc kubenswrapper[4811]: I0122 09:45:23.396149 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b56661d3-d071-40b7-ade8-2a140c63c697-catalog-content\") pod \"community-operators-l86kt\" (UID: \"b56661d3-d071-40b7-ade8-2a140c63c697\") " pod="openshift-marketplace/community-operators-l86kt" Jan 22 09:45:23 crc kubenswrapper[4811]: I0122 09:45:23.396226 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b56661d3-d071-40b7-ade8-2a140c63c697-utilities\") pod \"community-operators-l86kt\" (UID: \"b56661d3-d071-40b7-ade8-2a140c63c697\") " pod="openshift-marketplace/community-operators-l86kt" Jan 22 09:45:23 crc kubenswrapper[4811]: I0122 09:45:23.396476 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xc2d\" (UniqueName: \"kubernetes.io/projected/b56661d3-d071-40b7-ade8-2a140c63c697-kube-api-access-4xc2d\") pod \"community-operators-l86kt\" (UID: \"b56661d3-d071-40b7-ade8-2a140c63c697\") " pod="openshift-marketplace/community-operators-l86kt" Jan 22 09:45:23 crc kubenswrapper[4811]: I0122 09:45:23.497742 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b56661d3-d071-40b7-ade8-2a140c63c697-catalog-content\") pod \"community-operators-l86kt\" (UID: \"b56661d3-d071-40b7-ade8-2a140c63c697\") " pod="openshift-marketplace/community-operators-l86kt" Jan 22 09:45:23 crc kubenswrapper[4811]: I0122 09:45:23.497800 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b56661d3-d071-40b7-ade8-2a140c63c697-utilities\") pod \"community-operators-l86kt\" (UID: \"b56661d3-d071-40b7-ade8-2a140c63c697\") " pod="openshift-marketplace/community-operators-l86kt" Jan 22 09:45:23 crc kubenswrapper[4811]: I0122 09:45:23.497863 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xc2d\" (UniqueName: \"kubernetes.io/projected/b56661d3-d071-40b7-ade8-2a140c63c697-kube-api-access-4xc2d\") pod \"community-operators-l86kt\" (UID: \"b56661d3-d071-40b7-ade8-2a140c63c697\") " pod="openshift-marketplace/community-operators-l86kt" Jan 22 09:45:23 crc kubenswrapper[4811]: I0122 09:45:23.498196 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b56661d3-d071-40b7-ade8-2a140c63c697-catalog-content\") pod \"community-operators-l86kt\" (UID: \"b56661d3-d071-40b7-ade8-2a140c63c697\") " pod="openshift-marketplace/community-operators-l86kt" Jan 22 09:45:23 crc kubenswrapper[4811]: I0122 09:45:23.498237 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b56661d3-d071-40b7-ade8-2a140c63c697-utilities\") pod \"community-operators-l86kt\" (UID: \"b56661d3-d071-40b7-ade8-2a140c63c697\") " pod="openshift-marketplace/community-operators-l86kt" Jan 22 09:45:23 crc kubenswrapper[4811]: I0122 09:45:23.514222 4811 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4xc2d\" (UniqueName: \"kubernetes.io/projected/b56661d3-d071-40b7-ade8-2a140c63c697-kube-api-access-4xc2d\") pod \"community-operators-l86kt\" (UID: \"b56661d3-d071-40b7-ade8-2a140c63c697\") " pod="openshift-marketplace/community-operators-l86kt" Jan 22 09:45:23 crc kubenswrapper[4811]: I0122 09:45:23.629277 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l86kt" Jan 22 09:45:24 crc kubenswrapper[4811]: W0122 09:45:24.108543 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb56661d3_d071_40b7_ade8_2a140c63c697.slice/crio-f1674e30909408667a94f8ce0456ab1b452ea2d751473ba9ac5830719b954420 WatchSource:0}: Error finding container f1674e30909408667a94f8ce0456ab1b452ea2d751473ba9ac5830719b954420: Status 404 returned error can't find the container with id f1674e30909408667a94f8ce0456ab1b452ea2d751473ba9ac5830719b954420 Jan 22 09:45:24 crc kubenswrapper[4811]: I0122 09:45:24.109166 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l86kt"] Jan 22 09:45:24 crc kubenswrapper[4811]: I0122 09:45:24.453365 4811 generic.go:334] "Generic (PLEG): container finished" podID="b56661d3-d071-40b7-ade8-2a140c63c697" containerID="cddf6d3d34b7bd71a87dc61e0938d359f0ec2194927edaae13d2fcefdba392fc" exitCode=0 Jan 22 09:45:24 crc kubenswrapper[4811]: I0122 09:45:24.453514 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l86kt" event={"ID":"b56661d3-d071-40b7-ade8-2a140c63c697","Type":"ContainerDied","Data":"cddf6d3d34b7bd71a87dc61e0938d359f0ec2194927edaae13d2fcefdba392fc"} Jan 22 09:45:24 crc kubenswrapper[4811]: I0122 09:45:24.453569 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l86kt" event={"ID":"b56661d3-d071-40b7-ade8-2a140c63c697","Type":"ContainerStarted","Data":"f1674e30909408667a94f8ce0456ab1b452ea2d751473ba9ac5830719b954420"} Jan 22 09:45:25 crc kubenswrapper[4811]: I0122 09:45:25.462858 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l86kt" event={"ID":"b56661d3-d071-40b7-ade8-2a140c63c697","Type":"ContainerStarted","Data":"e96f3e7660edfd7e2f0fd4275e7e2c6443c633d0e3cdab097872a78081cffe98"} Jan 22 09:45:26 crc kubenswrapper[4811]: I0122 09:45:26.473704 4811 generic.go:334] "Generic (PLEG): container finished" podID="b56661d3-d071-40b7-ade8-2a140c63c697" containerID="e96f3e7660edfd7e2f0fd4275e7e2c6443c633d0e3cdab097872a78081cffe98" exitCode=0 Jan 22 09:45:26 crc kubenswrapper[4811]: I0122 09:45:26.473748 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l86kt" event={"ID":"b56661d3-d071-40b7-ade8-2a140c63c697","Type":"ContainerDied","Data":"e96f3e7660edfd7e2f0fd4275e7e2c6443c633d0e3cdab097872a78081cffe98"} Jan 22 09:45:27 crc kubenswrapper[4811]: I0122 09:45:27.481315 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l86kt" event={"ID":"b56661d3-d071-40b7-ade8-2a140c63c697","Type":"ContainerStarted","Data":"755666131ea2857a6695005738c9edd66add709ad1900aeaf0248101b01bd8a6"} Jan 22 09:45:27 crc kubenswrapper[4811]: I0122 09:45:27.499700 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-l86kt" 
podStartSLOduration=1.951964059 podStartE2EDuration="4.499680031s" podCreationTimestamp="2026-01-22 09:45:23 +0000 UTC" firstStartedPulling="2026-01-22 09:45:24.454936823 +0000 UTC m=+2368.777123946" lastFinishedPulling="2026-01-22 09:45:27.002652795 +0000 UTC m=+2371.324839918" observedRunningTime="2026-01-22 09:45:27.496549766 +0000 UTC m=+2371.818736890" watchObservedRunningTime="2026-01-22 09:45:27.499680031 +0000 UTC m=+2371.821867154" Jan 22 09:45:27 crc kubenswrapper[4811]: I0122 09:45:27.992319 4811 scope.go:117] "RemoveContainer" containerID="a3eaf3fca355ac62b01468032411a18e7dabb820be02232f4becb3d78ad2b686" Jan 22 09:45:27 crc kubenswrapper[4811]: E0122 09:45:27.992608 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:45:33 crc kubenswrapper[4811]: I0122 09:45:33.630044 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-l86kt" Jan 22 09:45:33 crc kubenswrapper[4811]: I0122 09:45:33.630367 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-l86kt" Jan 22 09:45:33 crc kubenswrapper[4811]: I0122 09:45:33.664196 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-l86kt" Jan 22 09:45:34 crc kubenswrapper[4811]: I0122 09:45:34.559661 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-l86kt" Jan 22 09:45:34 crc kubenswrapper[4811]: I0122 09:45:34.601259 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l86kt"] Jan 22 09:45:36 crc kubenswrapper[4811]: I0122 09:45:36.540042 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-l86kt" podUID="b56661d3-d071-40b7-ade8-2a140c63c697" containerName="registry-server" containerID="cri-o://755666131ea2857a6695005738c9edd66add709ad1900aeaf0248101b01bd8a6" gracePeriod=2 Jan 22 09:45:36 crc kubenswrapper[4811]: I0122 09:45:36.956983 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l86kt" Jan 22 09:45:37 crc kubenswrapper[4811]: I0122 09:45:37.062460 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b56661d3-d071-40b7-ade8-2a140c63c697-catalog-content\") pod \"b56661d3-d071-40b7-ade8-2a140c63c697\" (UID: \"b56661d3-d071-40b7-ade8-2a140c63c697\") " Jan 22 09:45:37 crc kubenswrapper[4811]: I0122 09:45:37.062698 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b56661d3-d071-40b7-ade8-2a140c63c697-utilities\") pod \"b56661d3-d071-40b7-ade8-2a140c63c697\" (UID: \"b56661d3-d071-40b7-ade8-2a140c63c697\") " Jan 22 09:45:37 crc kubenswrapper[4811]: I0122 09:45:37.063649 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b56661d3-d071-40b7-ade8-2a140c63c697-utilities" (OuterVolumeSpecName: "utilities") pod "b56661d3-d071-40b7-ade8-2a140c63c697" (UID: "b56661d3-d071-40b7-ade8-2a140c63c697"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:45:37 crc kubenswrapper[4811]: I0122 09:45:37.063685 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xc2d\" (UniqueName: \"kubernetes.io/projected/b56661d3-d071-40b7-ade8-2a140c63c697-kube-api-access-4xc2d\") pod \"b56661d3-d071-40b7-ade8-2a140c63c697\" (UID: \"b56661d3-d071-40b7-ade8-2a140c63c697\") " Jan 22 09:45:37 crc kubenswrapper[4811]: I0122 09:45:37.076007 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b56661d3-d071-40b7-ade8-2a140c63c697-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:45:37 crc kubenswrapper[4811]: I0122 09:45:37.085773 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b56661d3-d071-40b7-ade8-2a140c63c697-kube-api-access-4xc2d" (OuterVolumeSpecName: "kube-api-access-4xc2d") pod "b56661d3-d071-40b7-ade8-2a140c63c697" (UID: "b56661d3-d071-40b7-ade8-2a140c63c697"). InnerVolumeSpecName "kube-api-access-4xc2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:45:37 crc kubenswrapper[4811]: I0122 09:45:37.138053 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b56661d3-d071-40b7-ade8-2a140c63c697-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b56661d3-d071-40b7-ade8-2a140c63c697" (UID: "b56661d3-d071-40b7-ade8-2a140c63c697"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:45:37 crc kubenswrapper[4811]: I0122 09:45:37.178388 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xc2d\" (UniqueName: \"kubernetes.io/projected/b56661d3-d071-40b7-ade8-2a140c63c697-kube-api-access-4xc2d\") on node \"crc\" DevicePath \"\"" Jan 22 09:45:37 crc kubenswrapper[4811]: I0122 09:45:37.178417 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b56661d3-d071-40b7-ade8-2a140c63c697-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:45:37 crc kubenswrapper[4811]: I0122 09:45:37.550704 4811 generic.go:334] "Generic (PLEG): container finished" podID="b56661d3-d071-40b7-ade8-2a140c63c697" containerID="755666131ea2857a6695005738c9edd66add709ad1900aeaf0248101b01bd8a6" exitCode=0 Jan 22 09:45:37 crc kubenswrapper[4811]: I0122 09:45:37.550802 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l86kt" Jan 22 09:45:37 crc kubenswrapper[4811]: I0122 09:45:37.550803 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l86kt" event={"ID":"b56661d3-d071-40b7-ade8-2a140c63c697","Type":"ContainerDied","Data":"755666131ea2857a6695005738c9edd66add709ad1900aeaf0248101b01bd8a6"} Jan 22 09:45:37 crc kubenswrapper[4811]: I0122 09:45:37.551257 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l86kt" event={"ID":"b56661d3-d071-40b7-ade8-2a140c63c697","Type":"ContainerDied","Data":"f1674e30909408667a94f8ce0456ab1b452ea2d751473ba9ac5830719b954420"} Jan 22 09:45:37 crc kubenswrapper[4811]: I0122 09:45:37.551281 4811 scope.go:117] "RemoveContainer" containerID="755666131ea2857a6695005738c9edd66add709ad1900aeaf0248101b01bd8a6" Jan 22 09:45:37 crc kubenswrapper[4811]: I0122 09:45:37.574224 4811 scope.go:117] "RemoveContainer" containerID="e96f3e7660edfd7e2f0fd4275e7e2c6443c633d0e3cdab097872a78081cffe98" Jan 22 09:45:37 crc kubenswrapper[4811]: I0122 09:45:37.585214 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l86kt"] Jan 22 09:45:37 crc kubenswrapper[4811]: I0122 09:45:37.590609 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-l86kt"] Jan 22 09:45:37 crc kubenswrapper[4811]: I0122 09:45:37.600074 4811 scope.go:117] "RemoveContainer" containerID="cddf6d3d34b7bd71a87dc61e0938d359f0ec2194927edaae13d2fcefdba392fc" Jan 22 09:45:37 crc kubenswrapper[4811]: I0122 09:45:37.627099 4811 scope.go:117] "RemoveContainer" containerID="755666131ea2857a6695005738c9edd66add709ad1900aeaf0248101b01bd8a6" Jan 22 09:45:37 crc kubenswrapper[4811]: E0122 09:45:37.627913 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"755666131ea2857a6695005738c9edd66add709ad1900aeaf0248101b01bd8a6\": container with ID starting with 755666131ea2857a6695005738c9edd66add709ad1900aeaf0248101b01bd8a6 not found: ID does not exist" containerID="755666131ea2857a6695005738c9edd66add709ad1900aeaf0248101b01bd8a6" Jan 22 09:45:37 crc kubenswrapper[4811]: I0122 09:45:37.627970 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"755666131ea2857a6695005738c9edd66add709ad1900aeaf0248101b01bd8a6"} err="failed to get container status 
\"755666131ea2857a6695005738c9edd66add709ad1900aeaf0248101b01bd8a6\": rpc error: code = NotFound desc = could not find container \"755666131ea2857a6695005738c9edd66add709ad1900aeaf0248101b01bd8a6\": container with ID starting with 755666131ea2857a6695005738c9edd66add709ad1900aeaf0248101b01bd8a6 not found: ID does not exist" Jan 22 09:45:37 crc kubenswrapper[4811]: I0122 09:45:37.627998 4811 scope.go:117] "RemoveContainer" containerID="e96f3e7660edfd7e2f0fd4275e7e2c6443c633d0e3cdab097872a78081cffe98" Jan 22 09:45:37 crc kubenswrapper[4811]: E0122 09:45:37.628352 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e96f3e7660edfd7e2f0fd4275e7e2c6443c633d0e3cdab097872a78081cffe98\": container with ID starting with e96f3e7660edfd7e2f0fd4275e7e2c6443c633d0e3cdab097872a78081cffe98 not found: ID does not exist" containerID="e96f3e7660edfd7e2f0fd4275e7e2c6443c633d0e3cdab097872a78081cffe98" Jan 22 09:45:37 crc kubenswrapper[4811]: I0122 09:45:37.628402 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e96f3e7660edfd7e2f0fd4275e7e2c6443c633d0e3cdab097872a78081cffe98"} err="failed to get container status \"e96f3e7660edfd7e2f0fd4275e7e2c6443c633d0e3cdab097872a78081cffe98\": rpc error: code = NotFound desc = could not find container \"e96f3e7660edfd7e2f0fd4275e7e2c6443c633d0e3cdab097872a78081cffe98\": container with ID starting with e96f3e7660edfd7e2f0fd4275e7e2c6443c633d0e3cdab097872a78081cffe98 not found: ID does not exist" Jan 22 09:45:37 crc kubenswrapper[4811]: I0122 09:45:37.628438 4811 scope.go:117] "RemoveContainer" containerID="cddf6d3d34b7bd71a87dc61e0938d359f0ec2194927edaae13d2fcefdba392fc" Jan 22 09:45:37 crc kubenswrapper[4811]: E0122 09:45:37.628955 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cddf6d3d34b7bd71a87dc61e0938d359f0ec2194927edaae13d2fcefdba392fc\": container with ID starting with cddf6d3d34b7bd71a87dc61e0938d359f0ec2194927edaae13d2fcefdba392fc not found: ID does not exist" containerID="cddf6d3d34b7bd71a87dc61e0938d359f0ec2194927edaae13d2fcefdba392fc" Jan 22 09:45:37 crc kubenswrapper[4811]: I0122 09:45:37.629007 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cddf6d3d34b7bd71a87dc61e0938d359f0ec2194927edaae13d2fcefdba392fc"} err="failed to get container status \"cddf6d3d34b7bd71a87dc61e0938d359f0ec2194927edaae13d2fcefdba392fc\": rpc error: code = NotFound desc = could not find container \"cddf6d3d34b7bd71a87dc61e0938d359f0ec2194927edaae13d2fcefdba392fc\": container with ID starting with cddf6d3d34b7bd71a87dc61e0938d359f0ec2194927edaae13d2fcefdba392fc not found: ID does not exist" Jan 22 09:45:38 crc kubenswrapper[4811]: I0122 09:45:38.000796 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b56661d3-d071-40b7-ade8-2a140c63c697" path="/var/lib/kubelet/pods/b56661d3-d071-40b7-ade8-2a140c63c697/volumes" Jan 22 09:45:40 crc kubenswrapper[4811]: I0122 09:45:40.992059 4811 scope.go:117] "RemoveContainer" containerID="a3eaf3fca355ac62b01468032411a18e7dabb820be02232f4becb3d78ad2b686" Jan 22 09:45:40 crc kubenswrapper[4811]: E0122 09:45:40.992293 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:45:53 crc kubenswrapper[4811]: I0122 09:45:53.995047 4811 scope.go:117] "RemoveContainer" containerID="a3eaf3fca355ac62b01468032411a18e7dabb820be02232f4becb3d78ad2b686" Jan 22 09:45:53 crc kubenswrapper[4811]: E0122 09:45:53.995665 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:46:00 crc kubenswrapper[4811]: I0122 09:46:00.538604 4811 scope.go:117] "RemoveContainer" containerID="98fd9e69518f482da1a8e15c9a11f5948330e7204ac69c0cafecb09c3a98aa73" Jan 22 09:46:02 crc kubenswrapper[4811]: I0122 09:46:02.709146 4811 generic.go:334] "Generic (PLEG): container finished" podID="0f4688b1-29e2-475b-80c0-63afbc3b1afa" containerID="164ca8c08987f033cddda9c2fade3bc3e4ed7cf3f9b6f72799b73aed53786439" exitCode=0 Jan 22 09:46:02 crc kubenswrapper[4811]: I0122 09:46:02.709185 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hklz6" event={"ID":"0f4688b1-29e2-475b-80c0-63afbc3b1afa","Type":"ContainerDied","Data":"164ca8c08987f033cddda9c2fade3bc3e4ed7cf3f9b6f72799b73aed53786439"} Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.067435 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hklz6" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.123276 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/0f4688b1-29e2-475b-80c0-63afbc3b1afa-libvirt-secret-0\") pod \"0f4688b1-29e2-475b-80c0-63afbc3b1afa\" (UID: \"0f4688b1-29e2-475b-80c0-63afbc3b1afa\") " Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.123368 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrcqq\" (UniqueName: \"kubernetes.io/projected/0f4688b1-29e2-475b-80c0-63afbc3b1afa-kube-api-access-vrcqq\") pod \"0f4688b1-29e2-475b-80c0-63afbc3b1afa\" (UID: \"0f4688b1-29e2-475b-80c0-63afbc3b1afa\") " Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.127799 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f4688b1-29e2-475b-80c0-63afbc3b1afa-kube-api-access-vrcqq" (OuterVolumeSpecName: "kube-api-access-vrcqq") pod "0f4688b1-29e2-475b-80c0-63afbc3b1afa" (UID: "0f4688b1-29e2-475b-80c0-63afbc3b1afa"). InnerVolumeSpecName "kube-api-access-vrcqq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.142315 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f4688b1-29e2-475b-80c0-63afbc3b1afa-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "0f4688b1-29e2-475b-80c0-63afbc3b1afa" (UID: "0f4688b1-29e2-475b-80c0-63afbc3b1afa"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.224358 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0f4688b1-29e2-475b-80c0-63afbc3b1afa-ceph\") pod \"0f4688b1-29e2-475b-80c0-63afbc3b1afa\" (UID: \"0f4688b1-29e2-475b-80c0-63afbc3b1afa\") " Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.224851 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f4688b1-29e2-475b-80c0-63afbc3b1afa-libvirt-combined-ca-bundle\") pod \"0f4688b1-29e2-475b-80c0-63afbc3b1afa\" (UID: \"0f4688b1-29e2-475b-80c0-63afbc3b1afa\") " Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.224885 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f4688b1-29e2-475b-80c0-63afbc3b1afa-inventory\") pod \"0f4688b1-29e2-475b-80c0-63afbc3b1afa\" (UID: \"0f4688b1-29e2-475b-80c0-63afbc3b1afa\") " Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.224904 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f4688b1-29e2-475b-80c0-63afbc3b1afa-ssh-key-openstack-edpm-ipam\") pod \"0f4688b1-29e2-475b-80c0-63afbc3b1afa\" (UID: \"0f4688b1-29e2-475b-80c0-63afbc3b1afa\") " Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.225198 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrcqq\" (UniqueName: \"kubernetes.io/projected/0f4688b1-29e2-475b-80c0-63afbc3b1afa-kube-api-access-vrcqq\") on node \"crc\" DevicePath \"\"" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.225213 4811 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/0f4688b1-29e2-475b-80c0-63afbc3b1afa-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.226661 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f4688b1-29e2-475b-80c0-63afbc3b1afa-ceph" (OuterVolumeSpecName: "ceph") pod "0f4688b1-29e2-475b-80c0-63afbc3b1afa" (UID: "0f4688b1-29e2-475b-80c0-63afbc3b1afa"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.227401 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f4688b1-29e2-475b-80c0-63afbc3b1afa-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "0f4688b1-29e2-475b-80c0-63afbc3b1afa" (UID: "0f4688b1-29e2-475b-80c0-63afbc3b1afa"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.241963 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f4688b1-29e2-475b-80c0-63afbc3b1afa-inventory" (OuterVolumeSpecName: "inventory") pod "0f4688b1-29e2-475b-80c0-63afbc3b1afa" (UID: "0f4688b1-29e2-475b-80c0-63afbc3b1afa"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.242266 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f4688b1-29e2-475b-80c0-63afbc3b1afa-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0f4688b1-29e2-475b-80c0-63afbc3b1afa" (UID: "0f4688b1-29e2-475b-80c0-63afbc3b1afa"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.326137 4811 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f4688b1-29e2-475b-80c0-63afbc3b1afa-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.326164 4811 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f4688b1-29e2-475b-80c0-63afbc3b1afa-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.326175 4811 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f4688b1-29e2-475b-80c0-63afbc3b1afa-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.326184 4811 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0f4688b1-29e2-475b-80c0-63afbc3b1afa-ceph\") on node \"crc\" DevicePath \"\"" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.722285 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hklz6" event={"ID":"0f4688b1-29e2-475b-80c0-63afbc3b1afa","Type":"ContainerDied","Data":"c194f5e0976dbda241b3d188afedd77dd354345686938df7dcfb979d99bd61ab"} Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.722687 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c194f5e0976dbda241b3d188afedd77dd354345686938df7dcfb979d99bd61ab" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.722320 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hklz6" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.801277 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq"] Jan 22 09:46:04 crc kubenswrapper[4811]: E0122 09:46:04.801603 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f4688b1-29e2-475b-80c0-63afbc3b1afa" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.801636 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f4688b1-29e2-475b-80c0-63afbc3b1afa" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 22 09:46:04 crc kubenswrapper[4811]: E0122 09:46:04.801648 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b56661d3-d071-40b7-ade8-2a140c63c697" containerName="extract-content" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.801654 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="b56661d3-d071-40b7-ade8-2a140c63c697" containerName="extract-content" Jan 22 09:46:04 crc kubenswrapper[4811]: E0122 09:46:04.801664 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b56661d3-d071-40b7-ade8-2a140c63c697" containerName="extract-utilities" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.801669 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="b56661d3-d071-40b7-ade8-2a140c63c697" containerName="extract-utilities" Jan 22 09:46:04 crc kubenswrapper[4811]: E0122 09:46:04.801692 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b56661d3-d071-40b7-ade8-2a140c63c697" containerName="registry-server" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.801699 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="b56661d3-d071-40b7-ade8-2a140c63c697" containerName="registry-server" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.801861 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f4688b1-29e2-475b-80c0-63afbc3b1afa" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.801880 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="b56661d3-d071-40b7-ade8-2a140c63c697" containerName="registry-server" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.802347 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.804968 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.805087 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7cbs6" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.805360 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.805736 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.807438 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.807519 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.807469 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.809669 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ceph-nova" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.811182 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.824663 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq"] Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.844158 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/93c19706-aa1a-40b2-96cb-ea74c87866d6-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq\" (UID: \"93c19706-aa1a-40b2-96cb-ea74c87866d6\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.844195 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw2qq\" (UniqueName: \"kubernetes.io/projected/93c19706-aa1a-40b2-96cb-ea74c87866d6-kube-api-access-fw2qq\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq\" (UID: \"93c19706-aa1a-40b2-96cb-ea74c87866d6\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.844292 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93c19706-aa1a-40b2-96cb-ea74c87866d6-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq\" (UID: \"93c19706-aa1a-40b2-96cb-ea74c87866d6\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.844322 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/93c19706-aa1a-40b2-96cb-ea74c87866d6-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq\" (UID: \"93c19706-aa1a-40b2-96cb-ea74c87866d6\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.844342 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/93c19706-aa1a-40b2-96cb-ea74c87866d6-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq\" (UID: \"93c19706-aa1a-40b2-96cb-ea74c87866d6\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.844421 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/93c19706-aa1a-40b2-96cb-ea74c87866d6-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq\" (UID: \"93c19706-aa1a-40b2-96cb-ea74c87866d6\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.844486 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/93c19706-aa1a-40b2-96cb-ea74c87866d6-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq\" (UID: \"93c19706-aa1a-40b2-96cb-ea74c87866d6\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.844550 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/93c19706-aa1a-40b2-96cb-ea74c87866d6-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq\" (UID: \"93c19706-aa1a-40b2-96cb-ea74c87866d6\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.844614 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/93c19706-aa1a-40b2-96cb-ea74c87866d6-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq\" (UID: \"93c19706-aa1a-40b2-96cb-ea74c87866d6\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.844729 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/93c19706-aa1a-40b2-96cb-ea74c87866d6-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq\" (UID: \"93c19706-aa1a-40b2-96cb-ea74c87866d6\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.844809 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/93c19706-aa1a-40b2-96cb-ea74c87866d6-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq\" (UID: \"93c19706-aa1a-40b2-96cb-ea74c87866d6\") " 
pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.946727 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/93c19706-aa1a-40b2-96cb-ea74c87866d6-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq\" (UID: \"93c19706-aa1a-40b2-96cb-ea74c87866d6\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.946788 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/93c19706-aa1a-40b2-96cb-ea74c87866d6-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq\" (UID: \"93c19706-aa1a-40b2-96cb-ea74c87866d6\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.946813 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw2qq\" (UniqueName: \"kubernetes.io/projected/93c19706-aa1a-40b2-96cb-ea74c87866d6-kube-api-access-fw2qq\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq\" (UID: \"93c19706-aa1a-40b2-96cb-ea74c87866d6\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.946885 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93c19706-aa1a-40b2-96cb-ea74c87866d6-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq\" (UID: \"93c19706-aa1a-40b2-96cb-ea74c87866d6\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.946907 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93c19706-aa1a-40b2-96cb-ea74c87866d6-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq\" (UID: \"93c19706-aa1a-40b2-96cb-ea74c87866d6\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.946924 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/93c19706-aa1a-40b2-96cb-ea74c87866d6-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq\" (UID: \"93c19706-aa1a-40b2-96cb-ea74c87866d6\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.946985 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/93c19706-aa1a-40b2-96cb-ea74c87866d6-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq\" (UID: \"93c19706-aa1a-40b2-96cb-ea74c87866d6\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.947031 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: 
\"kubernetes.io/configmap/93c19706-aa1a-40b2-96cb-ea74c87866d6-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq\" (UID: \"93c19706-aa1a-40b2-96cb-ea74c87866d6\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.947246 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/93c19706-aa1a-40b2-96cb-ea74c87866d6-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq\" (UID: \"93c19706-aa1a-40b2-96cb-ea74c87866d6\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.947286 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/93c19706-aa1a-40b2-96cb-ea74c87866d6-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq\" (UID: \"93c19706-aa1a-40b2-96cb-ea74c87866d6\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.947314 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/93c19706-aa1a-40b2-96cb-ea74c87866d6-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq\" (UID: \"93c19706-aa1a-40b2-96cb-ea74c87866d6\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.948166 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/93c19706-aa1a-40b2-96cb-ea74c87866d6-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq\" (UID: \"93c19706-aa1a-40b2-96cb-ea74c87866d6\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.948389 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/93c19706-aa1a-40b2-96cb-ea74c87866d6-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq\" (UID: \"93c19706-aa1a-40b2-96cb-ea74c87866d6\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.951049 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/93c19706-aa1a-40b2-96cb-ea74c87866d6-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq\" (UID: \"93c19706-aa1a-40b2-96cb-ea74c87866d6\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.951460 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/93c19706-aa1a-40b2-96cb-ea74c87866d6-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq\" (UID: \"93c19706-aa1a-40b2-96cb-ea74c87866d6\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.952171 4811 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/93c19706-aa1a-40b2-96cb-ea74c87866d6-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq\" (UID: \"93c19706-aa1a-40b2-96cb-ea74c87866d6\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.952851 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/93c19706-aa1a-40b2-96cb-ea74c87866d6-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq\" (UID: \"93c19706-aa1a-40b2-96cb-ea74c87866d6\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.952957 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/93c19706-aa1a-40b2-96cb-ea74c87866d6-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq\" (UID: \"93c19706-aa1a-40b2-96cb-ea74c87866d6\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.953197 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/93c19706-aa1a-40b2-96cb-ea74c87866d6-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq\" (UID: \"93c19706-aa1a-40b2-96cb-ea74c87866d6\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.953401 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93c19706-aa1a-40b2-96cb-ea74c87866d6-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq\" (UID: \"93c19706-aa1a-40b2-96cb-ea74c87866d6\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.954046 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93c19706-aa1a-40b2-96cb-ea74c87866d6-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq\" (UID: \"93c19706-aa1a-40b2-96cb-ea74c87866d6\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq" Jan 22 09:46:04 crc kubenswrapper[4811]: I0122 09:46:04.961701 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw2qq\" (UniqueName: \"kubernetes.io/projected/93c19706-aa1a-40b2-96cb-ea74c87866d6-kube-api-access-fw2qq\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq\" (UID: \"93c19706-aa1a-40b2-96cb-ea74c87866d6\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq" Jan 22 09:46:05 crc kubenswrapper[4811]: I0122 09:46:05.115521 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq" Jan 22 09:46:05 crc kubenswrapper[4811]: I0122 09:46:05.538251 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq"] Jan 22 09:46:05 crc kubenswrapper[4811]: I0122 09:46:05.729978 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq" event={"ID":"93c19706-aa1a-40b2-96cb-ea74c87866d6","Type":"ContainerStarted","Data":"59b128e36bf544fc90e406e1f852d33bc29cccf6e1cb936cb77504d2f2a50585"} Jan 22 09:46:05 crc kubenswrapper[4811]: I0122 09:46:05.998156 4811 scope.go:117] "RemoveContainer" containerID="a3eaf3fca355ac62b01468032411a18e7dabb820be02232f4becb3d78ad2b686" Jan 22 09:46:05 crc kubenswrapper[4811]: E0122 09:46:05.998479 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:46:06 crc kubenswrapper[4811]: I0122 09:46:06.736668 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq" event={"ID":"93c19706-aa1a-40b2-96cb-ea74c87866d6","Type":"ContainerStarted","Data":"2c77f8b1d50a081d56aa1d0091ac1b061930dd5983d764f6b38798c68e80b7fb"} Jan 22 09:46:06 crc kubenswrapper[4811]: I0122 09:46:06.752611 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq" podStartSLOduration=2.275078251 podStartE2EDuration="2.752597328s" podCreationTimestamp="2026-01-22 09:46:04 +0000 UTC" firstStartedPulling="2026-01-22 09:46:05.544212625 +0000 UTC m=+2409.866399748" lastFinishedPulling="2026-01-22 09:46:06.021731703 +0000 UTC m=+2410.343918825" observedRunningTime="2026-01-22 09:46:06.751280666 +0000 UTC m=+2411.073467789" watchObservedRunningTime="2026-01-22 09:46:06.752597328 +0000 UTC m=+2411.074784451" Jan 22 09:46:19 crc kubenswrapper[4811]: I0122 09:46:19.996495 4811 scope.go:117] "RemoveContainer" containerID="a3eaf3fca355ac62b01468032411a18e7dabb820be02232f4becb3d78ad2b686" Jan 22 09:46:19 crc kubenswrapper[4811]: E0122 09:46:19.997256 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:46:31 crc kubenswrapper[4811]: I0122 09:46:31.992837 4811 scope.go:117] "RemoveContainer" containerID="a3eaf3fca355ac62b01468032411a18e7dabb820be02232f4becb3d78ad2b686" Jan 22 09:46:31 crc kubenswrapper[4811]: E0122 09:46:31.993588 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:46:43 crc kubenswrapper[4811]: I0122 09:46:43.992189 4811 scope.go:117] "RemoveContainer" containerID="a3eaf3fca355ac62b01468032411a18e7dabb820be02232f4becb3d78ad2b686" Jan 22 09:46:43 crc kubenswrapper[4811]: E0122 09:46:43.992806 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:46:59 crc kubenswrapper[4811]: I0122 09:46:59.003835 4811 scope.go:117] "RemoveContainer" containerID="a3eaf3fca355ac62b01468032411a18e7dabb820be02232f4becb3d78ad2b686" Jan 22 09:46:59 crc kubenswrapper[4811]: E0122 09:46:59.004700 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:47:09 crc kubenswrapper[4811]: I0122 09:47:09.992706 4811 scope.go:117] "RemoveContainer" containerID="a3eaf3fca355ac62b01468032411a18e7dabb820be02232f4becb3d78ad2b686" Jan 22 09:47:11 crc kubenswrapper[4811]: I0122 09:47:11.125789 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" event={"ID":"84068a6b-e189-419b-87f5-f31428f6eafe","Type":"ContainerStarted","Data":"583eec57819133c8760804210a11b9e588774eddca3a7b248d1500229de8ad12"} Jan 22 09:48:06 crc kubenswrapper[4811]: I0122 09:48:06.481367 4811 generic.go:334] "Generic (PLEG): container finished" podID="93c19706-aa1a-40b2-96cb-ea74c87866d6" containerID="2c77f8b1d50a081d56aa1d0091ac1b061930dd5983d764f6b38798c68e80b7fb" exitCode=0 Jan 22 09:48:06 crc kubenswrapper[4811]: I0122 09:48:06.481470 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq" event={"ID":"93c19706-aa1a-40b2-96cb-ea74c87866d6","Type":"ContainerDied","Data":"2c77f8b1d50a081d56aa1d0091ac1b061930dd5983d764f6b38798c68e80b7fb"} Jan 22 09:48:07 crc kubenswrapper[4811]: I0122 09:48:07.843641 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq" Jan 22 09:48:07 crc kubenswrapper[4811]: I0122 09:48:07.934388 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fw2qq\" (UniqueName: \"kubernetes.io/projected/93c19706-aa1a-40b2-96cb-ea74c87866d6-kube-api-access-fw2qq\") pod \"93c19706-aa1a-40b2-96cb-ea74c87866d6\" (UID: \"93c19706-aa1a-40b2-96cb-ea74c87866d6\") " Jan 22 09:48:07 crc kubenswrapper[4811]: I0122 09:48:07.934703 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/93c19706-aa1a-40b2-96cb-ea74c87866d6-nova-extra-config-0\") pod \"93c19706-aa1a-40b2-96cb-ea74c87866d6\" (UID: \"93c19706-aa1a-40b2-96cb-ea74c87866d6\") " Jan 22 09:48:07 crc kubenswrapper[4811]: I0122 09:48:07.934767 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/93c19706-aa1a-40b2-96cb-ea74c87866d6-nova-cell1-compute-config-0\") pod \"93c19706-aa1a-40b2-96cb-ea74c87866d6\" (UID: \"93c19706-aa1a-40b2-96cb-ea74c87866d6\") " Jan 22 09:48:07 crc kubenswrapper[4811]: I0122 09:48:07.934790 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/93c19706-aa1a-40b2-96cb-ea74c87866d6-ceph-nova-0\") pod \"93c19706-aa1a-40b2-96cb-ea74c87866d6\" (UID: \"93c19706-aa1a-40b2-96cb-ea74c87866d6\") " Jan 22 09:48:07 crc kubenswrapper[4811]: I0122 09:48:07.934809 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/93c19706-aa1a-40b2-96cb-ea74c87866d6-nova-cell1-compute-config-1\") pod \"93c19706-aa1a-40b2-96cb-ea74c87866d6\" (UID: \"93c19706-aa1a-40b2-96cb-ea74c87866d6\") " Jan 22 09:48:07 crc kubenswrapper[4811]: I0122 09:48:07.934832 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/93c19706-aa1a-40b2-96cb-ea74c87866d6-ceph\") pod \"93c19706-aa1a-40b2-96cb-ea74c87866d6\" (UID: \"93c19706-aa1a-40b2-96cb-ea74c87866d6\") " Jan 22 09:48:07 crc kubenswrapper[4811]: I0122 09:48:07.934908 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93c19706-aa1a-40b2-96cb-ea74c87866d6-inventory\") pod \"93c19706-aa1a-40b2-96cb-ea74c87866d6\" (UID: \"93c19706-aa1a-40b2-96cb-ea74c87866d6\") " Jan 22 09:48:07 crc kubenswrapper[4811]: I0122 09:48:07.934933 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/93c19706-aa1a-40b2-96cb-ea74c87866d6-nova-migration-ssh-key-0\") pod \"93c19706-aa1a-40b2-96cb-ea74c87866d6\" (UID: \"93c19706-aa1a-40b2-96cb-ea74c87866d6\") " Jan 22 09:48:07 crc kubenswrapper[4811]: I0122 09:48:07.934953 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/93c19706-aa1a-40b2-96cb-ea74c87866d6-ssh-key-openstack-edpm-ipam\") pod \"93c19706-aa1a-40b2-96cb-ea74c87866d6\" (UID: \"93c19706-aa1a-40b2-96cb-ea74c87866d6\") " Jan 22 09:48:07 crc kubenswrapper[4811]: I0122 09:48:07.934979 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/93c19706-aa1a-40b2-96cb-ea74c87866d6-nova-custom-ceph-combined-ca-bundle\") pod \"93c19706-aa1a-40b2-96cb-ea74c87866d6\" (UID: \"93c19706-aa1a-40b2-96cb-ea74c87866d6\") " Jan 22 09:48:07 crc kubenswrapper[4811]: I0122 09:48:07.935050 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/93c19706-aa1a-40b2-96cb-ea74c87866d6-nova-migration-ssh-key-1\") pod \"93c19706-aa1a-40b2-96cb-ea74c87866d6\" (UID: \"93c19706-aa1a-40b2-96cb-ea74c87866d6\") " Jan 22 09:48:07 crc kubenswrapper[4811]: I0122 09:48:07.953017 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93c19706-aa1a-40b2-96cb-ea74c87866d6-kube-api-access-fw2qq" (OuterVolumeSpecName: "kube-api-access-fw2qq") pod "93c19706-aa1a-40b2-96cb-ea74c87866d6" (UID: "93c19706-aa1a-40b2-96cb-ea74c87866d6"). InnerVolumeSpecName "kube-api-access-fw2qq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:48:07 crc kubenswrapper[4811]: I0122 09:48:07.955127 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93c19706-aa1a-40b2-96cb-ea74c87866d6-ceph" (OuterVolumeSpecName: "ceph") pod "93c19706-aa1a-40b2-96cb-ea74c87866d6" (UID: "93c19706-aa1a-40b2-96cb-ea74c87866d6"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:48:07 crc kubenswrapper[4811]: I0122 09:48:07.957271 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93c19706-aa1a-40b2-96cb-ea74c87866d6-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "93c19706-aa1a-40b2-96cb-ea74c87866d6" (UID: "93c19706-aa1a-40b2-96cb-ea74c87866d6"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:48:07 crc kubenswrapper[4811]: I0122 09:48:07.962213 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93c19706-aa1a-40b2-96cb-ea74c87866d6-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "93c19706-aa1a-40b2-96cb-ea74c87866d6" (UID: "93c19706-aa1a-40b2-96cb-ea74c87866d6"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:48:07 crc kubenswrapper[4811]: I0122 09:48:07.962268 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93c19706-aa1a-40b2-96cb-ea74c87866d6-ceph-nova-0" (OuterVolumeSpecName: "ceph-nova-0") pod "93c19706-aa1a-40b2-96cb-ea74c87866d6" (UID: "93c19706-aa1a-40b2-96cb-ea74c87866d6"). InnerVolumeSpecName "ceph-nova-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:48:07 crc kubenswrapper[4811]: I0122 09:48:07.962332 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93c19706-aa1a-40b2-96cb-ea74c87866d6-nova-custom-ceph-combined-ca-bundle" (OuterVolumeSpecName: "nova-custom-ceph-combined-ca-bundle") pod "93c19706-aa1a-40b2-96cb-ea74c87866d6" (UID: "93c19706-aa1a-40b2-96cb-ea74c87866d6"). InnerVolumeSpecName "nova-custom-ceph-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:48:07 crc kubenswrapper[4811]: I0122 09:48:07.963596 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93c19706-aa1a-40b2-96cb-ea74c87866d6-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "93c19706-aa1a-40b2-96cb-ea74c87866d6" (UID: "93c19706-aa1a-40b2-96cb-ea74c87866d6"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:48:07 crc kubenswrapper[4811]: I0122 09:48:07.970857 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93c19706-aa1a-40b2-96cb-ea74c87866d6-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "93c19706-aa1a-40b2-96cb-ea74c87866d6" (UID: "93c19706-aa1a-40b2-96cb-ea74c87866d6"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:48:07 crc kubenswrapper[4811]: I0122 09:48:07.976798 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93c19706-aa1a-40b2-96cb-ea74c87866d6-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "93c19706-aa1a-40b2-96cb-ea74c87866d6" (UID: "93c19706-aa1a-40b2-96cb-ea74c87866d6"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:48:07 crc kubenswrapper[4811]: I0122 09:48:07.978767 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93c19706-aa1a-40b2-96cb-ea74c87866d6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "93c19706-aa1a-40b2-96cb-ea74c87866d6" (UID: "93c19706-aa1a-40b2-96cb-ea74c87866d6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:48:07 crc kubenswrapper[4811]: I0122 09:48:07.985360 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93c19706-aa1a-40b2-96cb-ea74c87866d6-inventory" (OuterVolumeSpecName: "inventory") pod "93c19706-aa1a-40b2-96cb-ea74c87866d6" (UID: "93c19706-aa1a-40b2-96cb-ea74c87866d6"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:48:08 crc kubenswrapper[4811]: I0122 09:48:08.036954 4811 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/93c19706-aa1a-40b2-96cb-ea74c87866d6-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:08 crc kubenswrapper[4811]: I0122 09:48:08.037081 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fw2qq\" (UniqueName: \"kubernetes.io/projected/93c19706-aa1a-40b2-96cb-ea74c87866d6-kube-api-access-fw2qq\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:08 crc kubenswrapper[4811]: I0122 09:48:08.037173 4811 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/93c19706-aa1a-40b2-96cb-ea74c87866d6-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:08 crc kubenswrapper[4811]: I0122 09:48:08.037231 4811 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/93c19706-aa1a-40b2-96cb-ea74c87866d6-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:08 crc kubenswrapper[4811]: I0122 09:48:08.037283 4811 reconciler_common.go:293] "Volume detached for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/93c19706-aa1a-40b2-96cb-ea74c87866d6-ceph-nova-0\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:08 crc kubenswrapper[4811]: I0122 09:48:08.037343 4811 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/93c19706-aa1a-40b2-96cb-ea74c87866d6-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:08 crc kubenswrapper[4811]: I0122 09:48:08.037399 4811 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/93c19706-aa1a-40b2-96cb-ea74c87866d6-ceph\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:08 crc kubenswrapper[4811]: I0122 09:48:08.037765 4811 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93c19706-aa1a-40b2-96cb-ea74c87866d6-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:08 crc kubenswrapper[4811]: I0122 09:48:08.037903 4811 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/93c19706-aa1a-40b2-96cb-ea74c87866d6-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:08 crc kubenswrapper[4811]: I0122 09:48:08.038371 4811 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/93c19706-aa1a-40b2-96cb-ea74c87866d6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:08 crc kubenswrapper[4811]: I0122 09:48:08.038469 4811 reconciler_common.go:293] "Volume detached for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93c19706-aa1a-40b2-96cb-ea74c87866d6-nova-custom-ceph-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:08 crc kubenswrapper[4811]: I0122 09:48:08.494406 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq" event={"ID":"93c19706-aa1a-40b2-96cb-ea74c87866d6","Type":"ContainerDied","Data":"59b128e36bf544fc90e406e1f852d33bc29cccf6e1cb936cb77504d2f2a50585"} Jan 22 09:48:08 crc kubenswrapper[4811]: I0122 09:48:08.494444 4811 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59b128e36bf544fc90e406e1f852d33bc29cccf6e1cb936cb77504d2f2a50585" Jan 22 09:48:08 crc kubenswrapper[4811]: I0122 09:48:08.494549 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.215111 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 22 09:48:20 crc kubenswrapper[4811]: E0122 09:48:20.215801 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93c19706-aa1a-40b2-96cb-ea74c87866d6" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.215818 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="93c19706-aa1a-40b2-96cb-ea74c87866d6" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.215996 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="93c19706-aa1a-40b2-96cb-ea74c87866d6" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.216785 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.224992 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.226287 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.226467 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.284287 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.285617 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.287254 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.299454 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.326117 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/747abd8a-15a3-42fe-b8bd-a74f2e03c00c-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"747abd8a-15a3-42fe-b8bd-a74f2e03c00c\") " pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.326292 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/747abd8a-15a3-42fe-b8bd-a74f2e03c00c-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"747abd8a-15a3-42fe-b8bd-a74f2e03c00c\") " pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.326403 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/747abd8a-15a3-42fe-b8bd-a74f2e03c00c-ceph\") pod \"cinder-backup-0\" (UID: \"747abd8a-15a3-42fe-b8bd-a74f2e03c00c\") " pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.326486 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/747abd8a-15a3-42fe-b8bd-a74f2e03c00c-config-data\") pod \"cinder-backup-0\" (UID: \"747abd8a-15a3-42fe-b8bd-a74f2e03c00c\") " pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.326658 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3a4d222-4fbd-4c90-9bb0-d787f257d7c0-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0\") " pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.326779 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3a4d222-4fbd-4c90-9bb0-d787f257d7c0-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0\") " pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.326865 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3a4d222-4fbd-4c90-9bb0-d787f257d7c0-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0\") " pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.326944 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3a4d222-4fbd-4c90-9bb0-d787f257d7c0-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0\") " pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.327025 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e3a4d222-4fbd-4c90-9bb0-d787f257d7c0-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0\") " pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.327112 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e3a4d222-4fbd-4c90-9bb0-d787f257d7c0-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0\") " pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.327190 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e3a4d222-4fbd-4c90-9bb0-d787f257d7c0-dev\") pod \"cinder-volume-volume1-0\" (UID: \"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0\") " pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.327271 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3a4d222-4fbd-4c90-9bb0-d787f257d7c0-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0\") " pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.327343 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/747abd8a-15a3-42fe-b8bd-a74f2e03c00c-scripts\") pod \"cinder-backup-0\" (UID: \"747abd8a-15a3-42fe-b8bd-a74f2e03c00c\") " pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.327416 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/747abd8a-15a3-42fe-b8bd-a74f2e03c00c-sys\") pod \"cinder-backup-0\" (UID: \"747abd8a-15a3-42fe-b8bd-a74f2e03c00c\") " pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.327594 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e3a4d222-4fbd-4c90-9bb0-d787f257d7c0-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0\") " pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.327656 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/747abd8a-15a3-42fe-b8bd-a74f2e03c00c-dev\") pod \"cinder-backup-0\" (UID: \"747abd8a-15a3-42fe-b8bd-a74f2e03c00c\") " pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.327694 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e3a4d222-4fbd-4c90-9bb0-d787f257d7c0-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0\") " pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.327719 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/747abd8a-15a3-42fe-b8bd-a74f2e03c00c-lib-modules\") pod \"cinder-backup-0\" (UID: \"747abd8a-15a3-42fe-b8bd-a74f2e03c00c\") " pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.327777 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/747abd8a-15a3-42fe-b8bd-a74f2e03c00c-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"747abd8a-15a3-42fe-b8bd-a74f2e03c00c\") " pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.327804 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e3a4d222-4fbd-4c90-9bb0-d787f257d7c0-sys\") pod \"cinder-volume-volume1-0\" (UID: \"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0\") " pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.327846 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/747abd8a-15a3-42fe-b8bd-a74f2e03c00c-etc-nvme\") pod \"cinder-backup-0\" (UID: \"747abd8a-15a3-42fe-b8bd-a74f2e03c00c\") " pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.327872 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/747abd8a-15a3-42fe-b8bd-a74f2e03c00c-run\") pod \"cinder-backup-0\" (UID: \"747abd8a-15a3-42fe-b8bd-a74f2e03c00c\") " pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.327892 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/747abd8a-15a3-42fe-b8bd-a74f2e03c00c-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"747abd8a-15a3-42fe-b8bd-a74f2e03c00c\") " pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.327905 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/747abd8a-15a3-42fe-b8bd-a74f2e03c00c-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"747abd8a-15a3-42fe-b8bd-a74f2e03c00c\") " pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.327928 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e3a4d222-4fbd-4c90-9bb0-d787f257d7c0-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0\") " pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.327949 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/747abd8a-15a3-42fe-b8bd-a74f2e03c00c-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"747abd8a-15a3-42fe-b8bd-a74f2e03c00c\") " pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.327965 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsfjp\" (UniqueName: \"kubernetes.io/projected/e3a4d222-4fbd-4c90-9bb0-d787f257d7c0-kube-api-access-lsfjp\") pod \"cinder-volume-volume1-0\" (UID: 
\"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0\") " pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.327999 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/e3a4d222-4fbd-4c90-9bb0-d787f257d7c0-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0\") " pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.328054 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e3a4d222-4fbd-4c90-9bb0-d787f257d7c0-run\") pod \"cinder-volume-volume1-0\" (UID: \"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0\") " pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.328071 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/747abd8a-15a3-42fe-b8bd-a74f2e03c00c-config-data-custom\") pod \"cinder-backup-0\" (UID: \"747abd8a-15a3-42fe-b8bd-a74f2e03c00c\") " pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.328086 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtrwg\" (UniqueName: \"kubernetes.io/projected/747abd8a-15a3-42fe-b8bd-a74f2e03c00c-kube-api-access-mtrwg\") pod \"cinder-backup-0\" (UID: \"747abd8a-15a3-42fe-b8bd-a74f2e03c00c\") " pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.328106 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/e3a4d222-4fbd-4c90-9bb0-d787f257d7c0-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0\") " pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.429391 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/747abd8a-15a3-42fe-b8bd-a74f2e03c00c-run\") pod \"cinder-backup-0\" (UID: \"747abd8a-15a3-42fe-b8bd-a74f2e03c00c\") " pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.429440 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/747abd8a-15a3-42fe-b8bd-a74f2e03c00c-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"747abd8a-15a3-42fe-b8bd-a74f2e03c00c\") " pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.429457 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/747abd8a-15a3-42fe-b8bd-a74f2e03c00c-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"747abd8a-15a3-42fe-b8bd-a74f2e03c00c\") " pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.429478 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e3a4d222-4fbd-4c90-9bb0-d787f257d7c0-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0\") " pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.429495 4811 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/747abd8a-15a3-42fe-b8bd-a74f2e03c00c-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"747abd8a-15a3-42fe-b8bd-a74f2e03c00c\") " pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.429514 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsfjp\" (UniqueName: \"kubernetes.io/projected/e3a4d222-4fbd-4c90-9bb0-d787f257d7c0-kube-api-access-lsfjp\") pod \"cinder-volume-volume1-0\" (UID: \"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0\") " pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.429527 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/747abd8a-15a3-42fe-b8bd-a74f2e03c00c-run\") pod \"cinder-backup-0\" (UID: \"747abd8a-15a3-42fe-b8bd-a74f2e03c00c\") " pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.429553 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/747abd8a-15a3-42fe-b8bd-a74f2e03c00c-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"747abd8a-15a3-42fe-b8bd-a74f2e03c00c\") " pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.429761 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/e3a4d222-4fbd-4c90-9bb0-d787f257d7c0-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0\") " pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.429543 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/e3a4d222-4fbd-4c90-9bb0-d787f257d7c0-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0\") " pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.429833 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/747abd8a-15a3-42fe-b8bd-a74f2e03c00c-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"747abd8a-15a3-42fe-b8bd-a74f2e03c00c\") " pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.429858 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e3a4d222-4fbd-4c90-9bb0-d787f257d7c0-run\") pod \"cinder-volume-volume1-0\" (UID: \"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0\") " pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.429882 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e3a4d222-4fbd-4c90-9bb0-d787f257d7c0-run\") pod \"cinder-volume-volume1-0\" (UID: \"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0\") " pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.429913 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/747abd8a-15a3-42fe-b8bd-a74f2e03c00c-config-data-custom\") pod \"cinder-backup-0\" (UID: \"747abd8a-15a3-42fe-b8bd-a74f2e03c00c\") " 
pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.429936 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtrwg\" (UniqueName: \"kubernetes.io/projected/747abd8a-15a3-42fe-b8bd-a74f2e03c00c-kube-api-access-mtrwg\") pod \"cinder-backup-0\" (UID: \"747abd8a-15a3-42fe-b8bd-a74f2e03c00c\") " pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.429958 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/e3a4d222-4fbd-4c90-9bb0-d787f257d7c0-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0\") " pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.430008 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/747abd8a-15a3-42fe-b8bd-a74f2e03c00c-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"747abd8a-15a3-42fe-b8bd-a74f2e03c00c\") " pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.430022 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/747abd8a-15a3-42fe-b8bd-a74f2e03c00c-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"747abd8a-15a3-42fe-b8bd-a74f2e03c00c\") " pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.430038 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/747abd8a-15a3-42fe-b8bd-a74f2e03c00c-ceph\") pod \"cinder-backup-0\" (UID: \"747abd8a-15a3-42fe-b8bd-a74f2e03c00c\") " pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.430060 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/747abd8a-15a3-42fe-b8bd-a74f2e03c00c-config-data\") pod \"cinder-backup-0\" (UID: \"747abd8a-15a3-42fe-b8bd-a74f2e03c00c\") " pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.430078 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3a4d222-4fbd-4c90-9bb0-d787f257d7c0-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0\") " pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.430136 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/747abd8a-15a3-42fe-b8bd-a74f2e03c00c-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"747abd8a-15a3-42fe-b8bd-a74f2e03c00c\") " pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.430153 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3a4d222-4fbd-4c90-9bb0-d787f257d7c0-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0\") " pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.430183 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3a4d222-4fbd-4c90-9bb0-d787f257d7c0-scripts\") 
pod \"cinder-volume-volume1-0\" (UID: \"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0\") " pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.430196 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3a4d222-4fbd-4c90-9bb0-d787f257d7c0-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0\") " pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.430222 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e3a4d222-4fbd-4c90-9bb0-d787f257d7c0-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0\") " pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.430244 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e3a4d222-4fbd-4c90-9bb0-d787f257d7c0-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0\") " pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.430259 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e3a4d222-4fbd-4c90-9bb0-d787f257d7c0-dev\") pod \"cinder-volume-volume1-0\" (UID: \"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0\") " pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.430280 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3a4d222-4fbd-4c90-9bb0-d787f257d7c0-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0\") " pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.430301 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/747abd8a-15a3-42fe-b8bd-a74f2e03c00c-scripts\") pod \"cinder-backup-0\" (UID: \"747abd8a-15a3-42fe-b8bd-a74f2e03c00c\") " pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.430324 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/747abd8a-15a3-42fe-b8bd-a74f2e03c00c-sys\") pod \"cinder-backup-0\" (UID: \"747abd8a-15a3-42fe-b8bd-a74f2e03c00c\") " pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.430370 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e3a4d222-4fbd-4c90-9bb0-d787f257d7c0-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0\") " pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.430390 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/747abd8a-15a3-42fe-b8bd-a74f2e03c00c-dev\") pod \"cinder-backup-0\" (UID: \"747abd8a-15a3-42fe-b8bd-a74f2e03c00c\") " pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.430411 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" 
(UniqueName: \"kubernetes.io/host-path/e3a4d222-4fbd-4c90-9bb0-d787f257d7c0-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0\") " pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.430428 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/747abd8a-15a3-42fe-b8bd-a74f2e03c00c-lib-modules\") pod \"cinder-backup-0\" (UID: \"747abd8a-15a3-42fe-b8bd-a74f2e03c00c\") " pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.430457 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e3a4d222-4fbd-4c90-9bb0-d787f257d7c0-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0\") " pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.430478 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/747abd8a-15a3-42fe-b8bd-a74f2e03c00c-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"747abd8a-15a3-42fe-b8bd-a74f2e03c00c\") " pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.430500 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/747abd8a-15a3-42fe-b8bd-a74f2e03c00c-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"747abd8a-15a3-42fe-b8bd-a74f2e03c00c\") " pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.430505 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e3a4d222-4fbd-4c90-9bb0-d787f257d7c0-sys\") pod \"cinder-volume-volume1-0\" (UID: \"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0\") " pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.430541 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/747abd8a-15a3-42fe-b8bd-a74f2e03c00c-etc-nvme\") pod \"cinder-backup-0\" (UID: \"747abd8a-15a3-42fe-b8bd-a74f2e03c00c\") " pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.430706 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/747abd8a-15a3-42fe-b8bd-a74f2e03c00c-etc-nvme\") pod \"cinder-backup-0\" (UID: \"747abd8a-15a3-42fe-b8bd-a74f2e03c00c\") " pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.430738 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e3a4d222-4fbd-4c90-9bb0-d787f257d7c0-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0\") " pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.430760 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e3a4d222-4fbd-4c90-9bb0-d787f257d7c0-dev\") pod \"cinder-volume-volume1-0\" (UID: \"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0\") " pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.430933 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/747abd8a-15a3-42fe-b8bd-a74f2e03c00c-lib-modules\") pod \"cinder-backup-0\" (UID: \"747abd8a-15a3-42fe-b8bd-a74f2e03c00c\") " pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.430979 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e3a4d222-4fbd-4c90-9bb0-d787f257d7c0-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0\") " pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.431218 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/747abd8a-15a3-42fe-b8bd-a74f2e03c00c-sys\") pod \"cinder-backup-0\" (UID: \"747abd8a-15a3-42fe-b8bd-a74f2e03c00c\") " pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.431542 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/747abd8a-15a3-42fe-b8bd-a74f2e03c00c-dev\") pod \"cinder-backup-0\" (UID: \"747abd8a-15a3-42fe-b8bd-a74f2e03c00c\") " pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.431716 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e3a4d222-4fbd-4c90-9bb0-d787f257d7c0-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0\") " pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.435809 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e3a4d222-4fbd-4c90-9bb0-d787f257d7c0-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0\") " pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.435877 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/747abd8a-15a3-42fe-b8bd-a74f2e03c00c-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"747abd8a-15a3-42fe-b8bd-a74f2e03c00c\") " pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.435942 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/747abd8a-15a3-42fe-b8bd-a74f2e03c00c-config-data-custom\") pod \"cinder-backup-0\" (UID: \"747abd8a-15a3-42fe-b8bd-a74f2e03c00c\") " pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.435984 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/e3a4d222-4fbd-4c90-9bb0-d787f257d7c0-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0\") " pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.436014 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/747abd8a-15a3-42fe-b8bd-a74f2e03c00c-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"747abd8a-15a3-42fe-b8bd-a74f2e03c00c\") " pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.436017 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"sys\" (UniqueName: \"kubernetes.io/host-path/e3a4d222-4fbd-4c90-9bb0-d787f257d7c0-sys\") pod \"cinder-volume-volume1-0\" (UID: \"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0\") " pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.436053 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3a4d222-4fbd-4c90-9bb0-d787f257d7c0-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0\") " pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.436960 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/747abd8a-15a3-42fe-b8bd-a74f2e03c00c-ceph\") pod \"cinder-backup-0\" (UID: \"747abd8a-15a3-42fe-b8bd-a74f2e03c00c\") " pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.438137 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3a4d222-4fbd-4c90-9bb0-d787f257d7c0-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0\") " pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.438370 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3a4d222-4fbd-4c90-9bb0-d787f257d7c0-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0\") " pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.438815 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/747abd8a-15a3-42fe-b8bd-a74f2e03c00c-config-data\") pod \"cinder-backup-0\" (UID: \"747abd8a-15a3-42fe-b8bd-a74f2e03c00c\") " pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.439443 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3a4d222-4fbd-4c90-9bb0-d787f257d7c0-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0\") " pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.444176 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3a4d222-4fbd-4c90-9bb0-d787f257d7c0-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0\") " pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.445997 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/747abd8a-15a3-42fe-b8bd-a74f2e03c00c-scripts\") pod \"cinder-backup-0\" (UID: \"747abd8a-15a3-42fe-b8bd-a74f2e03c00c\") " pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.449057 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsfjp\" (UniqueName: \"kubernetes.io/projected/e3a4d222-4fbd-4c90-9bb0-d787f257d7c0-kube-api-access-lsfjp\") pod \"cinder-volume-volume1-0\" (UID: \"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0\") " pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.450757 4811 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-mtrwg\" (UniqueName: \"kubernetes.io/projected/747abd8a-15a3-42fe-b8bd-a74f2e03c00c-kube-api-access-mtrwg\") pod \"cinder-backup-0\" (UID: \"747abd8a-15a3-42fe-b8bd-a74f2e03c00c\") " pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.533365 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.598233 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.819359 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-mp7qp"] Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.821593 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-mp7qp" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.837045 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-mp7qp"] Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.848970 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r25s2\" (UniqueName: \"kubernetes.io/projected/60012e62-6a64-4ac5-8d6b-9fc52699dad4-kube-api-access-r25s2\") pod \"manila-db-create-mp7qp\" (UID: \"60012e62-6a64-4ac5-8d6b-9fc52699dad4\") " pod="openstack/manila-db-create-mp7qp" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.849015 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60012e62-6a64-4ac5-8d6b-9fc52699dad4-operator-scripts\") pod \"manila-db-create-mp7qp\" (UID: \"60012e62-6a64-4ac5-8d6b-9fc52699dad4\") " pod="openstack/manila-db-create-mp7qp" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.950246 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r25s2\" (UniqueName: \"kubernetes.io/projected/60012e62-6a64-4ac5-8d6b-9fc52699dad4-kube-api-access-r25s2\") pod \"manila-db-create-mp7qp\" (UID: \"60012e62-6a64-4ac5-8d6b-9fc52699dad4\") " pod="openstack/manila-db-create-mp7qp" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.950292 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60012e62-6a64-4ac5-8d6b-9fc52699dad4-operator-scripts\") pod \"manila-db-create-mp7qp\" (UID: \"60012e62-6a64-4ac5-8d6b-9fc52699dad4\") " pod="openstack/manila-db-create-mp7qp" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.951148 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60012e62-6a64-4ac5-8d6b-9fc52699dad4-operator-scripts\") pod \"manila-db-create-mp7qp\" (UID: \"60012e62-6a64-4ac5-8d6b-9fc52699dad4\") " pod="openstack/manila-db-create-mp7qp" Jan 22 09:48:20 crc kubenswrapper[4811]: I0122 09:48:20.968172 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r25s2\" (UniqueName: \"kubernetes.io/projected/60012e62-6a64-4ac5-8d6b-9fc52699dad4-kube-api-access-r25s2\") pod \"manila-db-create-mp7qp\" (UID: \"60012e62-6a64-4ac5-8d6b-9fc52699dad4\") " pod="openstack/manila-db-create-mp7qp" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.013865 4811 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/manila-66c8-account-create-update-2ts7l"] Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.023619 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-66c8-account-create-update-2ts7l" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.030449 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.038551 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-84776b8f5f-8h2x2"] Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.052326 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10c22353-10d6-4de5-b438-369773462111-operator-scripts\") pod \"manila-66c8-account-create-update-2ts7l\" (UID: \"10c22353-10d6-4de5-b438-369773462111\") " pod="openstack/manila-66c8-account-create-update-2ts7l" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.052562 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7nxf\" (UniqueName: \"kubernetes.io/projected/10c22353-10d6-4de5-b438-369773462111-kube-api-access-b7nxf\") pod \"manila-66c8-account-create-update-2ts7l\" (UID: \"10c22353-10d6-4de5-b438-369773462111\") " pod="openstack/manila-66c8-account-create-update-2ts7l" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.054576 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-84776b8f5f-8h2x2" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.056795 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-h6vhg" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.057118 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.059506 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.060266 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-66c8-account-create-update-2ts7l"] Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.066154 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.106043 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-84776b8f5f-8h2x2"] Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.126697 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.128243 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.134324 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.134467 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-92nf4" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.138839 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.139003 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.144193 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-mp7qp" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.155602 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgpzf\" (UniqueName: \"kubernetes.io/projected/080675ca-91da-4c39-a901-fef7f8496220-kube-api-access-kgpzf\") pod \"horizon-84776b8f5f-8h2x2\" (UID: \"080675ca-91da-4c39-a901-fef7f8496220\") " pod="openstack/horizon-84776b8f5f-8h2x2" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.155684 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7nxf\" (UniqueName: \"kubernetes.io/projected/10c22353-10d6-4de5-b438-369773462111-kube-api-access-b7nxf\") pod \"manila-66c8-account-create-update-2ts7l\" (UID: \"10c22353-10d6-4de5-b438-369773462111\") " pod="openstack/manila-66c8-account-create-update-2ts7l" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.155732 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/080675ca-91da-4c39-a901-fef7f8496220-config-data\") pod \"horizon-84776b8f5f-8h2x2\" (UID: \"080675ca-91da-4c39-a901-fef7f8496220\") " pod="openstack/horizon-84776b8f5f-8h2x2" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.155771 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10c22353-10d6-4de5-b438-369773462111-operator-scripts\") pod \"manila-66c8-account-create-update-2ts7l\" (UID: \"10c22353-10d6-4de5-b438-369773462111\") " pod="openstack/manila-66c8-account-create-update-2ts7l" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.155785 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/080675ca-91da-4c39-a901-fef7f8496220-scripts\") pod \"horizon-84776b8f5f-8h2x2\" (UID: \"080675ca-91da-4c39-a901-fef7f8496220\") " pod="openstack/horizon-84776b8f5f-8h2x2" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.155822 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/080675ca-91da-4c39-a901-fef7f8496220-horizon-secret-key\") pod \"horizon-84776b8f5f-8h2x2\" (UID: \"080675ca-91da-4c39-a901-fef7f8496220\") " pod="openstack/horizon-84776b8f5f-8h2x2" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.155862 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/080675ca-91da-4c39-a901-fef7f8496220-logs\") pod \"horizon-84776b8f5f-8h2x2\" (UID: \"080675ca-91da-4c39-a901-fef7f8496220\") " pod="openstack/horizon-84776b8f5f-8h2x2" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.156681 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10c22353-10d6-4de5-b438-369773462111-operator-scripts\") pod \"manila-66c8-account-create-update-2ts7l\" (UID: \"10c22353-10d6-4de5-b438-369773462111\") " pod="openstack/manila-66c8-account-create-update-2ts7l" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.175359 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.189422 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7nxf\" (UniqueName: \"kubernetes.io/projected/10c22353-10d6-4de5-b438-369773462111-kube-api-access-b7nxf\") pod \"manila-66c8-account-create-update-2ts7l\" (UID: \"10c22353-10d6-4de5-b438-369773462111\") " pod="openstack/manila-66c8-account-create-update-2ts7l" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.200058 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.201684 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.218094 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.235420 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-79dbbfc64c-27fss"] Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.236710 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-79dbbfc64c-27fss" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.248126 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.249437 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 22 09:48:21 crc kubenswrapper[4811]: E0122 09:48:21.256955 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ceph combined-ca-bundle config-data glance httpd-run internal-tls-certs kube-api-access-vfx7k logs scripts], unattached volumes=[], failed to process volumes=[ceph combined-ca-bundle config-data glance httpd-run internal-tls-certs kube-api-access-vfx7k logs scripts]: context canceled" pod="openstack/glance-default-internal-api-0" podUID="334471a6-41fb-41ca-9ab7-fbd5fb2621b9" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.257519 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/080675ca-91da-4c39-a901-fef7f8496220-config-data\") pod \"horizon-84776b8f5f-8h2x2\" (UID: \"080675ca-91da-4c39-a901-fef7f8496220\") " pod="openstack/horizon-84776b8f5f-8h2x2" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.257560 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"3e9e74be-0659-45b1-87ae-ddcf867678e0\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.257584 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e9e74be-0659-45b1-87ae-ddcf867678e0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3e9e74be-0659-45b1-87ae-ddcf867678e0\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.257609 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/080675ca-91da-4c39-a901-fef7f8496220-scripts\") pod \"horizon-84776b8f5f-8h2x2\" (UID: \"080675ca-91da-4c39-a901-fef7f8496220\") " pod="openstack/horizon-84776b8f5f-8h2x2" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.257640 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e9e74be-0659-45b1-87ae-ddcf867678e0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3e9e74be-0659-45b1-87ae-ddcf867678e0\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.257685 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e9e74be-0659-45b1-87ae-ddcf867678e0-config-data\") pod \"glance-default-external-api-0\" (UID: \"3e9e74be-0659-45b1-87ae-ddcf867678e0\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.257707 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/080675ca-91da-4c39-a901-fef7f8496220-horizon-secret-key\") pod \"horizon-84776b8f5f-8h2x2\" (UID: \"080675ca-91da-4c39-a901-fef7f8496220\") " pod="openstack/horizon-84776b8f5f-8h2x2" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.257741 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3e9e74be-0659-45b1-87ae-ddcf867678e0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3e9e74be-0659-45b1-87ae-ddcf867678e0\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.257759 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e9e74be-0659-45b1-87ae-ddcf867678e0-scripts\") pod \"glance-default-external-api-0\" (UID: \"3e9e74be-0659-45b1-87ae-ddcf867678e0\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.257777 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/080675ca-91da-4c39-a901-fef7f8496220-logs\") pod \"horizon-84776b8f5f-8h2x2\" (UID: \"080675ca-91da-4c39-a901-fef7f8496220\") " pod="openstack/horizon-84776b8f5f-8h2x2" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.257831 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e9e74be-0659-45b1-87ae-ddcf867678e0-logs\") pod \"glance-default-external-api-0\" (UID: \"3e9e74be-0659-45b1-87ae-ddcf867678e0\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.257857 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgpzf\" (UniqueName: \"kubernetes.io/projected/080675ca-91da-4c39-a901-fef7f8496220-kube-api-access-kgpzf\") pod \"horizon-84776b8f5f-8h2x2\" (UID: \"080675ca-91da-4c39-a901-fef7f8496220\") " pod="openstack/horizon-84776b8f5f-8h2x2" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.257879 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3e9e74be-0659-45b1-87ae-ddcf867678e0-ceph\") pod \"glance-default-external-api-0\" (UID: \"3e9e74be-0659-45b1-87ae-ddcf867678e0\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.257907 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v966h\" (UniqueName: \"kubernetes.io/projected/3e9e74be-0659-45b1-87ae-ddcf867678e0-kube-api-access-v966h\") pod \"glance-default-external-api-0\" (UID: \"3e9e74be-0659-45b1-87ae-ddcf867678e0\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.265297 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/080675ca-91da-4c39-a901-fef7f8496220-config-data\") pod \"horizon-84776b8f5f-8h2x2\" (UID: \"080675ca-91da-4c39-a901-fef7f8496220\") " pod="openstack/horizon-84776b8f5f-8h2x2" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.266156 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/080675ca-91da-4c39-a901-fef7f8496220-scripts\") pod \"horizon-84776b8f5f-8h2x2\" (UID: \"080675ca-91da-4c39-a901-fef7f8496220\") " pod="openstack/horizon-84776b8f5f-8h2x2" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.267085 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/080675ca-91da-4c39-a901-fef7f8496220-logs\") pod \"horizon-84776b8f5f-8h2x2\" (UID: \"080675ca-91da-4c39-a901-fef7f8496220\") " pod="openstack/horizon-84776b8f5f-8h2x2" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.270498 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/080675ca-91da-4c39-a901-fef7f8496220-horizon-secret-key\") pod \"horizon-84776b8f5f-8h2x2\" (UID: \"080675ca-91da-4c39-a901-fef7f8496220\") " pod="openstack/horizon-84776b8f5f-8h2x2" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.313273 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgpzf\" (UniqueName: \"kubernetes.io/projected/080675ca-91da-4c39-a901-fef7f8496220-kube-api-access-kgpzf\") pod \"horizon-84776b8f5f-8h2x2\" (UID: \"080675ca-91da-4c39-a901-fef7f8496220\") " pod="openstack/horizon-84776b8f5f-8h2x2" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.314112 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.326677 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-79dbbfc64c-27fss"] Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.348537 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-66c8-account-create-update-2ts7l" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.364592 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.367015 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/334471a6-41fb-41ca-9ab7-fbd5fb2621b9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"334471a6-41fb-41ca-9ab7-fbd5fb2621b9\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.367048 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d2c00677-42e9-4694-9a73-a020bcb17a98-config-data\") pod \"horizon-79dbbfc64c-27fss\" (UID: \"d2c00677-42e9-4694-9a73-a020bcb17a98\") " pod="openstack/horizon-79dbbfc64c-27fss" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.367078 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e9e74be-0659-45b1-87ae-ddcf867678e0-logs\") pod \"glance-default-external-api-0\" (UID: \"3e9e74be-0659-45b1-87ae-ddcf867678e0\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.367107 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfx7k\" (UniqueName: \"kubernetes.io/projected/334471a6-41fb-41ca-9ab7-fbd5fb2621b9-kube-api-access-vfx7k\") pod \"glance-default-internal-api-0\" (UID: \"334471a6-41fb-41ca-9ab7-fbd5fb2621b9\") " 
pod="openstack/glance-default-internal-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.367139 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3e9e74be-0659-45b1-87ae-ddcf867678e0-ceph\") pod \"glance-default-external-api-0\" (UID: \"3e9e74be-0659-45b1-87ae-ddcf867678e0\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.367185 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/334471a6-41fb-41ca-9ab7-fbd5fb2621b9-logs\") pod \"glance-default-internal-api-0\" (UID: \"334471a6-41fb-41ca-9ab7-fbd5fb2621b9\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.367204 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v966h\" (UniqueName: \"kubernetes.io/projected/3e9e74be-0659-45b1-87ae-ddcf867678e0-kube-api-access-v966h\") pod \"glance-default-external-api-0\" (UID: \"3e9e74be-0659-45b1-87ae-ddcf867678e0\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.367251 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d2c00677-42e9-4694-9a73-a020bcb17a98-horizon-secret-key\") pod \"horizon-79dbbfc64c-27fss\" (UID: \"d2c00677-42e9-4694-9a73-a020bcb17a98\") " pod="openstack/horizon-79dbbfc64c-27fss" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.367275 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d2c00677-42e9-4694-9a73-a020bcb17a98-scripts\") pod \"horizon-79dbbfc64c-27fss\" (UID: \"d2c00677-42e9-4694-9a73-a020bcb17a98\") " pod="openstack/horizon-79dbbfc64c-27fss" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.367291 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/334471a6-41fb-41ca-9ab7-fbd5fb2621b9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"334471a6-41fb-41ca-9ab7-fbd5fb2621b9\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.367314 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"3e9e74be-0659-45b1-87ae-ddcf867678e0\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.367341 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/334471a6-41fb-41ca-9ab7-fbd5fb2621b9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"334471a6-41fb-41ca-9ab7-fbd5fb2621b9\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.367357 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e9e74be-0659-45b1-87ae-ddcf867678e0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3e9e74be-0659-45b1-87ae-ddcf867678e0\") " 
pod="openstack/glance-default-external-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.367375 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"334471a6-41fb-41ca-9ab7-fbd5fb2621b9\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.367390 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e9e74be-0659-45b1-87ae-ddcf867678e0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3e9e74be-0659-45b1-87ae-ddcf867678e0\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.367422 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/334471a6-41fb-41ca-9ab7-fbd5fb2621b9-ceph\") pod \"glance-default-internal-api-0\" (UID: \"334471a6-41fb-41ca-9ab7-fbd5fb2621b9\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.367448 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e9e74be-0659-45b1-87ae-ddcf867678e0-config-data\") pod \"glance-default-external-api-0\" (UID: \"3e9e74be-0659-45b1-87ae-ddcf867678e0\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.367470 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/334471a6-41fb-41ca-9ab7-fbd5fb2621b9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"334471a6-41fb-41ca-9ab7-fbd5fb2621b9\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.367507 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3e9e74be-0659-45b1-87ae-ddcf867678e0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3e9e74be-0659-45b1-87ae-ddcf867678e0\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.367531 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e9e74be-0659-45b1-87ae-ddcf867678e0-scripts\") pod \"glance-default-external-api-0\" (UID: \"3e9e74be-0659-45b1-87ae-ddcf867678e0\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.367573 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9rtv\" (UniqueName: \"kubernetes.io/projected/d2c00677-42e9-4694-9a73-a020bcb17a98-kube-api-access-b9rtv\") pod \"horizon-79dbbfc64c-27fss\" (UID: \"d2c00677-42e9-4694-9a73-a020bcb17a98\") " pod="openstack/horizon-79dbbfc64c-27fss" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.367587 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2c00677-42e9-4694-9a73-a020bcb17a98-logs\") pod \"horizon-79dbbfc64c-27fss\" (UID: \"d2c00677-42e9-4694-9a73-a020bcb17a98\") " 
pod="openstack/horizon-79dbbfc64c-27fss" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.367607 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/334471a6-41fb-41ca-9ab7-fbd5fb2621b9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"334471a6-41fb-41ca-9ab7-fbd5fb2621b9\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: E0122 09:48:21.368836 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ceph combined-ca-bundle config-data glance httpd-run kube-api-access-v966h logs public-tls-certs scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/glance-default-external-api-0" podUID="3e9e74be-0659-45b1-87ae-ddcf867678e0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.369219 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e9e74be-0659-45b1-87ae-ddcf867678e0-logs\") pod \"glance-default-external-api-0\" (UID: \"3e9e74be-0659-45b1-87ae-ddcf867678e0\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.370199 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"3e9e74be-0659-45b1-87ae-ddcf867678e0\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.371408 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3e9e74be-0659-45b1-87ae-ddcf867678e0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3e9e74be-0659-45b1-87ae-ddcf867678e0\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.371885 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e9e74be-0659-45b1-87ae-ddcf867678e0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3e9e74be-0659-45b1-87ae-ddcf867678e0\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.382018 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3e9e74be-0659-45b1-87ae-ddcf867678e0-ceph\") pod \"glance-default-external-api-0\" (UID: \"3e9e74be-0659-45b1-87ae-ddcf867678e0\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.382302 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-84776b8f5f-8h2x2" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.382542 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e9e74be-0659-45b1-87ae-ddcf867678e0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3e9e74be-0659-45b1-87ae-ddcf867678e0\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.383558 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e9e74be-0659-45b1-87ae-ddcf867678e0-config-data\") pod \"glance-default-external-api-0\" (UID: \"3e9e74be-0659-45b1-87ae-ddcf867678e0\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.404214 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"3e9e74be-0659-45b1-87ae-ddcf867678e0\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.406770 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e9e74be-0659-45b1-87ae-ddcf867678e0-scripts\") pod \"glance-default-external-api-0\" (UID: \"3e9e74be-0659-45b1-87ae-ddcf867678e0\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.429025 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v966h\" (UniqueName: \"kubernetes.io/projected/3e9e74be-0659-45b1-87ae-ddcf867678e0-kube-api-access-v966h\") pod \"glance-default-external-api-0\" (UID: \"3e9e74be-0659-45b1-87ae-ddcf867678e0\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.471797 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfx7k\" (UniqueName: \"kubernetes.io/projected/334471a6-41fb-41ca-9ab7-fbd5fb2621b9-kube-api-access-vfx7k\") pod \"glance-default-internal-api-0\" (UID: \"334471a6-41fb-41ca-9ab7-fbd5fb2621b9\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.472019 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/334471a6-41fb-41ca-9ab7-fbd5fb2621b9-logs\") pod \"glance-default-internal-api-0\" (UID: \"334471a6-41fb-41ca-9ab7-fbd5fb2621b9\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.472075 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d2c00677-42e9-4694-9a73-a020bcb17a98-horizon-secret-key\") pod \"horizon-79dbbfc64c-27fss\" (UID: \"d2c00677-42e9-4694-9a73-a020bcb17a98\") " pod="openstack/horizon-79dbbfc64c-27fss" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.472105 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d2c00677-42e9-4694-9a73-a020bcb17a98-scripts\") pod \"horizon-79dbbfc64c-27fss\" (UID: \"d2c00677-42e9-4694-9a73-a020bcb17a98\") " pod="openstack/horizon-79dbbfc64c-27fss" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.472119 4811 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/334471a6-41fb-41ca-9ab7-fbd5fb2621b9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"334471a6-41fb-41ca-9ab7-fbd5fb2621b9\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.472147 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/334471a6-41fb-41ca-9ab7-fbd5fb2621b9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"334471a6-41fb-41ca-9ab7-fbd5fb2621b9\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.472170 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"334471a6-41fb-41ca-9ab7-fbd5fb2621b9\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.472205 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/334471a6-41fb-41ca-9ab7-fbd5fb2621b9-ceph\") pod \"glance-default-internal-api-0\" (UID: \"334471a6-41fb-41ca-9ab7-fbd5fb2621b9\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.472241 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/334471a6-41fb-41ca-9ab7-fbd5fb2621b9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"334471a6-41fb-41ca-9ab7-fbd5fb2621b9\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.472320 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2c00677-42e9-4694-9a73-a020bcb17a98-logs\") pod \"horizon-79dbbfc64c-27fss\" (UID: \"d2c00677-42e9-4694-9a73-a020bcb17a98\") " pod="openstack/horizon-79dbbfc64c-27fss" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.472342 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9rtv\" (UniqueName: \"kubernetes.io/projected/d2c00677-42e9-4694-9a73-a020bcb17a98-kube-api-access-b9rtv\") pod \"horizon-79dbbfc64c-27fss\" (UID: \"d2c00677-42e9-4694-9a73-a020bcb17a98\") " pod="openstack/horizon-79dbbfc64c-27fss" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.472375 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/334471a6-41fb-41ca-9ab7-fbd5fb2621b9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"334471a6-41fb-41ca-9ab7-fbd5fb2621b9\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.472413 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/334471a6-41fb-41ca-9ab7-fbd5fb2621b9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"334471a6-41fb-41ca-9ab7-fbd5fb2621b9\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.472427 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/d2c00677-42e9-4694-9a73-a020bcb17a98-config-data\") pod \"horizon-79dbbfc64c-27fss\" (UID: \"d2c00677-42e9-4694-9a73-a020bcb17a98\") " pod="openstack/horizon-79dbbfc64c-27fss" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.472869 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"334471a6-41fb-41ca-9ab7-fbd5fb2621b9\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.473209 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2c00677-42e9-4694-9a73-a020bcb17a98-logs\") pod \"horizon-79dbbfc64c-27fss\" (UID: \"d2c00677-42e9-4694-9a73-a020bcb17a98\") " pod="openstack/horizon-79dbbfc64c-27fss" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.473609 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/334471a6-41fb-41ca-9ab7-fbd5fb2621b9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"334471a6-41fb-41ca-9ab7-fbd5fb2621b9\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.482816 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/334471a6-41fb-41ca-9ab7-fbd5fb2621b9-logs\") pod \"glance-default-internal-api-0\" (UID: \"334471a6-41fb-41ca-9ab7-fbd5fb2621b9\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.483173 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d2c00677-42e9-4694-9a73-a020bcb17a98-scripts\") pod \"horizon-79dbbfc64c-27fss\" (UID: \"d2c00677-42e9-4694-9a73-a020bcb17a98\") " pod="openstack/horizon-79dbbfc64c-27fss" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.483371 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/334471a6-41fb-41ca-9ab7-fbd5fb2621b9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"334471a6-41fb-41ca-9ab7-fbd5fb2621b9\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.483552 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d2c00677-42e9-4694-9a73-a020bcb17a98-config-data\") pod \"horizon-79dbbfc64c-27fss\" (UID: \"d2c00677-42e9-4694-9a73-a020bcb17a98\") " pod="openstack/horizon-79dbbfc64c-27fss" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.488487 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d2c00677-42e9-4694-9a73-a020bcb17a98-horizon-secret-key\") pod \"horizon-79dbbfc64c-27fss\" (UID: \"d2c00677-42e9-4694-9a73-a020bcb17a98\") " pod="openstack/horizon-79dbbfc64c-27fss" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.489542 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/334471a6-41fb-41ca-9ab7-fbd5fb2621b9-ceph\") pod \"glance-default-internal-api-0\" (UID: \"334471a6-41fb-41ca-9ab7-fbd5fb2621b9\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:21 crc 
kubenswrapper[4811]: I0122 09:48:21.502846 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/334471a6-41fb-41ca-9ab7-fbd5fb2621b9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"334471a6-41fb-41ca-9ab7-fbd5fb2621b9\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.528444 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/334471a6-41fb-41ca-9ab7-fbd5fb2621b9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"334471a6-41fb-41ca-9ab7-fbd5fb2621b9\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.534059 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9rtv\" (UniqueName: \"kubernetes.io/projected/d2c00677-42e9-4694-9a73-a020bcb17a98-kube-api-access-b9rtv\") pod \"horizon-79dbbfc64c-27fss\" (UID: \"d2c00677-42e9-4694-9a73-a020bcb17a98\") " pod="openstack/horizon-79dbbfc64c-27fss" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.536136 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfx7k\" (UniqueName: \"kubernetes.io/projected/334471a6-41fb-41ca-9ab7-fbd5fb2621b9-kube-api-access-vfx7k\") pod \"glance-default-internal-api-0\" (UID: \"334471a6-41fb-41ca-9ab7-fbd5fb2621b9\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.556835 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/334471a6-41fb-41ca-9ab7-fbd5fb2621b9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"334471a6-41fb-41ca-9ab7-fbd5fb2621b9\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.574298 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-79dbbfc64c-27fss" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.631110 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.631285 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.632812 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"334471a6-41fb-41ca-9ab7-fbd5fb2621b9\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.699690 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.722178 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.750639 4811 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.751066 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.789173 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/334471a6-41fb-41ca-9ab7-fbd5fb2621b9-combined-ca-bundle\") pod \"334471a6-41fb-41ca-9ab7-fbd5fb2621b9\" (UID: \"334471a6-41fb-41ca-9ab7-fbd5fb2621b9\") " Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.789401 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/334471a6-41fb-41ca-9ab7-fbd5fb2621b9-config-data\") pod \"334471a6-41fb-41ca-9ab7-fbd5fb2621b9\" (UID: \"334471a6-41fb-41ca-9ab7-fbd5fb2621b9\") " Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.789431 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3e9e74be-0659-45b1-87ae-ddcf867678e0-ceph\") pod \"3e9e74be-0659-45b1-87ae-ddcf867678e0\" (UID: \"3e9e74be-0659-45b1-87ae-ddcf867678e0\") " Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.789449 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"334471a6-41fb-41ca-9ab7-fbd5fb2621b9\" (UID: \"334471a6-41fb-41ca-9ab7-fbd5fb2621b9\") " Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.789480 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/334471a6-41fb-41ca-9ab7-fbd5fb2621b9-httpd-run\") pod \"334471a6-41fb-41ca-9ab7-fbd5fb2621b9\" (UID: \"334471a6-41fb-41ca-9ab7-fbd5fb2621b9\") " Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.789494 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfx7k\" (UniqueName: \"kubernetes.io/projected/334471a6-41fb-41ca-9ab7-fbd5fb2621b9-kube-api-access-vfx7k\") pod \"334471a6-41fb-41ca-9ab7-fbd5fb2621b9\" (UID: \"334471a6-41fb-41ca-9ab7-fbd5fb2621b9\") " Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.789512 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/334471a6-41fb-41ca-9ab7-fbd5fb2621b9-logs\") pod \"334471a6-41fb-41ca-9ab7-fbd5fb2621b9\" (UID: \"334471a6-41fb-41ca-9ab7-fbd5fb2621b9\") " Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.789532 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/334471a6-41fb-41ca-9ab7-fbd5fb2621b9-internal-tls-certs\") pod \"334471a6-41fb-41ca-9ab7-fbd5fb2621b9\" (UID: \"334471a6-41fb-41ca-9ab7-fbd5fb2621b9\") " Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.789577 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/334471a6-41fb-41ca-9ab7-fbd5fb2621b9-ceph\") pod \"334471a6-41fb-41ca-9ab7-fbd5fb2621b9\" (UID: \"334471a6-41fb-41ca-9ab7-fbd5fb2621b9\") " Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.789640 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/334471a6-41fb-41ca-9ab7-fbd5fb2621b9-scripts\") pod \"334471a6-41fb-41ca-9ab7-fbd5fb2621b9\" (UID: \"334471a6-41fb-41ca-9ab7-fbd5fb2621b9\") " Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 
09:48:21.789692 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v966h\" (UniqueName: \"kubernetes.io/projected/3e9e74be-0659-45b1-87ae-ddcf867678e0-kube-api-access-v966h\") pod \"3e9e74be-0659-45b1-87ae-ddcf867678e0\" (UID: \"3e9e74be-0659-45b1-87ae-ddcf867678e0\") " Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.789714 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e9e74be-0659-45b1-87ae-ddcf867678e0-scripts\") pod \"3e9e74be-0659-45b1-87ae-ddcf867678e0\" (UID: \"3e9e74be-0659-45b1-87ae-ddcf867678e0\") " Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.789739 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"3e9e74be-0659-45b1-87ae-ddcf867678e0\" (UID: \"3e9e74be-0659-45b1-87ae-ddcf867678e0\") " Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.789771 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e9e74be-0659-45b1-87ae-ddcf867678e0-config-data\") pod \"3e9e74be-0659-45b1-87ae-ddcf867678e0\" (UID: \"3e9e74be-0659-45b1-87ae-ddcf867678e0\") " Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.789818 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e9e74be-0659-45b1-87ae-ddcf867678e0-combined-ca-bundle\") pod \"3e9e74be-0659-45b1-87ae-ddcf867678e0\" (UID: \"3e9e74be-0659-45b1-87ae-ddcf867678e0\") " Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.789852 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e9e74be-0659-45b1-87ae-ddcf867678e0-public-tls-certs\") pod \"3e9e74be-0659-45b1-87ae-ddcf867678e0\" (UID: \"3e9e74be-0659-45b1-87ae-ddcf867678e0\") " Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.789894 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3e9e74be-0659-45b1-87ae-ddcf867678e0-httpd-run\") pod \"3e9e74be-0659-45b1-87ae-ddcf867678e0\" (UID: \"3e9e74be-0659-45b1-87ae-ddcf867678e0\") " Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.789919 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e9e74be-0659-45b1-87ae-ddcf867678e0-logs\") pod \"3e9e74be-0659-45b1-87ae-ddcf867678e0\" (UID: \"3e9e74be-0659-45b1-87ae-ddcf867678e0\") " Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.790697 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e9e74be-0659-45b1-87ae-ddcf867678e0-logs" (OuterVolumeSpecName: "logs") pod "3e9e74be-0659-45b1-87ae-ddcf867678e0" (UID: "3e9e74be-0659-45b1-87ae-ddcf867678e0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.806324 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/334471a6-41fb-41ca-9ab7-fbd5fb2621b9-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "334471a6-41fb-41ca-9ab7-fbd5fb2621b9" (UID: "334471a6-41fb-41ca-9ab7-fbd5fb2621b9"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.811270 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e9e74be-0659-45b1-87ae-ddcf867678e0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3e9e74be-0659-45b1-87ae-ddcf867678e0" (UID: "3e9e74be-0659-45b1-87ae-ddcf867678e0"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.814481 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/334471a6-41fb-41ca-9ab7-fbd5fb2621b9-logs" (OuterVolumeSpecName: "logs") pod "334471a6-41fb-41ca-9ab7-fbd5fb2621b9" (UID: "334471a6-41fb-41ca-9ab7-fbd5fb2621b9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.818183 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e9e74be-0659-45b1-87ae-ddcf867678e0-config-data" (OuterVolumeSpecName: "config-data") pod "3e9e74be-0659-45b1-87ae-ddcf867678e0" (UID: "3e9e74be-0659-45b1-87ae-ddcf867678e0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.818314 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/334471a6-41fb-41ca-9ab7-fbd5fb2621b9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "334471a6-41fb-41ca-9ab7-fbd5fb2621b9" (UID: "334471a6-41fb-41ca-9ab7-fbd5fb2621b9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.834786 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/334471a6-41fb-41ca-9ab7-fbd5fb2621b9-config-data" (OuterVolumeSpecName: "config-data") pod "334471a6-41fb-41ca-9ab7-fbd5fb2621b9" (UID: "334471a6-41fb-41ca-9ab7-fbd5fb2621b9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.834895 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/334471a6-41fb-41ca-9ab7-fbd5fb2621b9-ceph" (OuterVolumeSpecName: "ceph") pod "334471a6-41fb-41ca-9ab7-fbd5fb2621b9" (UID: "334471a6-41fb-41ca-9ab7-fbd5fb2621b9"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.834953 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e9e74be-0659-45b1-87ae-ddcf867678e0-ceph" (OuterVolumeSpecName: "ceph") pod "3e9e74be-0659-45b1-87ae-ddcf867678e0" (UID: "3e9e74be-0659-45b1-87ae-ddcf867678e0"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.838303 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "3e9e74be-0659-45b1-87ae-ddcf867678e0" (UID: "3e9e74be-0659-45b1-87ae-ddcf867678e0"). InnerVolumeSpecName "local-storage04-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.841892 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e9e74be-0659-45b1-87ae-ddcf867678e0-scripts" (OuterVolumeSpecName: "scripts") pod "3e9e74be-0659-45b1-87ae-ddcf867678e0" (UID: "3e9e74be-0659-45b1-87ae-ddcf867678e0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.842912 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e9e74be-0659-45b1-87ae-ddcf867678e0-kube-api-access-v966h" (OuterVolumeSpecName: "kube-api-access-v966h") pod "3e9e74be-0659-45b1-87ae-ddcf867678e0" (UID: "3e9e74be-0659-45b1-87ae-ddcf867678e0"). InnerVolumeSpecName "kube-api-access-v966h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.843027 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e9e74be-0659-45b1-87ae-ddcf867678e0-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3e9e74be-0659-45b1-87ae-ddcf867678e0" (UID: "3e9e74be-0659-45b1-87ae-ddcf867678e0"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.843121 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/334471a6-41fb-41ca-9ab7-fbd5fb2621b9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "334471a6-41fb-41ca-9ab7-fbd5fb2621b9" (UID: "334471a6-41fb-41ca-9ab7-fbd5fb2621b9"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.843213 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "334471a6-41fb-41ca-9ab7-fbd5fb2621b9" (UID: "334471a6-41fb-41ca-9ab7-fbd5fb2621b9"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 22 09:48:21 crc kubenswrapper[4811]: I0122 09:48:21.843728 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e9e74be-0659-45b1-87ae-ddcf867678e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e9e74be-0659-45b1-87ae-ddcf867678e0" (UID: "3e9e74be-0659-45b1-87ae-ddcf867678e0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:21.878356 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/334471a6-41fb-41ca-9ab7-fbd5fb2621b9-scripts" (OuterVolumeSpecName: "scripts") pod "334471a6-41fb-41ca-9ab7-fbd5fb2621b9" (UID: "334471a6-41fb-41ca-9ab7-fbd5fb2621b9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:21.892148 4811 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/334471a6-41fb-41ca-9ab7-fbd5fb2621b9-ceph\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:21.892168 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/334471a6-41fb-41ca-9ab7-fbd5fb2621b9-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:21.892177 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v966h\" (UniqueName: \"kubernetes.io/projected/3e9e74be-0659-45b1-87ae-ddcf867678e0-kube-api-access-v966h\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:21.892187 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e9e74be-0659-45b1-87ae-ddcf867678e0-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:21.892213 4811 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:21.892222 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e9e74be-0659-45b1-87ae-ddcf867678e0-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:21.892230 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e9e74be-0659-45b1-87ae-ddcf867678e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:21.892238 4811 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e9e74be-0659-45b1-87ae-ddcf867678e0-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:21.892245 4811 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3e9e74be-0659-45b1-87ae-ddcf867678e0-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:21.892252 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e9e74be-0659-45b1-87ae-ddcf867678e0-logs\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:21.892261 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/334471a6-41fb-41ca-9ab7-fbd5fb2621b9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:21.892269 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/334471a6-41fb-41ca-9ab7-fbd5fb2621b9-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:21.892275 4811 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3e9e74be-0659-45b1-87ae-ddcf867678e0-ceph\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:21.892286 4811 reconciler_common.go:286] "operationExecutor.UnmountDevice 
started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:21.892293 4811 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/334471a6-41fb-41ca-9ab7-fbd5fb2621b9-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:21.892303 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/334471a6-41fb-41ca-9ab7-fbd5fb2621b9-logs\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:21.892311 4811 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/334471a6-41fb-41ca-9ab7-fbd5fb2621b9-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:21.902673 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/334471a6-41fb-41ca-9ab7-fbd5fb2621b9-kube-api-access-vfx7k" (OuterVolumeSpecName: "kube-api-access-vfx7k") pod "334471a6-41fb-41ca-9ab7-fbd5fb2621b9" (UID: "334471a6-41fb-41ca-9ab7-fbd5fb2621b9"). InnerVolumeSpecName "kube-api-access-vfx7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:21.920155 4811 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:21.966835 4811 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:21.994218 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfx7k\" (UniqueName: \"kubernetes.io/projected/334471a6-41fb-41ca-9ab7-fbd5fb2621b9-kube-api-access-vfx7k\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:21.994245 4811 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:21.994255 4811 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.023941 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-mp7qp"] Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.213339 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 22 09:48:22 crc kubenswrapper[4811]: W0122 09:48:22.218739 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3a4d222_4fbd_4c90_9bb0_d787f257d7c0.slice/crio-e4f6b8960caa91f20a4e13212da3544e199fa834f826ece61d2c353c1284143f WatchSource:0}: Error finding container e4f6b8960caa91f20a4e13212da3544e199fa834f826ece61d2c353c1284143f: Status 404 returned error can't find the container with id e4f6b8960caa91f20a4e13212da3544e199fa834f826ece61d2c353c1284143f Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.637796 4811 generic.go:334] "Generic 
(PLEG): container finished" podID="60012e62-6a64-4ac5-8d6b-9fc52699dad4" containerID="14042f42d138a0c5f60aca5a157b5572db5aa79559a0d2953032432700f4041b" exitCode=0 Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.638014 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-mp7qp" event={"ID":"60012e62-6a64-4ac5-8d6b-9fc52699dad4","Type":"ContainerDied","Data":"14042f42d138a0c5f60aca5a157b5572db5aa79559a0d2953032432700f4041b"} Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.638042 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-mp7qp" event={"ID":"60012e62-6a64-4ac5-8d6b-9fc52699dad4","Type":"ContainerStarted","Data":"a17228c456610ce56f9f85504ca2e375b001d959149d0ace5e57942f005681a2"} Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.639983 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0","Type":"ContainerStarted","Data":"e4f6b8960caa91f20a4e13212da3544e199fa834f826ece61d2c353c1284143f"} Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.642343 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.642400 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.642334 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"747abd8a-15a3-42fe-b8bd-a74f2e03c00c","Type":"ContainerStarted","Data":"5983c030372bc35ebfe77a38ea3280c0f97c2f1194b2bfc6e028ab65ff531aa0"} Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.690323 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.709675 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.749270 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.750670 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.754002 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.754289 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.754430 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-92nf4" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.754553 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.777327 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.785086 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.790647 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.815420 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.816965 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.817685 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"a7336bf4-f489-4e7e-a736-49be04734cc3\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.817741 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7336bf4-f489-4e7e-a736-49be04734cc3-logs\") pod \"glance-default-internal-api-0\" (UID: \"a7336bf4-f489-4e7e-a736-49be04734cc3\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.817791 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7336bf4-f489-4e7e-a736-49be04734cc3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a7336bf4-f489-4e7e-a736-49be04734cc3\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.817850 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7336bf4-f489-4e7e-a736-49be04734cc3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a7336bf4-f489-4e7e-a736-49be04734cc3\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.817893 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv9kg\" (UniqueName: \"kubernetes.io/projected/a7336bf4-f489-4e7e-a736-49be04734cc3-kube-api-access-pv9kg\") pod \"glance-default-internal-api-0\" (UID: 
\"a7336bf4-f489-4e7e-a736-49be04734cc3\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.817932 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a7336bf4-f489-4e7e-a736-49be04734cc3-ceph\") pod \"glance-default-internal-api-0\" (UID: \"a7336bf4-f489-4e7e-a736-49be04734cc3\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.817988 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7336bf4-f489-4e7e-a736-49be04734cc3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a7336bf4-f489-4e7e-a736-49be04734cc3\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.818007 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7336bf4-f489-4e7e-a736-49be04734cc3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a7336bf4-f489-4e7e-a736-49be04734cc3\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.818023 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a7336bf4-f489-4e7e-a736-49be04734cc3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a7336bf4-f489-4e7e-a736-49be04734cc3\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.818984 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.819158 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.833390 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.841148 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-66c8-account-create-update-2ts7l"] Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.853012 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-84776b8f5f-8h2x2"] Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.858239 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-79dbbfc64c-27fss"] Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.919828 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7336bf4-f489-4e7e-a736-49be04734cc3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a7336bf4-f489-4e7e-a736-49be04734cc3\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.919981 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7336bf4-f489-4e7e-a736-49be04734cc3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a7336bf4-f489-4e7e-a736-49be04734cc3\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.920070 
4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a7336bf4-f489-4e7e-a736-49be04734cc3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a7336bf4-f489-4e7e-a736-49be04734cc3\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.920549 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a988491b-cfb6-4a81-8da5-e3d84621b668-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a988491b-cfb6-4a81-8da5-e3d84621b668\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.920696 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a988491b-cfb6-4a81-8da5-e3d84621b668-logs\") pod \"glance-default-external-api-0\" (UID: \"a988491b-cfb6-4a81-8da5-e3d84621b668\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.920781 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fpnw\" (UniqueName: \"kubernetes.io/projected/a988491b-cfb6-4a81-8da5-e3d84621b668-kube-api-access-4fpnw\") pod \"glance-default-external-api-0\" (UID: \"a988491b-cfb6-4a81-8da5-e3d84621b668\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.920863 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a988491b-cfb6-4a81-8da5-e3d84621b668-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a988491b-cfb6-4a81-8da5-e3d84621b668\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.920496 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a7336bf4-f489-4e7e-a736-49be04734cc3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a7336bf4-f489-4e7e-a736-49be04734cc3\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.921082 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"a988491b-cfb6-4a81-8da5-e3d84621b668\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.921181 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a988491b-cfb6-4a81-8da5-e3d84621b668-config-data\") pod \"glance-default-external-api-0\" (UID: \"a988491b-cfb6-4a81-8da5-e3d84621b668\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.921276 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"a7336bf4-f489-4e7e-a736-49be04734cc3\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.921350 4811 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7336bf4-f489-4e7e-a736-49be04734cc3-logs\") pod \"glance-default-internal-api-0\" (UID: \"a7336bf4-f489-4e7e-a736-49be04734cc3\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.921452 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7336bf4-f489-4e7e-a736-49be04734cc3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a7336bf4-f489-4e7e-a736-49be04734cc3\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.921594 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"a7336bf4-f489-4e7e-a736-49be04734cc3\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.921732 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7336bf4-f489-4e7e-a736-49be04734cc3-logs\") pod \"glance-default-internal-api-0\" (UID: \"a7336bf4-f489-4e7e-a736-49be04734cc3\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.923199 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a988491b-cfb6-4a81-8da5-e3d84621b668-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a988491b-cfb6-4a81-8da5-e3d84621b668\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.923372 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7336bf4-f489-4e7e-a736-49be04734cc3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a7336bf4-f489-4e7e-a736-49be04734cc3\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.923446 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv9kg\" (UniqueName: \"kubernetes.io/projected/a7336bf4-f489-4e7e-a736-49be04734cc3-kube-api-access-pv9kg\") pod \"glance-default-internal-api-0\" (UID: \"a7336bf4-f489-4e7e-a736-49be04734cc3\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.923471 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a988491b-cfb6-4a81-8da5-e3d84621b668-scripts\") pod \"glance-default-external-api-0\" (UID: \"a988491b-cfb6-4a81-8da5-e3d84621b668\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.923528 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a7336bf4-f489-4e7e-a736-49be04734cc3-ceph\") pod \"glance-default-internal-api-0\" (UID: \"a7336bf4-f489-4e7e-a736-49be04734cc3\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.923569 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a988491b-cfb6-4a81-8da5-e3d84621b668-ceph\") pod \"glance-default-external-api-0\" (UID: \"a988491b-cfb6-4a81-8da5-e3d84621b668\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.928256 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a7336bf4-f489-4e7e-a736-49be04734cc3-ceph\") pod \"glance-default-internal-api-0\" (UID: \"a7336bf4-f489-4e7e-a736-49be04734cc3\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.931489 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7336bf4-f489-4e7e-a736-49be04734cc3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a7336bf4-f489-4e7e-a736-49be04734cc3\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.939068 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7336bf4-f489-4e7e-a736-49be04734cc3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a7336bf4-f489-4e7e-a736-49be04734cc3\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.942329 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv9kg\" (UniqueName: \"kubernetes.io/projected/a7336bf4-f489-4e7e-a736-49be04734cc3-kube-api-access-pv9kg\") pod \"glance-default-internal-api-0\" (UID: \"a7336bf4-f489-4e7e-a736-49be04734cc3\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.943911 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"a7336bf4-f489-4e7e-a736-49be04734cc3\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.945209 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7336bf4-f489-4e7e-a736-49be04734cc3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a7336bf4-f489-4e7e-a736-49be04734cc3\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:22 crc kubenswrapper[4811]: I0122 09:48:22.948231 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7336bf4-f489-4e7e-a736-49be04734cc3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a7336bf4-f489-4e7e-a736-49be04734cc3\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:23 crc kubenswrapper[4811]: I0122 09:48:23.024816 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a988491b-cfb6-4a81-8da5-e3d84621b668-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a988491b-cfb6-4a81-8da5-e3d84621b668\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:23 crc kubenswrapper[4811]: I0122 09:48:23.025006 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/a988491b-cfb6-4a81-8da5-e3d84621b668-logs\") pod \"glance-default-external-api-0\" (UID: \"a988491b-cfb6-4a81-8da5-e3d84621b668\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:23 crc kubenswrapper[4811]: I0122 09:48:23.025059 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fpnw\" (UniqueName: \"kubernetes.io/projected/a988491b-cfb6-4a81-8da5-e3d84621b668-kube-api-access-4fpnw\") pod \"glance-default-external-api-0\" (UID: \"a988491b-cfb6-4a81-8da5-e3d84621b668\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:23 crc kubenswrapper[4811]: I0122 09:48:23.025095 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a988491b-cfb6-4a81-8da5-e3d84621b668-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a988491b-cfb6-4a81-8da5-e3d84621b668\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:23 crc kubenswrapper[4811]: I0122 09:48:23.025135 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"a988491b-cfb6-4a81-8da5-e3d84621b668\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:23 crc kubenswrapper[4811]: I0122 09:48:23.025157 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a988491b-cfb6-4a81-8da5-e3d84621b668-config-data\") pod \"glance-default-external-api-0\" (UID: \"a988491b-cfb6-4a81-8da5-e3d84621b668\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:23 crc kubenswrapper[4811]: I0122 09:48:23.025236 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a988491b-cfb6-4a81-8da5-e3d84621b668-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a988491b-cfb6-4a81-8da5-e3d84621b668\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:23 crc kubenswrapper[4811]: I0122 09:48:23.025288 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a988491b-cfb6-4a81-8da5-e3d84621b668-scripts\") pod \"glance-default-external-api-0\" (UID: \"a988491b-cfb6-4a81-8da5-e3d84621b668\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:23 crc kubenswrapper[4811]: I0122 09:48:23.025338 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a988491b-cfb6-4a81-8da5-e3d84621b668-ceph\") pod \"glance-default-external-api-0\" (UID: \"a988491b-cfb6-4a81-8da5-e3d84621b668\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:23 crc kubenswrapper[4811]: I0122 09:48:23.029248 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a988491b-cfb6-4a81-8da5-e3d84621b668-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a988491b-cfb6-4a81-8da5-e3d84621b668\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:23 crc kubenswrapper[4811]: I0122 09:48:23.030479 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a988491b-cfb6-4a81-8da5-e3d84621b668-logs\") pod \"glance-default-external-api-0\" (UID: 
\"a988491b-cfb6-4a81-8da5-e3d84621b668\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:23 crc kubenswrapper[4811]: I0122 09:48:23.032157 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"a988491b-cfb6-4a81-8da5-e3d84621b668\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0" Jan 22 09:48:23 crc kubenswrapper[4811]: I0122 09:48:23.035920 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a988491b-cfb6-4a81-8da5-e3d84621b668-scripts\") pod \"glance-default-external-api-0\" (UID: \"a988491b-cfb6-4a81-8da5-e3d84621b668\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:23 crc kubenswrapper[4811]: I0122 09:48:23.036952 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a988491b-cfb6-4a81-8da5-e3d84621b668-config-data\") pod \"glance-default-external-api-0\" (UID: \"a988491b-cfb6-4a81-8da5-e3d84621b668\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:23 crc kubenswrapper[4811]: I0122 09:48:23.037282 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a988491b-cfb6-4a81-8da5-e3d84621b668-ceph\") pod \"glance-default-external-api-0\" (UID: \"a988491b-cfb6-4a81-8da5-e3d84621b668\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:23 crc kubenswrapper[4811]: I0122 09:48:23.038110 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a988491b-cfb6-4a81-8da5-e3d84621b668-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a988491b-cfb6-4a81-8da5-e3d84621b668\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:23 crc kubenswrapper[4811]: I0122 09:48:23.043634 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a988491b-cfb6-4a81-8da5-e3d84621b668-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a988491b-cfb6-4a81-8da5-e3d84621b668\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:23 crc kubenswrapper[4811]: I0122 09:48:23.046214 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fpnw\" (UniqueName: \"kubernetes.io/projected/a988491b-cfb6-4a81-8da5-e3d84621b668-kube-api-access-4fpnw\") pod \"glance-default-external-api-0\" (UID: \"a988491b-cfb6-4a81-8da5-e3d84621b668\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:23 crc kubenswrapper[4811]: I0122 09:48:23.064877 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"a988491b-cfb6-4a81-8da5-e3d84621b668\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:23 crc kubenswrapper[4811]: I0122 09:48:23.111005 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 09:48:23 crc kubenswrapper[4811]: I0122 09:48:23.144202 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 09:48:23 crc kubenswrapper[4811]: I0122 09:48:23.702226 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-84776b8f5f-8h2x2"] Jan 22 09:48:23 crc kubenswrapper[4811]: I0122 09:48:23.765125 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-d6c54dc74-r2gjr"] Jan 22 09:48:23 crc kubenswrapper[4811]: I0122 09:48:23.835891 4811 generic.go:334] "Generic (PLEG): container finished" podID="10c22353-10d6-4de5-b438-369773462111" containerID="e0fed3130963339053b6efb6a796a5e782059b3227ebde0d00e159a45f82532c" exitCode=0 Jan 22 09:48:23 crc kubenswrapper[4811]: I0122 09:48:23.853486 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84776b8f5f-8h2x2" event={"ID":"080675ca-91da-4c39-a901-fef7f8496220","Type":"ContainerStarted","Data":"96a10a839f606201c36c8209ef85468ddb61a4f6052902d2d13a6640d0c3e388"} Jan 22 09:48:23 crc kubenswrapper[4811]: I0122 09:48:23.861649 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"747abd8a-15a3-42fe-b8bd-a74f2e03c00c","Type":"ContainerStarted","Data":"cf5716458c883ed336376644daa25aaa52f9b36258c079d76186fedef1b89cfb"} Jan 22 09:48:23 crc kubenswrapper[4811]: I0122 09:48:23.861684 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 09:48:23 crc kubenswrapper[4811]: I0122 09:48:23.861710 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"747abd8a-15a3-42fe-b8bd-a74f2e03c00c","Type":"ContainerStarted","Data":"be448bfc898a9ad8b96fad3c90450afbf53e17ae9d35dca1224d6eda0c8a6192"} Jan 22 09:48:23 crc kubenswrapper[4811]: I0122 09:48:23.861719 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-66c8-account-create-update-2ts7l" event={"ID":"10c22353-10d6-4de5-b438-369773462111","Type":"ContainerDied","Data":"e0fed3130963339053b6efb6a796a5e782059b3227ebde0d00e159a45f82532c"} Jan 22 09:48:23 crc kubenswrapper[4811]: I0122 09:48:23.861755 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-66c8-account-create-update-2ts7l" event={"ID":"10c22353-10d6-4de5-b438-369773462111","Type":"ContainerStarted","Data":"0ddbc57922a22e1aea1de9433f6e95d9ac8afd58dfd7334b4a107592a0df27ea"} Jan 22 09:48:23 crc kubenswrapper[4811]: I0122 09:48:23.861767 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79dbbfc64c-27fss" event={"ID":"d2c00677-42e9-4694-9a73-a020bcb17a98","Type":"ContainerStarted","Data":"52dcbce182134ba88ce993c9ce5850dd7cc5002d2c67d1b08d7f37eae5cbe022"} Jan 22 09:48:23 crc kubenswrapper[4811]: I0122 09:48:23.859159 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-d6c54dc74-r2gjr" Jan 22 09:48:23 crc kubenswrapper[4811]: I0122 09:48:23.869644 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 22 09:48:23 crc kubenswrapper[4811]: I0122 09:48:23.964217 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-d6c54dc74-r2gjr"] Jan 22 09:48:23 crc kubenswrapper[4811]: I0122 09:48:23.980140 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-79dbbfc64c-27fss"] Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.004547 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/660d9785-0f9b-4953-af76-580ed227c244-combined-ca-bundle\") pod \"horizon-d6c54dc74-r2gjr\" (UID: \"660d9785-0f9b-4953-af76-580ed227c244\") " pod="openstack/horizon-d6c54dc74-r2gjr" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.004591 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/660d9785-0f9b-4953-af76-580ed227c244-horizon-secret-key\") pod \"horizon-d6c54dc74-r2gjr\" (UID: \"660d9785-0f9b-4953-af76-580ed227c244\") " pod="openstack/horizon-d6c54dc74-r2gjr" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.004699 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4dj8\" (UniqueName: \"kubernetes.io/projected/660d9785-0f9b-4953-af76-580ed227c244-kube-api-access-b4dj8\") pod \"horizon-d6c54dc74-r2gjr\" (UID: \"660d9785-0f9b-4953-af76-580ed227c244\") " pod="openstack/horizon-d6c54dc74-r2gjr" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.004716 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/660d9785-0f9b-4953-af76-580ed227c244-logs\") pod \"horizon-d6c54dc74-r2gjr\" (UID: \"660d9785-0f9b-4953-af76-580ed227c244\") " pod="openstack/horizon-d6c54dc74-r2gjr" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.004731 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/660d9785-0f9b-4953-af76-580ed227c244-config-data\") pod \"horizon-d6c54dc74-r2gjr\" (UID: \"660d9785-0f9b-4953-af76-580ed227c244\") " pod="openstack/horizon-d6c54dc74-r2gjr" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.004767 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/660d9785-0f9b-4953-af76-580ed227c244-horizon-tls-certs\") pod \"horizon-d6c54dc74-r2gjr\" (UID: \"660d9785-0f9b-4953-af76-580ed227c244\") " pod="openstack/horizon-d6c54dc74-r2gjr" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.004801 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/660d9785-0f9b-4953-af76-580ed227c244-scripts\") pod \"horizon-d6c54dc74-r2gjr\" (UID: \"660d9785-0f9b-4953-af76-580ed227c244\") " pod="openstack/horizon-d6c54dc74-r2gjr" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.029502 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=2.780411434 podStartE2EDuration="4.029486647s" 
podCreationTimestamp="2026-01-22 09:48:20 +0000 UTC" firstStartedPulling="2026-01-22 09:48:21.750424197 +0000 UTC m=+2546.072611320" lastFinishedPulling="2026-01-22 09:48:22.99949941 +0000 UTC m=+2547.321686533" observedRunningTime="2026-01-22 09:48:23.886243483 +0000 UTC m=+2548.208430606" watchObservedRunningTime="2026-01-22 09:48:24.029486647 +0000 UTC m=+2548.351673770" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.044223 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="334471a6-41fb-41ca-9ab7-fbd5fb2621b9" path="/var/lib/kubelet/pods/334471a6-41fb-41ca-9ab7-fbd5fb2621b9/volumes" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.044761 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e9e74be-0659-45b1-87ae-ddcf867678e0" path="/var/lib/kubelet/pods/3e9e74be-0659-45b1-87ae-ddcf867678e0/volumes" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.045124 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-f75b46fc8-4l2b8"] Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.046322 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.046338 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f75b46fc8-4l2b8"] Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.046401 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f75b46fc8-4l2b8" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.106494 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4dj8\" (UniqueName: \"kubernetes.io/projected/660d9785-0f9b-4953-af76-580ed227c244-kube-api-access-b4dj8\") pod \"horizon-d6c54dc74-r2gjr\" (UID: \"660d9785-0f9b-4953-af76-580ed227c244\") " pod="openstack/horizon-d6c54dc74-r2gjr" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.106549 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/660d9785-0f9b-4953-af76-580ed227c244-logs\") pod \"horizon-d6c54dc74-r2gjr\" (UID: \"660d9785-0f9b-4953-af76-580ed227c244\") " pod="openstack/horizon-d6c54dc74-r2gjr" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.106571 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/660d9785-0f9b-4953-af76-580ed227c244-config-data\") pod \"horizon-d6c54dc74-r2gjr\" (UID: \"660d9785-0f9b-4953-af76-580ed227c244\") " pod="openstack/horizon-d6c54dc74-r2gjr" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.106685 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/660d9785-0f9b-4953-af76-580ed227c244-horizon-tls-certs\") pod \"horizon-d6c54dc74-r2gjr\" (UID: \"660d9785-0f9b-4953-af76-580ed227c244\") " pod="openstack/horizon-d6c54dc74-r2gjr" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.106744 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/660d9785-0f9b-4953-af76-580ed227c244-scripts\") pod \"horizon-d6c54dc74-r2gjr\" (UID: \"660d9785-0f9b-4953-af76-580ed227c244\") " pod="openstack/horizon-d6c54dc74-r2gjr" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.106851 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/660d9785-0f9b-4953-af76-580ed227c244-combined-ca-bundle\") pod \"horizon-d6c54dc74-r2gjr\" (UID: \"660d9785-0f9b-4953-af76-580ed227c244\") " pod="openstack/horizon-d6c54dc74-r2gjr" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.106903 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/660d9785-0f9b-4953-af76-580ed227c244-horizon-secret-key\") pod \"horizon-d6c54dc74-r2gjr\" (UID: \"660d9785-0f9b-4953-af76-580ed227c244\") " pod="openstack/horizon-d6c54dc74-r2gjr" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.108608 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/660d9785-0f9b-4953-af76-580ed227c244-scripts\") pod \"horizon-d6c54dc74-r2gjr\" (UID: \"660d9785-0f9b-4953-af76-580ed227c244\") " pod="openstack/horizon-d6c54dc74-r2gjr" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.109540 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/660d9785-0f9b-4953-af76-580ed227c244-logs\") pod \"horizon-d6c54dc74-r2gjr\" (UID: \"660d9785-0f9b-4953-af76-580ed227c244\") " pod="openstack/horizon-d6c54dc74-r2gjr" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.110019 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/660d9785-0f9b-4953-af76-580ed227c244-config-data\") pod \"horizon-d6c54dc74-r2gjr\" (UID: \"660d9785-0f9b-4953-af76-580ed227c244\") " pod="openstack/horizon-d6c54dc74-r2gjr" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.114934 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.137041 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.137080 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/660d9785-0f9b-4953-af76-580ed227c244-combined-ca-bundle\") pod \"horizon-d6c54dc74-r2gjr\" (UID: \"660d9785-0f9b-4953-af76-580ed227c244\") " pod="openstack/horizon-d6c54dc74-r2gjr" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.137476 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/660d9785-0f9b-4953-af76-580ed227c244-horizon-secret-key\") pod \"horizon-d6c54dc74-r2gjr\" (UID: \"660d9785-0f9b-4953-af76-580ed227c244\") " pod="openstack/horizon-d6c54dc74-r2gjr" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.155866 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/660d9785-0f9b-4953-af76-580ed227c244-horizon-tls-certs\") pod \"horizon-d6c54dc74-r2gjr\" (UID: \"660d9785-0f9b-4953-af76-580ed227c244\") " pod="openstack/horizon-d6c54dc74-r2gjr" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.161008 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4dj8\" (UniqueName: \"kubernetes.io/projected/660d9785-0f9b-4953-af76-580ed227c244-kube-api-access-b4dj8\") pod \"horizon-d6c54dc74-r2gjr\" (UID: \"660d9785-0f9b-4953-af76-580ed227c244\") " pod="openstack/horizon-d6c54dc74-r2gjr" Jan 22 09:48:24 
crc kubenswrapper[4811]: I0122 09:48:24.205575 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-d6c54dc74-r2gjr" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.209293 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c9eef01-268a-4d3c-b3c3-f30cd80694e0-combined-ca-bundle\") pod \"horizon-f75b46fc8-4l2b8\" (UID: \"9c9eef01-268a-4d3c-b3c3-f30cd80694e0\") " pod="openstack/horizon-f75b46fc8-4l2b8" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.209686 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c9eef01-268a-4d3c-b3c3-f30cd80694e0-config-data\") pod \"horizon-f75b46fc8-4l2b8\" (UID: \"9c9eef01-268a-4d3c-b3c3-f30cd80694e0\") " pod="openstack/horizon-f75b46fc8-4l2b8" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.209813 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c9eef01-268a-4d3c-b3c3-f30cd80694e0-horizon-tls-certs\") pod \"horizon-f75b46fc8-4l2b8\" (UID: \"9c9eef01-268a-4d3c-b3c3-f30cd80694e0\") " pod="openstack/horizon-f75b46fc8-4l2b8" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.209847 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9c9eef01-268a-4d3c-b3c3-f30cd80694e0-horizon-secret-key\") pod \"horizon-f75b46fc8-4l2b8\" (UID: \"9c9eef01-268a-4d3c-b3c3-f30cd80694e0\") " pod="openstack/horizon-f75b46fc8-4l2b8" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.209941 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c9eef01-268a-4d3c-b3c3-f30cd80694e0-logs\") pod \"horizon-f75b46fc8-4l2b8\" (UID: \"9c9eef01-268a-4d3c-b3c3-f30cd80694e0\") " pod="openstack/horizon-f75b46fc8-4l2b8" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.209992 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c9eef01-268a-4d3c-b3c3-f30cd80694e0-scripts\") pod \"horizon-f75b46fc8-4l2b8\" (UID: \"9c9eef01-268a-4d3c-b3c3-f30cd80694e0\") " pod="openstack/horizon-f75b46fc8-4l2b8" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.210012 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86kv7\" (UniqueName: \"kubernetes.io/projected/9c9eef01-268a-4d3c-b3c3-f30cd80694e0-kube-api-access-86kv7\") pod \"horizon-f75b46fc8-4l2b8\" (UID: \"9c9eef01-268a-4d3c-b3c3-f30cd80694e0\") " pod="openstack/horizon-f75b46fc8-4l2b8" Jan 22 09:48:24 crc kubenswrapper[4811]: W0122 09:48:24.210681 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7336bf4_f489_4e7e_a736_49be04734cc3.slice/crio-b564bdcffbc0782e91e29734040357c33a68a4c8bb0306c5f6644ce6aa9fe6c4 WatchSource:0}: Error finding container b564bdcffbc0782e91e29734040357c33a68a4c8bb0306c5f6644ce6aa9fe6c4: Status 404 returned error can't find the container with id b564bdcffbc0782e91e29734040357c33a68a4c8bb0306c5f6644ce6aa9fe6c4 Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.308058 4811 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-mp7qp" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.317822 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c9eef01-268a-4d3c-b3c3-f30cd80694e0-logs\") pod \"horizon-f75b46fc8-4l2b8\" (UID: \"9c9eef01-268a-4d3c-b3c3-f30cd80694e0\") " pod="openstack/horizon-f75b46fc8-4l2b8" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.317869 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c9eef01-268a-4d3c-b3c3-f30cd80694e0-scripts\") pod \"horizon-f75b46fc8-4l2b8\" (UID: \"9c9eef01-268a-4d3c-b3c3-f30cd80694e0\") " pod="openstack/horizon-f75b46fc8-4l2b8" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.317890 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86kv7\" (UniqueName: \"kubernetes.io/projected/9c9eef01-268a-4d3c-b3c3-f30cd80694e0-kube-api-access-86kv7\") pod \"horizon-f75b46fc8-4l2b8\" (UID: \"9c9eef01-268a-4d3c-b3c3-f30cd80694e0\") " pod="openstack/horizon-f75b46fc8-4l2b8" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.317925 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c9eef01-268a-4d3c-b3c3-f30cd80694e0-combined-ca-bundle\") pod \"horizon-f75b46fc8-4l2b8\" (UID: \"9c9eef01-268a-4d3c-b3c3-f30cd80694e0\") " pod="openstack/horizon-f75b46fc8-4l2b8" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.318017 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c9eef01-268a-4d3c-b3c3-f30cd80694e0-config-data\") pod \"horizon-f75b46fc8-4l2b8\" (UID: \"9c9eef01-268a-4d3c-b3c3-f30cd80694e0\") " pod="openstack/horizon-f75b46fc8-4l2b8" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.318055 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c9eef01-268a-4d3c-b3c3-f30cd80694e0-horizon-tls-certs\") pod \"horizon-f75b46fc8-4l2b8\" (UID: \"9c9eef01-268a-4d3c-b3c3-f30cd80694e0\") " pod="openstack/horizon-f75b46fc8-4l2b8" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.318076 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9c9eef01-268a-4d3c-b3c3-f30cd80694e0-horizon-secret-key\") pod \"horizon-f75b46fc8-4l2b8\" (UID: \"9c9eef01-268a-4d3c-b3c3-f30cd80694e0\") " pod="openstack/horizon-f75b46fc8-4l2b8" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.318141 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c9eef01-268a-4d3c-b3c3-f30cd80694e0-logs\") pod \"horizon-f75b46fc8-4l2b8\" (UID: \"9c9eef01-268a-4d3c-b3c3-f30cd80694e0\") " pod="openstack/horizon-f75b46fc8-4l2b8" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.318734 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c9eef01-268a-4d3c-b3c3-f30cd80694e0-scripts\") pod \"horizon-f75b46fc8-4l2b8\" (UID: \"9c9eef01-268a-4d3c-b3c3-f30cd80694e0\") " pod="openstack/horizon-f75b46fc8-4l2b8" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.326587 4811 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c9eef01-268a-4d3c-b3c3-f30cd80694e0-horizon-tls-certs\") pod \"horizon-f75b46fc8-4l2b8\" (UID: \"9c9eef01-268a-4d3c-b3c3-f30cd80694e0\") " pod="openstack/horizon-f75b46fc8-4l2b8" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.330600 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c9eef01-268a-4d3c-b3c3-f30cd80694e0-combined-ca-bundle\") pod \"horizon-f75b46fc8-4l2b8\" (UID: \"9c9eef01-268a-4d3c-b3c3-f30cd80694e0\") " pod="openstack/horizon-f75b46fc8-4l2b8" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.333873 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9c9eef01-268a-4d3c-b3c3-f30cd80694e0-horizon-secret-key\") pod \"horizon-f75b46fc8-4l2b8\" (UID: \"9c9eef01-268a-4d3c-b3c3-f30cd80694e0\") " pod="openstack/horizon-f75b46fc8-4l2b8" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.345483 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c9eef01-268a-4d3c-b3c3-f30cd80694e0-config-data\") pod \"horizon-f75b46fc8-4l2b8\" (UID: \"9c9eef01-268a-4d3c-b3c3-f30cd80694e0\") " pod="openstack/horizon-f75b46fc8-4l2b8" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.346133 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86kv7\" (UniqueName: \"kubernetes.io/projected/9c9eef01-268a-4d3c-b3c3-f30cd80694e0-kube-api-access-86kv7\") pod \"horizon-f75b46fc8-4l2b8\" (UID: \"9c9eef01-268a-4d3c-b3c3-f30cd80694e0\") " pod="openstack/horizon-f75b46fc8-4l2b8" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.395693 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f75b46fc8-4l2b8" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.419326 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r25s2\" (UniqueName: \"kubernetes.io/projected/60012e62-6a64-4ac5-8d6b-9fc52699dad4-kube-api-access-r25s2\") pod \"60012e62-6a64-4ac5-8d6b-9fc52699dad4\" (UID: \"60012e62-6a64-4ac5-8d6b-9fc52699dad4\") " Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.419496 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60012e62-6a64-4ac5-8d6b-9fc52699dad4-operator-scripts\") pod \"60012e62-6a64-4ac5-8d6b-9fc52699dad4\" (UID: \"60012e62-6a64-4ac5-8d6b-9fc52699dad4\") " Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.422276 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60012e62-6a64-4ac5-8d6b-9fc52699dad4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "60012e62-6a64-4ac5-8d6b-9fc52699dad4" (UID: "60012e62-6a64-4ac5-8d6b-9fc52699dad4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.427421 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60012e62-6a64-4ac5-8d6b-9fc52699dad4-kube-api-access-r25s2" (OuterVolumeSpecName: "kube-api-access-r25s2") pod "60012e62-6a64-4ac5-8d6b-9fc52699dad4" (UID: "60012e62-6a64-4ac5-8d6b-9fc52699dad4"). 
InnerVolumeSpecName "kube-api-access-r25s2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.526864 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r25s2\" (UniqueName: \"kubernetes.io/projected/60012e62-6a64-4ac5-8d6b-9fc52699dad4-kube-api-access-r25s2\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.527113 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60012e62-6a64-4ac5-8d6b-9fc52699dad4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:24 crc kubenswrapper[4811]: I0122 09:48:24.950050 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-d6c54dc74-r2gjr"] Jan 22 09:48:25 crc kubenswrapper[4811]: I0122 09:48:25.003178 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-mp7qp" event={"ID":"60012e62-6a64-4ac5-8d6b-9fc52699dad4","Type":"ContainerDied","Data":"a17228c456610ce56f9f85504ca2e375b001d959149d0ace5e57942f005681a2"} Jan 22 09:48:25 crc kubenswrapper[4811]: I0122 09:48:25.003210 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a17228c456610ce56f9f85504ca2e375b001d959149d0ace5e57942f005681a2" Jan 22 09:48:25 crc kubenswrapper[4811]: I0122 09:48:25.003261 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-mp7qp" Jan 22 09:48:25 crc kubenswrapper[4811]: I0122 09:48:25.045746 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0","Type":"ContainerStarted","Data":"53272e3a6c18d989e83a97f2f2dd1a7c417daa97fe1b6b409f95e08d7acc90c4"} Jan 22 09:48:25 crc kubenswrapper[4811]: I0122 09:48:25.087593 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a988491b-cfb6-4a81-8da5-e3d84621b668","Type":"ContainerStarted","Data":"ec850bcc750f5902f207c91695691a278da75a8ed0fdbe52871249f57461cd02"} Jan 22 09:48:25 crc kubenswrapper[4811]: I0122 09:48:25.093229 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a7336bf4-f489-4e7e-a736-49be04734cc3","Type":"ContainerStarted","Data":"b564bdcffbc0782e91e29734040357c33a68a4c8bb0306c5f6644ce6aa9fe6c4"} Jan 22 09:48:25 crc kubenswrapper[4811]: I0122 09:48:25.529138 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f75b46fc8-4l2b8"] Jan 22 09:48:25 crc kubenswrapper[4811]: I0122 09:48:25.599716 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Jan 22 09:48:25 crc kubenswrapper[4811]: I0122 09:48:25.781354 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-66c8-account-create-update-2ts7l" Jan 22 09:48:25 crc kubenswrapper[4811]: I0122 09:48:25.899664 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10c22353-10d6-4de5-b438-369773462111-operator-scripts\") pod \"10c22353-10d6-4de5-b438-369773462111\" (UID: \"10c22353-10d6-4de5-b438-369773462111\") " Jan 22 09:48:25 crc kubenswrapper[4811]: I0122 09:48:25.900012 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7nxf\" (UniqueName: \"kubernetes.io/projected/10c22353-10d6-4de5-b438-369773462111-kube-api-access-b7nxf\") pod \"10c22353-10d6-4de5-b438-369773462111\" (UID: \"10c22353-10d6-4de5-b438-369773462111\") " Jan 22 09:48:25 crc kubenswrapper[4811]: I0122 09:48:25.903927 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10c22353-10d6-4de5-b438-369773462111-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "10c22353-10d6-4de5-b438-369773462111" (UID: "10c22353-10d6-4de5-b438-369773462111"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:48:25 crc kubenswrapper[4811]: I0122 09:48:25.907137 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10c22353-10d6-4de5-b438-369773462111-kube-api-access-b7nxf" (OuterVolumeSpecName: "kube-api-access-b7nxf") pod "10c22353-10d6-4de5-b438-369773462111" (UID: "10c22353-10d6-4de5-b438-369773462111"). InnerVolumeSpecName "kube-api-access-b7nxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:48:26 crc kubenswrapper[4811]: I0122 09:48:26.002402 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7nxf\" (UniqueName: \"kubernetes.io/projected/10c22353-10d6-4de5-b438-369773462111-kube-api-access-b7nxf\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:26 crc kubenswrapper[4811]: I0122 09:48:26.002428 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10c22353-10d6-4de5-b438-369773462111-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:26 crc kubenswrapper[4811]: I0122 09:48:26.119850 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f75b46fc8-4l2b8" event={"ID":"9c9eef01-268a-4d3c-b3c3-f30cd80694e0","Type":"ContainerStarted","Data":"ddf1dcd84220a7a697e6822782ffa7b9d886146eeecd96cdac17e34b49aa0020"} Jan 22 09:48:26 crc kubenswrapper[4811]: I0122 09:48:26.130140 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a988491b-cfb6-4a81-8da5-e3d84621b668","Type":"ContainerStarted","Data":"e825ea882301b2bae8bb17420b010d8d954c4d6d0b924ecfd29ca8312ec8cdcb"} Jan 22 09:48:26 crc kubenswrapper[4811]: I0122 09:48:26.154949 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a7336bf4-f489-4e7e-a736-49be04734cc3","Type":"ContainerStarted","Data":"2f62001d3f77649864e22ef2ff5e7be94ed6c24301491740cb75a9511b3d7ca4"} Jan 22 09:48:26 crc kubenswrapper[4811]: I0122 09:48:26.186077 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-66c8-account-create-update-2ts7l" Jan 22 09:48:26 crc kubenswrapper[4811]: I0122 09:48:26.186475 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-66c8-account-create-update-2ts7l" event={"ID":"10c22353-10d6-4de5-b438-369773462111","Type":"ContainerDied","Data":"0ddbc57922a22e1aea1de9433f6e95d9ac8afd58dfd7334b4a107592a0df27ea"} Jan 22 09:48:26 crc kubenswrapper[4811]: I0122 09:48:26.186509 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ddbc57922a22e1aea1de9433f6e95d9ac8afd58dfd7334b4a107592a0df27ea" Jan 22 09:48:26 crc kubenswrapper[4811]: I0122 09:48:26.195077 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d6c54dc74-r2gjr" event={"ID":"660d9785-0f9b-4953-af76-580ed227c244","Type":"ContainerStarted","Data":"78dadc9b007792aea77b34db856205d08cf7d3e2e9185bd629f26f6e76ca6608"} Jan 22 09:48:26 crc kubenswrapper[4811]: I0122 09:48:26.203271 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"e3a4d222-4fbd-4c90-9bb0-d787f257d7c0","Type":"ContainerStarted","Data":"47d943dda4f9220b85f6d46327d2bd67e725af02299c54f504d1581cc8ddbad9"} Jan 22 09:48:26 crc kubenswrapper[4811]: I0122 09:48:26.823734 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=5.444912654 podStartE2EDuration="6.823714788s" podCreationTimestamp="2026-01-22 09:48:20 +0000 UTC" firstStartedPulling="2026-01-22 09:48:22.220363754 +0000 UTC m=+2546.542550878" lastFinishedPulling="2026-01-22 09:48:23.599165889 +0000 UTC m=+2547.921353012" observedRunningTime="2026-01-22 09:48:26.231029431 +0000 UTC m=+2550.553216554" watchObservedRunningTime="2026-01-22 09:48:26.823714788 +0000 UTC m=+2551.145901911" Jan 22 09:48:27 crc kubenswrapper[4811]: I0122 09:48:27.227126 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a988491b-cfb6-4a81-8da5-e3d84621b668" containerName="glance-log" containerID="cri-o://e825ea882301b2bae8bb17420b010d8d954c4d6d0b924ecfd29ca8312ec8cdcb" gracePeriod=30 Jan 22 09:48:27 crc kubenswrapper[4811]: I0122 09:48:27.227166 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a988491b-cfb6-4a81-8da5-e3d84621b668","Type":"ContainerStarted","Data":"b4e74f72308eecc7f5d17a49c07bd905603783c45ffaee90af958a3f0c912769"} Jan 22 09:48:27 crc kubenswrapper[4811]: I0122 09:48:27.227199 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a988491b-cfb6-4a81-8da5-e3d84621b668" containerName="glance-httpd" containerID="cri-o://b4e74f72308eecc7f5d17a49c07bd905603783c45ffaee90af958a3f0c912769" gracePeriod=30 Jan 22 09:48:27 crc kubenswrapper[4811]: I0122 09:48:27.244407 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a7336bf4-f489-4e7e-a736-49be04734cc3","Type":"ContainerStarted","Data":"47f7fbf698fb83a9dc049c85ba40dd9e40a4fadf9cddef67d439c33c96c6111d"} Jan 22 09:48:27 crc kubenswrapper[4811]: I0122 09:48:27.244713 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a7336bf4-f489-4e7e-a736-49be04734cc3" containerName="glance-log" 
containerID="cri-o://2f62001d3f77649864e22ef2ff5e7be94ed6c24301491740cb75a9511b3d7ca4" gracePeriod=30 Jan 22 09:48:27 crc kubenswrapper[4811]: I0122 09:48:27.244815 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a7336bf4-f489-4e7e-a736-49be04734cc3" containerName="glance-httpd" containerID="cri-o://47f7fbf698fb83a9dc049c85ba40dd9e40a4fadf9cddef67d439c33c96c6111d" gracePeriod=30 Jan 22 09:48:27 crc kubenswrapper[4811]: I0122 09:48:27.253356 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.253330838 podStartE2EDuration="5.253330838s" podCreationTimestamp="2026-01-22 09:48:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:48:27.247865223 +0000 UTC m=+2551.570052336" watchObservedRunningTime="2026-01-22 09:48:27.253330838 +0000 UTC m=+2551.575517961" Jan 22 09:48:27 crc kubenswrapper[4811]: I0122 09:48:27.299233 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.29921009 podStartE2EDuration="5.29921009s" podCreationTimestamp="2026-01-22 09:48:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:48:27.279176849 +0000 UTC m=+2551.601363972" watchObservedRunningTime="2026-01-22 09:48:27.29921009 +0000 UTC m=+2551.621397213" Jan 22 09:48:27 crc kubenswrapper[4811]: I0122 09:48:27.971425 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 09:48:27 crc kubenswrapper[4811]: I0122 09:48:27.990435 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a988491b-cfb6-4a81-8da5-e3d84621b668-config-data\") pod \"a988491b-cfb6-4a81-8da5-e3d84621b668\" (UID: \"a988491b-cfb6-4a81-8da5-e3d84621b668\") " Jan 22 09:48:27 crc kubenswrapper[4811]: I0122 09:48:27.990659 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a988491b-cfb6-4a81-8da5-e3d84621b668-combined-ca-bundle\") pod \"a988491b-cfb6-4a81-8da5-e3d84621b668\" (UID: \"a988491b-cfb6-4a81-8da5-e3d84621b668\") " Jan 22 09:48:27 crc kubenswrapper[4811]: I0122 09:48:27.990949 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a988491b-cfb6-4a81-8da5-e3d84621b668-httpd-run\") pod \"a988491b-cfb6-4a81-8da5-e3d84621b668\" (UID: \"a988491b-cfb6-4a81-8da5-e3d84621b668\") " Jan 22 09:48:27 crc kubenswrapper[4811]: I0122 09:48:27.991248 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a988491b-cfb6-4a81-8da5-e3d84621b668-public-tls-certs\") pod \"a988491b-cfb6-4a81-8da5-e3d84621b668\" (UID: \"a988491b-cfb6-4a81-8da5-e3d84621b668\") " Jan 22 09:48:27 crc kubenswrapper[4811]: I0122 09:48:27.991285 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"a988491b-cfb6-4a81-8da5-e3d84621b668\" (UID: \"a988491b-cfb6-4a81-8da5-e3d84621b668\") " Jan 22 09:48:27 crc kubenswrapper[4811]: 
I0122 09:48:27.991428 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a988491b-cfb6-4a81-8da5-e3d84621b668-scripts\") pod \"a988491b-cfb6-4a81-8da5-e3d84621b668\" (UID: \"a988491b-cfb6-4a81-8da5-e3d84621b668\") " Jan 22 09:48:27 crc kubenswrapper[4811]: I0122 09:48:27.991740 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a988491b-cfb6-4a81-8da5-e3d84621b668-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a988491b-cfb6-4a81-8da5-e3d84621b668" (UID: "a988491b-cfb6-4a81-8da5-e3d84621b668"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:48:27 crc kubenswrapper[4811]: I0122 09:48:27.992023 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a988491b-cfb6-4a81-8da5-e3d84621b668-logs\") pod \"a988491b-cfb6-4a81-8da5-e3d84621b668\" (UID: \"a988491b-cfb6-4a81-8da5-e3d84621b668\") " Jan 22 09:48:27 crc kubenswrapper[4811]: I0122 09:48:27.992057 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a988491b-cfb6-4a81-8da5-e3d84621b668-ceph\") pod \"a988491b-cfb6-4a81-8da5-e3d84621b668\" (UID: \"a988491b-cfb6-4a81-8da5-e3d84621b668\") " Jan 22 09:48:27 crc kubenswrapper[4811]: I0122 09:48:27.992162 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fpnw\" (UniqueName: \"kubernetes.io/projected/a988491b-cfb6-4a81-8da5-e3d84621b668-kube-api-access-4fpnw\") pod \"a988491b-cfb6-4a81-8da5-e3d84621b668\" (UID: \"a988491b-cfb6-4a81-8da5-e3d84621b668\") " Jan 22 09:48:27 crc kubenswrapper[4811]: I0122 09:48:27.994707 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a988491b-cfb6-4a81-8da5-e3d84621b668-logs" (OuterVolumeSpecName: "logs") pod "a988491b-cfb6-4a81-8da5-e3d84621b668" (UID: "a988491b-cfb6-4a81-8da5-e3d84621b668"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.000949 4811 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a988491b-cfb6-4a81-8da5-e3d84621b668-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.001060 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a988491b-cfb6-4a81-8da5-e3d84621b668-logs\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.008172 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a988491b-cfb6-4a81-8da5-e3d84621b668-scripts" (OuterVolumeSpecName: "scripts") pod "a988491b-cfb6-4a81-8da5-e3d84621b668" (UID: "a988491b-cfb6-4a81-8da5-e3d84621b668"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.011430 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a988491b-cfb6-4a81-8da5-e3d84621b668-kube-api-access-4fpnw" (OuterVolumeSpecName: "kube-api-access-4fpnw") pod "a988491b-cfb6-4a81-8da5-e3d84621b668" (UID: "a988491b-cfb6-4a81-8da5-e3d84621b668"). InnerVolumeSpecName "kube-api-access-4fpnw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.011483 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a988491b-cfb6-4a81-8da5-e3d84621b668-ceph" (OuterVolumeSpecName: "ceph") pod "a988491b-cfb6-4a81-8da5-e3d84621b668" (UID: "a988491b-cfb6-4a81-8da5-e3d84621b668"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.028720 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "a988491b-cfb6-4a81-8da5-e3d84621b668" (UID: "a988491b-cfb6-4a81-8da5-e3d84621b668"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.084513 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a988491b-cfb6-4a81-8da5-e3d84621b668-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a988491b-cfb6-4a81-8da5-e3d84621b668" (UID: "a988491b-cfb6-4a81-8da5-e3d84621b668"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.093171 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a988491b-cfb6-4a81-8da5-e3d84621b668-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a988491b-cfb6-4a81-8da5-e3d84621b668" (UID: "a988491b-cfb6-4a81-8da5-e3d84621b668"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.105898 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a988491b-cfb6-4a81-8da5-e3d84621b668-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.105927 4811 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a988491b-cfb6-4a81-8da5-e3d84621b668-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.105947 4811 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.105957 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a988491b-cfb6-4a81-8da5-e3d84621b668-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.105968 4811 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a988491b-cfb6-4a81-8da5-e3d84621b668-ceph\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.105976 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fpnw\" (UniqueName: \"kubernetes.io/projected/a988491b-cfb6-4a81-8da5-e3d84621b668-kube-api-access-4fpnw\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.121796 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/a988491b-cfb6-4a81-8da5-e3d84621b668-config-data" (OuterVolumeSpecName: "config-data") pod "a988491b-cfb6-4a81-8da5-e3d84621b668" (UID: "a988491b-cfb6-4a81-8da5-e3d84621b668"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.127288 4811 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.139532 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.206605 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7336bf4-f489-4e7e-a736-49be04734cc3-internal-tls-certs\") pod \"a7336bf4-f489-4e7e-a736-49be04734cc3\" (UID: \"a7336bf4-f489-4e7e-a736-49be04734cc3\") " Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.206743 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7336bf4-f489-4e7e-a736-49be04734cc3-combined-ca-bundle\") pod \"a7336bf4-f489-4e7e-a736-49be04734cc3\" (UID: \"a7336bf4-f489-4e7e-a736-49be04734cc3\") " Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.206938 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"a7336bf4-f489-4e7e-a736-49be04734cc3\" (UID: \"a7336bf4-f489-4e7e-a736-49be04734cc3\") " Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.206988 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a7336bf4-f489-4e7e-a736-49be04734cc3-httpd-run\") pod \"a7336bf4-f489-4e7e-a736-49be04734cc3\" (UID: \"a7336bf4-f489-4e7e-a736-49be04734cc3\") " Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.207027 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7336bf4-f489-4e7e-a736-49be04734cc3-logs\") pod \"a7336bf4-f489-4e7e-a736-49be04734cc3\" (UID: \"a7336bf4-f489-4e7e-a736-49be04734cc3\") " Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.207214 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7336bf4-f489-4e7e-a736-49be04734cc3-scripts\") pod \"a7336bf4-f489-4e7e-a736-49be04734cc3\" (UID: \"a7336bf4-f489-4e7e-a736-49be04734cc3\") " Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.207281 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7336bf4-f489-4e7e-a736-49be04734cc3-config-data\") pod \"a7336bf4-f489-4e7e-a736-49be04734cc3\" (UID: \"a7336bf4-f489-4e7e-a736-49be04734cc3\") " Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.207306 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a7336bf4-f489-4e7e-a736-49be04734cc3-ceph\") pod \"a7336bf4-f489-4e7e-a736-49be04734cc3\" (UID: \"a7336bf4-f489-4e7e-a736-49be04734cc3\") " Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.207351 4811 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-pv9kg\" (UniqueName: \"kubernetes.io/projected/a7336bf4-f489-4e7e-a736-49be04734cc3-kube-api-access-pv9kg\") pod \"a7336bf4-f489-4e7e-a736-49be04734cc3\" (UID: \"a7336bf4-f489-4e7e-a736-49be04734cc3\") " Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.208017 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a988491b-cfb6-4a81-8da5-e3d84621b668-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.208029 4811 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.209077 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7336bf4-f489-4e7e-a736-49be04734cc3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a7336bf4-f489-4e7e-a736-49be04734cc3" (UID: "a7336bf4-f489-4e7e-a736-49be04734cc3"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.209395 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7336bf4-f489-4e7e-a736-49be04734cc3-logs" (OuterVolumeSpecName: "logs") pod "a7336bf4-f489-4e7e-a736-49be04734cc3" (UID: "a7336bf4-f489-4e7e-a736-49be04734cc3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.213272 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7336bf4-f489-4e7e-a736-49be04734cc3-scripts" (OuterVolumeSpecName: "scripts") pod "a7336bf4-f489-4e7e-a736-49be04734cc3" (UID: "a7336bf4-f489-4e7e-a736-49be04734cc3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.215807 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7336bf4-f489-4e7e-a736-49be04734cc3-ceph" (OuterVolumeSpecName: "ceph") pod "a7336bf4-f489-4e7e-a736-49be04734cc3" (UID: "a7336bf4-f489-4e7e-a736-49be04734cc3"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.217823 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "a7336bf4-f489-4e7e-a736-49be04734cc3" (UID: "a7336bf4-f489-4e7e-a736-49be04734cc3"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.217967 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7336bf4-f489-4e7e-a736-49be04734cc3-kube-api-access-pv9kg" (OuterVolumeSpecName: "kube-api-access-pv9kg") pod "a7336bf4-f489-4e7e-a736-49be04734cc3" (UID: "a7336bf4-f489-4e7e-a736-49be04734cc3"). InnerVolumeSpecName "kube-api-access-pv9kg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.242502 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7336bf4-f489-4e7e-a736-49be04734cc3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a7336bf4-f489-4e7e-a736-49be04734cc3" (UID: "a7336bf4-f489-4e7e-a736-49be04734cc3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.298355 4811 generic.go:334] "Generic (PLEG): container finished" podID="a988491b-cfb6-4a81-8da5-e3d84621b668" containerID="b4e74f72308eecc7f5d17a49c07bd905603783c45ffaee90af958a3f0c912769" exitCode=0 Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.299032 4811 generic.go:334] "Generic (PLEG): container finished" podID="a988491b-cfb6-4a81-8da5-e3d84621b668" containerID="e825ea882301b2bae8bb17420b010d8d954c4d6d0b924ecfd29ca8312ec8cdcb" exitCode=143 Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.298915 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.298967 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a988491b-cfb6-4a81-8da5-e3d84621b668","Type":"ContainerDied","Data":"b4e74f72308eecc7f5d17a49c07bd905603783c45ffaee90af958a3f0c912769"} Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.299611 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a988491b-cfb6-4a81-8da5-e3d84621b668","Type":"ContainerDied","Data":"e825ea882301b2bae8bb17420b010d8d954c4d6d0b924ecfd29ca8312ec8cdcb"} Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.299654 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a988491b-cfb6-4a81-8da5-e3d84621b668","Type":"ContainerDied","Data":"ec850bcc750f5902f207c91695691a278da75a8ed0fdbe52871249f57461cd02"} Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.299686 4811 scope.go:117] "RemoveContainer" containerID="b4e74f72308eecc7f5d17a49c07bd905603783c45ffaee90af958a3f0c912769" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.302703 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7336bf4-f489-4e7e-a736-49be04734cc3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a7336bf4-f489-4e7e-a736-49be04734cc3" (UID: "a7336bf4-f489-4e7e-a736-49be04734cc3"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.307147 4811 generic.go:334] "Generic (PLEG): container finished" podID="a7336bf4-f489-4e7e-a736-49be04734cc3" containerID="47f7fbf698fb83a9dc049c85ba40dd9e40a4fadf9cddef67d439c33c96c6111d" exitCode=0 Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.307174 4811 generic.go:334] "Generic (PLEG): container finished" podID="a7336bf4-f489-4e7e-a736-49be04734cc3" containerID="2f62001d3f77649864e22ef2ff5e7be94ed6c24301491740cb75a9511b3d7ca4" exitCode=143 Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.307193 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a7336bf4-f489-4e7e-a736-49be04734cc3","Type":"ContainerDied","Data":"47f7fbf698fb83a9dc049c85ba40dd9e40a4fadf9cddef67d439c33c96c6111d"} Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.307215 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a7336bf4-f489-4e7e-a736-49be04734cc3","Type":"ContainerDied","Data":"2f62001d3f77649864e22ef2ff5e7be94ed6c24301491740cb75a9511b3d7ca4"} Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.307227 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a7336bf4-f489-4e7e-a736-49be04734cc3","Type":"ContainerDied","Data":"b564bdcffbc0782e91e29734040357c33a68a4c8bb0306c5f6644ce6aa9fe6c4"} Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.307290 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.310198 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7336bf4-f489-4e7e-a736-49be04734cc3-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.310225 4811 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a7336bf4-f489-4e7e-a736-49be04734cc3-ceph\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.310235 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pv9kg\" (UniqueName: \"kubernetes.io/projected/a7336bf4-f489-4e7e-a736-49be04734cc3-kube-api-access-pv9kg\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.310247 4811 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7336bf4-f489-4e7e-a736-49be04734cc3-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.310256 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7336bf4-f489-4e7e-a736-49be04734cc3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.310277 4811 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.310287 4811 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a7336bf4-f489-4e7e-a736-49be04734cc3-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:28 crc 
kubenswrapper[4811]: I0122 09:48:28.310297 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7336bf4-f489-4e7e-a736-49be04734cc3-logs\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.310792 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7336bf4-f489-4e7e-a736-49be04734cc3-config-data" (OuterVolumeSpecName: "config-data") pod "a7336bf4-f489-4e7e-a736-49be04734cc3" (UID: "a7336bf4-f489-4e7e-a736-49be04734cc3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.358257 4811 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.378020 4811 scope.go:117] "RemoveContainer" containerID="e825ea882301b2bae8bb17420b010d8d954c4d6d0b924ecfd29ca8312ec8cdcb" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.381700 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.397722 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.408002 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 09:48:28 crc kubenswrapper[4811]: E0122 09:48:28.409149 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7336bf4-f489-4e7e-a736-49be04734cc3" containerName="glance-log" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.409176 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7336bf4-f489-4e7e-a736-49be04734cc3" containerName="glance-log" Jan 22 09:48:28 crc kubenswrapper[4811]: E0122 09:48:28.409199 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a988491b-cfb6-4a81-8da5-e3d84621b668" containerName="glance-httpd" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.409208 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="a988491b-cfb6-4a81-8da5-e3d84621b668" containerName="glance-httpd" Jan 22 09:48:28 crc kubenswrapper[4811]: E0122 09:48:28.409218 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7336bf4-f489-4e7e-a736-49be04734cc3" containerName="glance-httpd" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.409224 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7336bf4-f489-4e7e-a736-49be04734cc3" containerName="glance-httpd" Jan 22 09:48:28 crc kubenswrapper[4811]: E0122 09:48:28.409237 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60012e62-6a64-4ac5-8d6b-9fc52699dad4" containerName="mariadb-database-create" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.409471 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="60012e62-6a64-4ac5-8d6b-9fc52699dad4" containerName="mariadb-database-create" Jan 22 09:48:28 crc kubenswrapper[4811]: E0122 09:48:28.409494 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a988491b-cfb6-4a81-8da5-e3d84621b668" containerName="glance-log" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.409501 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="a988491b-cfb6-4a81-8da5-e3d84621b668" containerName="glance-log" Jan 22 09:48:28 crc 
kubenswrapper[4811]: E0122 09:48:28.409540 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10c22353-10d6-4de5-b438-369773462111" containerName="mariadb-account-create-update" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.409550 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="10c22353-10d6-4de5-b438-369773462111" containerName="mariadb-account-create-update" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.409872 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="10c22353-10d6-4de5-b438-369773462111" containerName="mariadb-account-create-update" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.409904 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="a988491b-cfb6-4a81-8da5-e3d84621b668" containerName="glance-log" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.409913 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="a988491b-cfb6-4a81-8da5-e3d84621b668" containerName="glance-httpd" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.409933 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7336bf4-f489-4e7e-a736-49be04734cc3" containerName="glance-log" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.409946 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7336bf4-f489-4e7e-a736-49be04734cc3" containerName="glance-httpd" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.409957 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="60012e62-6a64-4ac5-8d6b-9fc52699dad4" containerName="mariadb-database-create" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.411769 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.412415 4811 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.412439 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7336bf4-f489-4e7e-a736-49be04734cc3-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.421195 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.423612 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.423667 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.431922 4811 scope.go:117] "RemoveContainer" containerID="b4e74f72308eecc7f5d17a49c07bd905603783c45ffaee90af958a3f0c912769" Jan 22 09:48:28 crc kubenswrapper[4811]: E0122 09:48:28.444650 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4e74f72308eecc7f5d17a49c07bd905603783c45ffaee90af958a3f0c912769\": container with ID starting with b4e74f72308eecc7f5d17a49c07bd905603783c45ffaee90af958a3f0c912769 not found: ID does not exist" containerID="b4e74f72308eecc7f5d17a49c07bd905603783c45ffaee90af958a3f0c912769" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.444754 4811 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4e74f72308eecc7f5d17a49c07bd905603783c45ffaee90af958a3f0c912769"} err="failed to get container status \"b4e74f72308eecc7f5d17a49c07bd905603783c45ffaee90af958a3f0c912769\": rpc error: code = NotFound desc = could not find container \"b4e74f72308eecc7f5d17a49c07bd905603783c45ffaee90af958a3f0c912769\": container with ID starting with b4e74f72308eecc7f5d17a49c07bd905603783c45ffaee90af958a3f0c912769 not found: ID does not exist" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.444843 4811 scope.go:117] "RemoveContainer" containerID="e825ea882301b2bae8bb17420b010d8d954c4d6d0b924ecfd29ca8312ec8cdcb" Jan 22 09:48:28 crc kubenswrapper[4811]: E0122 09:48:28.446378 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e825ea882301b2bae8bb17420b010d8d954c4d6d0b924ecfd29ca8312ec8cdcb\": container with ID starting with e825ea882301b2bae8bb17420b010d8d954c4d6d0b924ecfd29ca8312ec8cdcb not found: ID does not exist" containerID="e825ea882301b2bae8bb17420b010d8d954c4d6d0b924ecfd29ca8312ec8cdcb" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.446466 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e825ea882301b2bae8bb17420b010d8d954c4d6d0b924ecfd29ca8312ec8cdcb"} err="failed to get container status \"e825ea882301b2bae8bb17420b010d8d954c4d6d0b924ecfd29ca8312ec8cdcb\": rpc error: code = NotFound desc = could not find container \"e825ea882301b2bae8bb17420b010d8d954c4d6d0b924ecfd29ca8312ec8cdcb\": container with ID starting with e825ea882301b2bae8bb17420b010d8d954c4d6d0b924ecfd29ca8312ec8cdcb not found: ID does not exist" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.446531 4811 scope.go:117] "RemoveContainer" containerID="b4e74f72308eecc7f5d17a49c07bd905603783c45ffaee90af958a3f0c912769" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.447138 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4e74f72308eecc7f5d17a49c07bd905603783c45ffaee90af958a3f0c912769"} err="failed to get container status \"b4e74f72308eecc7f5d17a49c07bd905603783c45ffaee90af958a3f0c912769\": rpc error: code = NotFound desc = could not find container \"b4e74f72308eecc7f5d17a49c07bd905603783c45ffaee90af958a3f0c912769\": container with ID starting with b4e74f72308eecc7f5d17a49c07bd905603783c45ffaee90af958a3f0c912769 not found: ID does not exist" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.447209 4811 scope.go:117] "RemoveContainer" containerID="e825ea882301b2bae8bb17420b010d8d954c4d6d0b924ecfd29ca8312ec8cdcb" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.453097 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e825ea882301b2bae8bb17420b010d8d954c4d6d0b924ecfd29ca8312ec8cdcb"} err="failed to get container status \"e825ea882301b2bae8bb17420b010d8d954c4d6d0b924ecfd29ca8312ec8cdcb\": rpc error: code = NotFound desc = could not find container \"e825ea882301b2bae8bb17420b010d8d954c4d6d0b924ecfd29ca8312ec8cdcb\": container with ID starting with e825ea882301b2bae8bb17420b010d8d954c4d6d0b924ecfd29ca8312ec8cdcb not found: ID does not exist" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.453190 4811 scope.go:117] "RemoveContainer" containerID="47f7fbf698fb83a9dc049c85ba40dd9e40a4fadf9cddef67d439c33c96c6111d" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.516538 4811 
scope.go:117] "RemoveContainer" containerID="2f62001d3f77649864e22ef2ff5e7be94ed6c24301491740cb75a9511b3d7ca4" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.597901 4811 scope.go:117] "RemoveContainer" containerID="47f7fbf698fb83a9dc049c85ba40dd9e40a4fadf9cddef67d439c33c96c6111d" Jan 22 09:48:28 crc kubenswrapper[4811]: E0122 09:48:28.598431 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47f7fbf698fb83a9dc049c85ba40dd9e40a4fadf9cddef67d439c33c96c6111d\": container with ID starting with 47f7fbf698fb83a9dc049c85ba40dd9e40a4fadf9cddef67d439c33c96c6111d not found: ID does not exist" containerID="47f7fbf698fb83a9dc049c85ba40dd9e40a4fadf9cddef67d439c33c96c6111d" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.598483 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47f7fbf698fb83a9dc049c85ba40dd9e40a4fadf9cddef67d439c33c96c6111d"} err="failed to get container status \"47f7fbf698fb83a9dc049c85ba40dd9e40a4fadf9cddef67d439c33c96c6111d\": rpc error: code = NotFound desc = could not find container \"47f7fbf698fb83a9dc049c85ba40dd9e40a4fadf9cddef67d439c33c96c6111d\": container with ID starting with 47f7fbf698fb83a9dc049c85ba40dd9e40a4fadf9cddef67d439c33c96c6111d not found: ID does not exist" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.598518 4811 scope.go:117] "RemoveContainer" containerID="2f62001d3f77649864e22ef2ff5e7be94ed6c24301491740cb75a9511b3d7ca4" Jan 22 09:48:28 crc kubenswrapper[4811]: E0122 09:48:28.599031 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f62001d3f77649864e22ef2ff5e7be94ed6c24301491740cb75a9511b3d7ca4\": container with ID starting with 2f62001d3f77649864e22ef2ff5e7be94ed6c24301491740cb75a9511b3d7ca4 not found: ID does not exist" containerID="2f62001d3f77649864e22ef2ff5e7be94ed6c24301491740cb75a9511b3d7ca4" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.599084 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f62001d3f77649864e22ef2ff5e7be94ed6c24301491740cb75a9511b3d7ca4"} err="failed to get container status \"2f62001d3f77649864e22ef2ff5e7be94ed6c24301491740cb75a9511b3d7ca4\": rpc error: code = NotFound desc = could not find container \"2f62001d3f77649864e22ef2ff5e7be94ed6c24301491740cb75a9511b3d7ca4\": container with ID starting with 2f62001d3f77649864e22ef2ff5e7be94ed6c24301491740cb75a9511b3d7ca4 not found: ID does not exist" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.599118 4811 scope.go:117] "RemoveContainer" containerID="47f7fbf698fb83a9dc049c85ba40dd9e40a4fadf9cddef67d439c33c96c6111d" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.599508 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47f7fbf698fb83a9dc049c85ba40dd9e40a4fadf9cddef67d439c33c96c6111d"} err="failed to get container status \"47f7fbf698fb83a9dc049c85ba40dd9e40a4fadf9cddef67d439c33c96c6111d\": rpc error: code = NotFound desc = could not find container \"47f7fbf698fb83a9dc049c85ba40dd9e40a4fadf9cddef67d439c33c96c6111d\": container with ID starting with 47f7fbf698fb83a9dc049c85ba40dd9e40a4fadf9cddef67d439c33c96c6111d not found: ID does not exist" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.599542 4811 scope.go:117] "RemoveContainer" containerID="2f62001d3f77649864e22ef2ff5e7be94ed6c24301491740cb75a9511b3d7ca4" Jan 22 09:48:28 
crc kubenswrapper[4811]: I0122 09:48:28.600010 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f62001d3f77649864e22ef2ff5e7be94ed6c24301491740cb75a9511b3d7ca4"} err="failed to get container status \"2f62001d3f77649864e22ef2ff5e7be94ed6c24301491740cb75a9511b3d7ca4\": rpc error: code = NotFound desc = could not find container \"2f62001d3f77649864e22ef2ff5e7be94ed6c24301491740cb75a9511b3d7ca4\": container with ID starting with 2f62001d3f77649864e22ef2ff5e7be94ed6c24301491740cb75a9511b3d7ca4 not found: ID does not exist" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.619440 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"4494328e-eef4-42b6-993f-654585a11db3\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.620269 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4494328e-eef4-42b6-993f-654585a11db3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4494328e-eef4-42b6-993f-654585a11db3\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.620433 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4494328e-eef4-42b6-993f-654585a11db3-ceph\") pod \"glance-default-external-api-0\" (UID: \"4494328e-eef4-42b6-993f-654585a11db3\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.620523 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4494328e-eef4-42b6-993f-654585a11db3-scripts\") pod \"glance-default-external-api-0\" (UID: \"4494328e-eef4-42b6-993f-654585a11db3\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.620667 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4494328e-eef4-42b6-993f-654585a11db3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4494328e-eef4-42b6-993f-654585a11db3\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.620822 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4494328e-eef4-42b6-993f-654585a11db3-config-data\") pod \"glance-default-external-api-0\" (UID: \"4494328e-eef4-42b6-993f-654585a11db3\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.620910 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4494328e-eef4-42b6-993f-654585a11db3-logs\") pod \"glance-default-external-api-0\" (UID: \"4494328e-eef4-42b6-993f-654585a11db3\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.620994 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4494328e-eef4-42b6-993f-654585a11db3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4494328e-eef4-42b6-993f-654585a11db3\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.621127 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq69c\" (UniqueName: \"kubernetes.io/projected/4494328e-eef4-42b6-993f-654585a11db3-kube-api-access-tq69c\") pod \"glance-default-external-api-0\" (UID: \"4494328e-eef4-42b6-993f-654585a11db3\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.701791 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.721692 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.723475 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tq69c\" (UniqueName: \"kubernetes.io/projected/4494328e-eef4-42b6-993f-654585a11db3-kube-api-access-tq69c\") pod \"glance-default-external-api-0\" (UID: \"4494328e-eef4-42b6-993f-654585a11db3\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.723773 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"4494328e-eef4-42b6-993f-654585a11db3\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.723824 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4494328e-eef4-42b6-993f-654585a11db3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4494328e-eef4-42b6-993f-654585a11db3\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.723877 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4494328e-eef4-42b6-993f-654585a11db3-ceph\") pod \"glance-default-external-api-0\" (UID: \"4494328e-eef4-42b6-993f-654585a11db3\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.723907 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4494328e-eef4-42b6-993f-654585a11db3-scripts\") pod \"glance-default-external-api-0\" (UID: \"4494328e-eef4-42b6-993f-654585a11db3\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.723961 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4494328e-eef4-42b6-993f-654585a11db3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4494328e-eef4-42b6-993f-654585a11db3\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.724028 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4494328e-eef4-42b6-993f-654585a11db3-config-data\") pod \"glance-default-external-api-0\" (UID: 
\"4494328e-eef4-42b6-993f-654585a11db3\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.724053 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4494328e-eef4-42b6-993f-654585a11db3-logs\") pod \"glance-default-external-api-0\" (UID: \"4494328e-eef4-42b6-993f-654585a11db3\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.724088 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4494328e-eef4-42b6-993f-654585a11db3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4494328e-eef4-42b6-993f-654585a11db3\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.725492 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"4494328e-eef4-42b6-993f-654585a11db3\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.725761 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4494328e-eef4-42b6-993f-654585a11db3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4494328e-eef4-42b6-993f-654585a11db3\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.730039 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4494328e-eef4-42b6-993f-654585a11db3-logs\") pod \"glance-default-external-api-0\" (UID: \"4494328e-eef4-42b6-993f-654585a11db3\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.742817 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.756510 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.758563 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4494328e-eef4-42b6-993f-654585a11db3-scripts\") pod \"glance-default-external-api-0\" (UID: \"4494328e-eef4-42b6-993f-654585a11db3\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.759516 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4494328e-eef4-42b6-993f-654585a11db3-config-data\") pod \"glance-default-external-api-0\" (UID: \"4494328e-eef4-42b6-993f-654585a11db3\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.766796 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.767003 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.770390 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4494328e-eef4-42b6-993f-654585a11db3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4494328e-eef4-42b6-993f-654585a11db3\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.771276 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4494328e-eef4-42b6-993f-654585a11db3-ceph\") pod \"glance-default-external-api-0\" (UID: \"4494328e-eef4-42b6-993f-654585a11db3\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.775318 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tq69c\" (UniqueName: \"kubernetes.io/projected/4494328e-eef4-42b6-993f-654585a11db3-kube-api-access-tq69c\") pod \"glance-default-external-api-0\" (UID: \"4494328e-eef4-42b6-993f-654585a11db3\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.783242 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4494328e-eef4-42b6-993f-654585a11db3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4494328e-eef4-42b6-993f-654585a11db3\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.789256 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.821808 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"4494328e-eef4-42b6-993f-654585a11db3\") " pod="openstack/glance-default-external-api-0" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.930022 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ff328b8-5b8b-4a66-85e9-b083d86f2811-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"4ff328b8-5b8b-4a66-85e9-b083d86f2811\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.930199 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4ff328b8-5b8b-4a66-85e9-b083d86f2811-ceph\") pod \"glance-default-internal-api-0\" (UID: \"4ff328b8-5b8b-4a66-85e9-b083d86f2811\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.930254 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4ff328b8-5b8b-4a66-85e9-b083d86f2811-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4ff328b8-5b8b-4a66-85e9-b083d86f2811\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.930320 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ff328b8-5b8b-4a66-85e9-b083d86f2811-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4ff328b8-5b8b-4a66-85e9-b083d86f2811\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.930345 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"4ff328b8-5b8b-4a66-85e9-b083d86f2811\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.930370 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfnww\" (UniqueName: \"kubernetes.io/projected/4ff328b8-5b8b-4a66-85e9-b083d86f2811-kube-api-access-xfnww\") pod \"glance-default-internal-api-0\" (UID: \"4ff328b8-5b8b-4a66-85e9-b083d86f2811\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.930448 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ff328b8-5b8b-4a66-85e9-b083d86f2811-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4ff328b8-5b8b-4a66-85e9-b083d86f2811\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.930837 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ff328b8-5b8b-4a66-85e9-b083d86f2811-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4ff328b8-5b8b-4a66-85e9-b083d86f2811\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:28 crc kubenswrapper[4811]: I0122 09:48:28.931002 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ff328b8-5b8b-4a66-85e9-b083d86f2811-logs\") pod \"glance-default-internal-api-0\" (UID: \"4ff328b8-5b8b-4a66-85e9-b083d86f2811\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:29 crc kubenswrapper[4811]: I0122 09:48:29.032764 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4ff328b8-5b8b-4a66-85e9-b083d86f2811-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"4ff328b8-5b8b-4a66-85e9-b083d86f2811\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:29 crc kubenswrapper[4811]: I0122 09:48:29.033119 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ff328b8-5b8b-4a66-85e9-b083d86f2811-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4ff328b8-5b8b-4a66-85e9-b083d86f2811\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:29 crc kubenswrapper[4811]: I0122 09:48:29.033146 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"4ff328b8-5b8b-4a66-85e9-b083d86f2811\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:29 crc kubenswrapper[4811]: I0122 09:48:29.033169 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfnww\" (UniqueName: \"kubernetes.io/projected/4ff328b8-5b8b-4a66-85e9-b083d86f2811-kube-api-access-xfnww\") pod \"glance-default-internal-api-0\" (UID: \"4ff328b8-5b8b-4a66-85e9-b083d86f2811\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:29 crc kubenswrapper[4811]: I0122 09:48:29.033229 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ff328b8-5b8b-4a66-85e9-b083d86f2811-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4ff328b8-5b8b-4a66-85e9-b083d86f2811\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:29 crc kubenswrapper[4811]: I0122 09:48:29.033291 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ff328b8-5b8b-4a66-85e9-b083d86f2811-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4ff328b8-5b8b-4a66-85e9-b083d86f2811\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:29 crc kubenswrapper[4811]: I0122 09:48:29.033375 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ff328b8-5b8b-4a66-85e9-b083d86f2811-logs\") pod \"glance-default-internal-api-0\" (UID: \"4ff328b8-5b8b-4a66-85e9-b083d86f2811\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:29 crc kubenswrapper[4811]: I0122 09:48:29.033422 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ff328b8-5b8b-4a66-85e9-b083d86f2811-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4ff328b8-5b8b-4a66-85e9-b083d86f2811\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:29 crc kubenswrapper[4811]: I0122 09:48:29.033479 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"4ff328b8-5b8b-4a66-85e9-b083d86f2811\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Jan 22 09:48:29 crc kubenswrapper[4811]: I0122 09:48:29.033498 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4ff328b8-5b8b-4a66-85e9-b083d86f2811-ceph\") pod \"glance-default-internal-api-0\" (UID: \"4ff328b8-5b8b-4a66-85e9-b083d86f2811\") " 
pod="openstack/glance-default-internal-api-0" Jan 22 09:48:29 crc kubenswrapper[4811]: I0122 09:48:29.034504 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ff328b8-5b8b-4a66-85e9-b083d86f2811-logs\") pod \"glance-default-internal-api-0\" (UID: \"4ff328b8-5b8b-4a66-85e9-b083d86f2811\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:29 crc kubenswrapper[4811]: I0122 09:48:29.043514 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ff328b8-5b8b-4a66-85e9-b083d86f2811-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4ff328b8-5b8b-4a66-85e9-b083d86f2811\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:29 crc kubenswrapper[4811]: I0122 09:48:29.043857 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4ff328b8-5b8b-4a66-85e9-b083d86f2811-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4ff328b8-5b8b-4a66-85e9-b083d86f2811\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:29 crc kubenswrapper[4811]: I0122 09:48:29.044362 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4ff328b8-5b8b-4a66-85e9-b083d86f2811-ceph\") pod \"glance-default-internal-api-0\" (UID: \"4ff328b8-5b8b-4a66-85e9-b083d86f2811\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:29 crc kubenswrapper[4811]: I0122 09:48:29.045721 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ff328b8-5b8b-4a66-85e9-b083d86f2811-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4ff328b8-5b8b-4a66-85e9-b083d86f2811\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:29 crc kubenswrapper[4811]: I0122 09:48:29.045845 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 09:48:29 crc kubenswrapper[4811]: I0122 09:48:29.052457 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ff328b8-5b8b-4a66-85e9-b083d86f2811-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4ff328b8-5b8b-4a66-85e9-b083d86f2811\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:29 crc kubenswrapper[4811]: I0122 09:48:29.087177 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ff328b8-5b8b-4a66-85e9-b083d86f2811-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4ff328b8-5b8b-4a66-85e9-b083d86f2811\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:29 crc kubenswrapper[4811]: I0122 09:48:29.101658 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfnww\" (UniqueName: \"kubernetes.io/projected/4ff328b8-5b8b-4a66-85e9-b083d86f2811-kube-api-access-xfnww\") pod \"glance-default-internal-api-0\" (UID: \"4ff328b8-5b8b-4a66-85e9-b083d86f2811\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:29 crc kubenswrapper[4811]: I0122 09:48:29.126604 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"4ff328b8-5b8b-4a66-85e9-b083d86f2811\") " pod="openstack/glance-default-internal-api-0" Jan 22 09:48:29 crc kubenswrapper[4811]: I0122 09:48:29.194264 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 09:48:29 crc kubenswrapper[4811]: I0122 09:48:29.775394 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 09:48:30 crc kubenswrapper[4811]: I0122 09:48:30.051123 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7336bf4-f489-4e7e-a736-49be04734cc3" path="/var/lib/kubelet/pods/a7336bf4-f489-4e7e-a736-49be04734cc3/volumes" Jan 22 09:48:30 crc kubenswrapper[4811]: I0122 09:48:30.056655 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a988491b-cfb6-4a81-8da5-e3d84621b668" path="/var/lib/kubelet/pods/a988491b-cfb6-4a81-8da5-e3d84621b668/volumes" Jan 22 09:48:30 crc kubenswrapper[4811]: I0122 09:48:30.057529 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 09:48:30 crc kubenswrapper[4811]: I0122 09:48:30.379950 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4494328e-eef4-42b6-993f-654585a11db3","Type":"ContainerStarted","Data":"ed261cc85eeda3e282d74bb8b475d9c9d9e96a9f130004d4b957a8931d76d83f"} Jan 22 09:48:30 crc kubenswrapper[4811]: I0122 09:48:30.388466 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4ff328b8-5b8b-4a66-85e9-b083d86f2811","Type":"ContainerStarted","Data":"53dc3e385a5e11545997897112d728b4541bb729c553c5772d6c6a2d2c495bc6"} Jan 22 09:48:30 crc kubenswrapper[4811]: I0122 09:48:30.539097 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:30 crc kubenswrapper[4811]: I0122 09:48:30.808460 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/cinder-volume-volume1-0" Jan 22 09:48:30 crc kubenswrapper[4811]: I0122 09:48:30.952593 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Jan 22 09:48:31 crc kubenswrapper[4811]: I0122 09:48:31.421835 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-wlfrs"] Jan 22 09:48:31 crc kubenswrapper[4811]: I0122 09:48:31.426015 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-wlfrs" Jan 22 09:48:31 crc kubenswrapper[4811]: I0122 09:48:31.428542 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-hnqt9" Jan 22 09:48:31 crc kubenswrapper[4811]: I0122 09:48:31.436406 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Jan 22 09:48:31 crc kubenswrapper[4811]: I0122 09:48:31.438468 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7356c23-bed8-4798-b71e-9e29994fd1e6-config-data\") pod \"manila-db-sync-wlfrs\" (UID: \"c7356c23-bed8-4798-b71e-9e29994fd1e6\") " pod="openstack/manila-db-sync-wlfrs" Jan 22 09:48:31 crc kubenswrapper[4811]: I0122 09:48:31.438689 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7356c23-bed8-4798-b71e-9e29994fd1e6-combined-ca-bundle\") pod \"manila-db-sync-wlfrs\" (UID: \"c7356c23-bed8-4798-b71e-9e29994fd1e6\") " pod="openstack/manila-db-sync-wlfrs" Jan 22 09:48:31 crc kubenswrapper[4811]: I0122 09:48:31.438798 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlml4\" (UniqueName: \"kubernetes.io/projected/c7356c23-bed8-4798-b71e-9e29994fd1e6-kube-api-access-wlml4\") pod \"manila-db-sync-wlfrs\" (UID: \"c7356c23-bed8-4798-b71e-9e29994fd1e6\") " pod="openstack/manila-db-sync-wlfrs" Jan 22 09:48:31 crc kubenswrapper[4811]: I0122 09:48:31.438863 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/c7356c23-bed8-4798-b71e-9e29994fd1e6-job-config-data\") pod \"manila-db-sync-wlfrs\" (UID: \"c7356c23-bed8-4798-b71e-9e29994fd1e6\") " pod="openstack/manila-db-sync-wlfrs" Jan 22 09:48:31 crc kubenswrapper[4811]: I0122 09:48:31.440037 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-wlfrs"] Jan 22 09:48:31 crc kubenswrapper[4811]: I0122 09:48:31.463719 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4ff328b8-5b8b-4a66-85e9-b083d86f2811","Type":"ContainerStarted","Data":"42b2b8b0c1e09e68816da5125d602e4485d342182af9d6e2e0aa2248c2f68291"} Jan 22 09:48:31 crc kubenswrapper[4811]: I0122 09:48:31.470274 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4494328e-eef4-42b6-993f-654585a11db3","Type":"ContainerStarted","Data":"5c922825df856bc096262121d987c663343eb7b113741f37e0b8575213c1265b"} Jan 22 09:48:31 crc kubenswrapper[4811]: I0122 09:48:31.540904 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/c7356c23-bed8-4798-b71e-9e29994fd1e6-job-config-data\") pod \"manila-db-sync-wlfrs\" (UID: 
\"c7356c23-bed8-4798-b71e-9e29994fd1e6\") " pod="openstack/manila-db-sync-wlfrs" Jan 22 09:48:31 crc kubenswrapper[4811]: I0122 09:48:31.541069 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7356c23-bed8-4798-b71e-9e29994fd1e6-config-data\") pod \"manila-db-sync-wlfrs\" (UID: \"c7356c23-bed8-4798-b71e-9e29994fd1e6\") " pod="openstack/manila-db-sync-wlfrs" Jan 22 09:48:31 crc kubenswrapper[4811]: I0122 09:48:31.541655 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7356c23-bed8-4798-b71e-9e29994fd1e6-combined-ca-bundle\") pod \"manila-db-sync-wlfrs\" (UID: \"c7356c23-bed8-4798-b71e-9e29994fd1e6\") " pod="openstack/manila-db-sync-wlfrs" Jan 22 09:48:31 crc kubenswrapper[4811]: I0122 09:48:31.542237 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlml4\" (UniqueName: \"kubernetes.io/projected/c7356c23-bed8-4798-b71e-9e29994fd1e6-kube-api-access-wlml4\") pod \"manila-db-sync-wlfrs\" (UID: \"c7356c23-bed8-4798-b71e-9e29994fd1e6\") " pod="openstack/manila-db-sync-wlfrs" Jan 22 09:48:31 crc kubenswrapper[4811]: I0122 09:48:31.554647 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/c7356c23-bed8-4798-b71e-9e29994fd1e6-job-config-data\") pod \"manila-db-sync-wlfrs\" (UID: \"c7356c23-bed8-4798-b71e-9e29994fd1e6\") " pod="openstack/manila-db-sync-wlfrs" Jan 22 09:48:31 crc kubenswrapper[4811]: I0122 09:48:31.554957 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7356c23-bed8-4798-b71e-9e29994fd1e6-combined-ca-bundle\") pod \"manila-db-sync-wlfrs\" (UID: \"c7356c23-bed8-4798-b71e-9e29994fd1e6\") " pod="openstack/manila-db-sync-wlfrs" Jan 22 09:48:31 crc kubenswrapper[4811]: I0122 09:48:31.556388 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlml4\" (UniqueName: \"kubernetes.io/projected/c7356c23-bed8-4798-b71e-9e29994fd1e6-kube-api-access-wlml4\") pod \"manila-db-sync-wlfrs\" (UID: \"c7356c23-bed8-4798-b71e-9e29994fd1e6\") " pod="openstack/manila-db-sync-wlfrs" Jan 22 09:48:31 crc kubenswrapper[4811]: I0122 09:48:31.567155 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7356c23-bed8-4798-b71e-9e29994fd1e6-config-data\") pod \"manila-db-sync-wlfrs\" (UID: \"c7356c23-bed8-4798-b71e-9e29994fd1e6\") " pod="openstack/manila-db-sync-wlfrs" Jan 22 09:48:31 crc kubenswrapper[4811]: I0122 09:48:31.758183 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-wlfrs" Jan 22 09:48:32 crc kubenswrapper[4811]: I0122 09:48:32.487978 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4494328e-eef4-42b6-993f-654585a11db3","Type":"ContainerStarted","Data":"d8bb68e49cb402e66a8078afc96b9cc5a957f58119fb7d497bcfab5a7b8199b2"} Jan 22 09:48:32 crc kubenswrapper[4811]: I0122 09:48:32.496898 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4ff328b8-5b8b-4a66-85e9-b083d86f2811","Type":"ContainerStarted","Data":"43d8016e1cebe5afbbe1822b916d27dc851cba3e006e662315f3452dffe0b106"} Jan 22 09:48:32 crc kubenswrapper[4811]: I0122 09:48:32.639582 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.63956406 podStartE2EDuration="4.63956406s" podCreationTimestamp="2026-01-22 09:48:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:48:32.638710823 +0000 UTC m=+2556.960897945" watchObservedRunningTime="2026-01-22 09:48:32.63956406 +0000 UTC m=+2556.961751183" Jan 22 09:48:32 crc kubenswrapper[4811]: I0122 09:48:32.644130 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.644121212 podStartE2EDuration="4.644121212s" podCreationTimestamp="2026-01-22 09:48:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:48:32.594799249 +0000 UTC m=+2556.916986373" watchObservedRunningTime="2026-01-22 09:48:32.644121212 +0000 UTC m=+2556.966308335" Jan 22 09:48:32 crc kubenswrapper[4811]: I0122 09:48:32.753517 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-wlfrs"] Jan 22 09:48:33 crc kubenswrapper[4811]: I0122 09:48:33.509155 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-wlfrs" event={"ID":"c7356c23-bed8-4798-b71e-9e29994fd1e6","Type":"ContainerStarted","Data":"0b16eda58058f9835cb973795c8fb842b139a509e6b6f9486459205b3ae918b7"} Jan 22 09:48:38 crc kubenswrapper[4811]: I0122 09:48:38.554664 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d6c54dc74-r2gjr" event={"ID":"660d9785-0f9b-4953-af76-580ed227c244","Type":"ContainerStarted","Data":"d932f2adbda59824841f6cb86225f24b2e877e9f69ebadd4e657ce3e01adf42c"} Jan 22 09:48:38 crc kubenswrapper[4811]: I0122 09:48:38.555217 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d6c54dc74-r2gjr" event={"ID":"660d9785-0f9b-4953-af76-580ed227c244","Type":"ContainerStarted","Data":"015e311ea5ef98e58145ed177feb6471c51ef5a722b72a6005d3bb6fbf8d8201"} Jan 22 09:48:38 crc kubenswrapper[4811]: I0122 09:48:38.567512 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79dbbfc64c-27fss" event={"ID":"d2c00677-42e9-4694-9a73-a020bcb17a98","Type":"ContainerStarted","Data":"75f9c9ae46eadb35159ed1468edc3f2674789f8eb89f0644a6c69d7e26349ad5"} Jan 22 09:48:38 crc kubenswrapper[4811]: I0122 09:48:38.567574 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-79dbbfc64c-27fss" podUID="d2c00677-42e9-4694-9a73-a020bcb17a98" containerName="horizon" 
containerID="cri-o://75f9c9ae46eadb35159ed1468edc3f2674789f8eb89f0644a6c69d7e26349ad5" gracePeriod=30 Jan 22 09:48:38 crc kubenswrapper[4811]: I0122 09:48:38.567609 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79dbbfc64c-27fss" event={"ID":"d2c00677-42e9-4694-9a73-a020bcb17a98","Type":"ContainerStarted","Data":"948296facdc97260934827eb7261690c45a258af647887da415085bdd163b1b9"} Jan 22 09:48:38 crc kubenswrapper[4811]: I0122 09:48:38.567549 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-79dbbfc64c-27fss" podUID="d2c00677-42e9-4694-9a73-a020bcb17a98" containerName="horizon-log" containerID="cri-o://948296facdc97260934827eb7261690c45a258af647887da415085bdd163b1b9" gracePeriod=30 Jan 22 09:48:38 crc kubenswrapper[4811]: I0122 09:48:38.573373 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-84776b8f5f-8h2x2" podUID="080675ca-91da-4c39-a901-fef7f8496220" containerName="horizon-log" containerID="cri-o://8c63244c0280484a029a7c883eb5e14321d435dad5a1f103e8afa75de14982ea" gracePeriod=30 Jan 22 09:48:38 crc kubenswrapper[4811]: I0122 09:48:38.573380 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84776b8f5f-8h2x2" event={"ID":"080675ca-91da-4c39-a901-fef7f8496220","Type":"ContainerStarted","Data":"e244d2590d776e1829f2b330a4cec949ee28e11c9644fb7e2c9d04358d0ee031"} Jan 22 09:48:38 crc kubenswrapper[4811]: I0122 09:48:38.573543 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84776b8f5f-8h2x2" event={"ID":"080675ca-91da-4c39-a901-fef7f8496220","Type":"ContainerStarted","Data":"8c63244c0280484a029a7c883eb5e14321d435dad5a1f103e8afa75de14982ea"} Jan 22 09:48:38 crc kubenswrapper[4811]: I0122 09:48:38.573557 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-84776b8f5f-8h2x2" podUID="080675ca-91da-4c39-a901-fef7f8496220" containerName="horizon" containerID="cri-o://e244d2590d776e1829f2b330a4cec949ee28e11c9644fb7e2c9d04358d0ee031" gracePeriod=30 Jan 22 09:48:38 crc kubenswrapper[4811]: I0122 09:48:38.583643 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f75b46fc8-4l2b8" event={"ID":"9c9eef01-268a-4d3c-b3c3-f30cd80694e0","Type":"ContainerStarted","Data":"68627323802bb052c62dae548fd01e69d6b090313f6619edc1c98eefa708093f"} Jan 22 09:48:38 crc kubenswrapper[4811]: I0122 09:48:38.583677 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f75b46fc8-4l2b8" event={"ID":"9c9eef01-268a-4d3c-b3c3-f30cd80694e0","Type":"ContainerStarted","Data":"d5699479ad1dd1eef92306cc2dbdb6ccebaccfd5bf4418d1cf1e1c2419c0a6ba"} Jan 22 09:48:38 crc kubenswrapper[4811]: I0122 09:48:38.596339 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-d6c54dc74-r2gjr" podStartSLOduration=2.949296292 podStartE2EDuration="15.596326455s" podCreationTimestamp="2026-01-22 09:48:23 +0000 UTC" firstStartedPulling="2026-01-22 09:48:25.085856107 +0000 UTC m=+2549.408043230" lastFinishedPulling="2026-01-22 09:48:37.73288627 +0000 UTC m=+2562.055073393" observedRunningTime="2026-01-22 09:48:38.575245168 +0000 UTC m=+2562.897432291" watchObservedRunningTime="2026-01-22 09:48:38.596326455 +0000 UTC m=+2562.918513578" Jan 22 09:48:38 crc kubenswrapper[4811]: I0122 09:48:38.606883 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-79dbbfc64c-27fss" podStartSLOduration=2.846785077 
podStartE2EDuration="17.606855387s" podCreationTimestamp="2026-01-22 09:48:21 +0000 UTC" firstStartedPulling="2026-01-22 09:48:22.986487977 +0000 UTC m=+2547.308675100" lastFinishedPulling="2026-01-22 09:48:37.746558287 +0000 UTC m=+2562.068745410" observedRunningTime="2026-01-22 09:48:38.593154334 +0000 UTC m=+2562.915341457" watchObservedRunningTime="2026-01-22 09:48:38.606855387 +0000 UTC m=+2562.929042499" Jan 22 09:48:38 crc kubenswrapper[4811]: I0122 09:48:38.628735 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-84776b8f5f-8h2x2" podStartSLOduration=3.9096080349999998 podStartE2EDuration="18.628674125s" podCreationTimestamp="2026-01-22 09:48:20 +0000 UTC" firstStartedPulling="2026-01-22 09:48:22.985977084 +0000 UTC m=+2547.308164207" lastFinishedPulling="2026-01-22 09:48:37.705043173 +0000 UTC m=+2562.027230297" observedRunningTime="2026-01-22 09:48:38.617285332 +0000 UTC m=+2562.939472455" watchObservedRunningTime="2026-01-22 09:48:38.628674125 +0000 UTC m=+2562.950861248" Jan 22 09:48:38 crc kubenswrapper[4811]: I0122 09:48:38.660463 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-f75b46fc8-4l2b8" podStartSLOduration=3.389471684 podStartE2EDuration="15.660440168s" podCreationTimestamp="2026-01-22 09:48:23 +0000 UTC" firstStartedPulling="2026-01-22 09:48:25.509125202 +0000 UTC m=+2549.831312325" lastFinishedPulling="2026-01-22 09:48:37.780093686 +0000 UTC m=+2562.102280809" observedRunningTime="2026-01-22 09:48:38.641273579 +0000 UTC m=+2562.963460703" watchObservedRunningTime="2026-01-22 09:48:38.660440168 +0000 UTC m=+2562.982627290" Jan 22 09:48:39 crc kubenswrapper[4811]: I0122 09:48:39.046525 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 22 09:48:39 crc kubenswrapper[4811]: I0122 09:48:39.047045 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 22 09:48:39 crc kubenswrapper[4811]: I0122 09:48:39.076105 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 22 09:48:39 crc kubenswrapper[4811]: I0122 09:48:39.107734 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 22 09:48:39 crc kubenswrapper[4811]: I0122 09:48:39.195309 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 22 09:48:39 crc kubenswrapper[4811]: I0122 09:48:39.195348 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 22 09:48:39 crc kubenswrapper[4811]: I0122 09:48:39.232580 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 22 09:48:39 crc kubenswrapper[4811]: I0122 09:48:39.245249 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 22 09:48:39 crc kubenswrapper[4811]: I0122 09:48:39.595851 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 22 09:48:39 crc kubenswrapper[4811]: I0122 09:48:39.596076 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 22 09:48:39 crc kubenswrapper[4811]: I0122 
09:48:39.596207 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 22 09:48:39 crc kubenswrapper[4811]: I0122 09:48:39.596248 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 22 09:48:41 crc kubenswrapper[4811]: I0122 09:48:41.383714 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-84776b8f5f-8h2x2" Jan 22 09:48:41 crc kubenswrapper[4811]: I0122 09:48:41.574879 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-79dbbfc64c-27fss" Jan 22 09:48:42 crc kubenswrapper[4811]: I0122 09:48:42.134595 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 22 09:48:42 crc kubenswrapper[4811]: I0122 09:48:42.135028 4811 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 09:48:42 crc kubenswrapper[4811]: I0122 09:48:42.140046 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 22 09:48:42 crc kubenswrapper[4811]: I0122 09:48:42.232901 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 22 09:48:42 crc kubenswrapper[4811]: I0122 09:48:42.233013 4811 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 09:48:42 crc kubenswrapper[4811]: I0122 09:48:42.239714 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 22 09:48:44 crc kubenswrapper[4811]: I0122 09:48:44.206700 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-d6c54dc74-r2gjr" Jan 22 09:48:44 crc kubenswrapper[4811]: I0122 09:48:44.206971 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-d6c54dc74-r2gjr" Jan 22 09:48:44 crc kubenswrapper[4811]: I0122 09:48:44.397432 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-f75b46fc8-4l2b8" Jan 22 09:48:44 crc kubenswrapper[4811]: I0122 09:48:44.397483 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-f75b46fc8-4l2b8" Jan 22 09:48:46 crc kubenswrapper[4811]: I0122 09:48:46.661057 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-wlfrs" event={"ID":"c7356c23-bed8-4798-b71e-9e29994fd1e6","Type":"ContainerStarted","Data":"987d46eb8e54394ac1e23c730227f52ea9b87b2b86c5502229530ee870f34b55"} Jan 22 09:48:46 crc kubenswrapper[4811]: I0122 09:48:46.681081 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-wlfrs" podStartSLOduration=2.969576526 podStartE2EDuration="15.681063446s" podCreationTimestamp="2026-01-22 09:48:31 +0000 UTC" firstStartedPulling="2026-01-22 09:48:32.764833928 +0000 UTC m=+2557.087021051" lastFinishedPulling="2026-01-22 09:48:45.476320848 +0000 UTC m=+2569.798507971" observedRunningTime="2026-01-22 09:48:46.675961566 +0000 UTC m=+2570.998148679" watchObservedRunningTime="2026-01-22 09:48:46.681063446 +0000 UTC m=+2571.003250568" Jan 22 09:48:50 crc kubenswrapper[4811]: I0122 09:48:50.700358 4811 generic.go:334] "Generic (PLEG): container finished" podID="c7356c23-bed8-4798-b71e-9e29994fd1e6" containerID="987d46eb8e54394ac1e23c730227f52ea9b87b2b86c5502229530ee870f34b55" exitCode=0 
Jan 22 09:48:50 crc kubenswrapper[4811]: I0122 09:48:50.701044 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-wlfrs" event={"ID":"c7356c23-bed8-4798-b71e-9e29994fd1e6","Type":"ContainerDied","Data":"987d46eb8e54394ac1e23c730227f52ea9b87b2b86c5502229530ee870f34b55"}
Jan 22 09:48:52 crc kubenswrapper[4811]: I0122 09:48:52.215345 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-wlfrs"
Jan 22 09:48:52 crc kubenswrapper[4811]: I0122 09:48:52.283020 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlml4\" (UniqueName: \"kubernetes.io/projected/c7356c23-bed8-4798-b71e-9e29994fd1e6-kube-api-access-wlml4\") pod \"c7356c23-bed8-4798-b71e-9e29994fd1e6\" (UID: \"c7356c23-bed8-4798-b71e-9e29994fd1e6\") "
Jan 22 09:48:52 crc kubenswrapper[4811]: I0122 09:48:52.283119 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7356c23-bed8-4798-b71e-9e29994fd1e6-combined-ca-bundle\") pod \"c7356c23-bed8-4798-b71e-9e29994fd1e6\" (UID: \"c7356c23-bed8-4798-b71e-9e29994fd1e6\") "
Jan 22 09:48:52 crc kubenswrapper[4811]: I0122 09:48:52.283172 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7356c23-bed8-4798-b71e-9e29994fd1e6-config-data\") pod \"c7356c23-bed8-4798-b71e-9e29994fd1e6\" (UID: \"c7356c23-bed8-4798-b71e-9e29994fd1e6\") "
Jan 22 09:48:52 crc kubenswrapper[4811]: I0122 09:48:52.289749 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7356c23-bed8-4798-b71e-9e29994fd1e6-kube-api-access-wlml4" (OuterVolumeSpecName: "kube-api-access-wlml4") pod "c7356c23-bed8-4798-b71e-9e29994fd1e6" (UID: "c7356c23-bed8-4798-b71e-9e29994fd1e6"). InnerVolumeSpecName "kube-api-access-wlml4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:48:52 crc kubenswrapper[4811]: I0122 09:48:52.311981 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7356c23-bed8-4798-b71e-9e29994fd1e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7356c23-bed8-4798-b71e-9e29994fd1e6" (UID: "c7356c23-bed8-4798-b71e-9e29994fd1e6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:48:52 crc kubenswrapper[4811]: I0122 09:48:52.315523 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7356c23-bed8-4798-b71e-9e29994fd1e6-config-data" (OuterVolumeSpecName: "config-data") pod "c7356c23-bed8-4798-b71e-9e29994fd1e6" (UID: "c7356c23-bed8-4798-b71e-9e29994fd1e6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:48:52 crc kubenswrapper[4811]: I0122 09:48:52.385985 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/c7356c23-bed8-4798-b71e-9e29994fd1e6-job-config-data\") pod \"c7356c23-bed8-4798-b71e-9e29994fd1e6\" (UID: \"c7356c23-bed8-4798-b71e-9e29994fd1e6\") "
Jan 22 09:48:52 crc kubenswrapper[4811]: I0122 09:48:52.387726 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7356c23-bed8-4798-b71e-9e29994fd1e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 09:48:52 crc kubenswrapper[4811]: I0122 09:48:52.387804 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7356c23-bed8-4798-b71e-9e29994fd1e6-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 09:48:52 crc kubenswrapper[4811]: I0122 09:48:52.387865 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlml4\" (UniqueName: \"kubernetes.io/projected/c7356c23-bed8-4798-b71e-9e29994fd1e6-kube-api-access-wlml4\") on node \"crc\" DevicePath \"\""
Jan 22 09:48:52 crc kubenswrapper[4811]: I0122 09:48:52.395738 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7356c23-bed8-4798-b71e-9e29994fd1e6-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "c7356c23-bed8-4798-b71e-9e29994fd1e6" (UID: "c7356c23-bed8-4798-b71e-9e29994fd1e6"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:48:52 crc kubenswrapper[4811]: I0122 09:48:52.490980 4811 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/c7356c23-bed8-4798-b71e-9e29994fd1e6-job-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 09:48:52 crc kubenswrapper[4811]: I0122 09:48:52.719495 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-wlfrs" event={"ID":"c7356c23-bed8-4798-b71e-9e29994fd1e6","Type":"ContainerDied","Data":"0b16eda58058f9835cb973795c8fb842b139a509e6b6f9486459205b3ae918b7"}
Jan 22 09:48:52 crc kubenswrapper[4811]: I0122 09:48:52.719552 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b16eda58058f9835cb973795c8fb842b139a509e6b6f9486459205b3ae918b7"
Jan 22 09:48:52 crc kubenswrapper[4811]: I0122 09:48:52.719688 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-wlfrs"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.108971 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"]
Jan 22 09:48:53 crc kubenswrapper[4811]: E0122 09:48:53.109803 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7356c23-bed8-4798-b71e-9e29994fd1e6" containerName="manila-db-sync"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.109825 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7356c23-bed8-4798-b71e-9e29994fd1e6" containerName="manila-db-sync"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.110049 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7356c23-bed8-4798-b71e-9e29994fd1e6" containerName="manila-db-sync"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.111089 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.120000 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"]
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.126133 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.128025 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.128678 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.128871 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-hnqt9"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.129040 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.129662 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.148793 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"]
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.157473 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"]
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.217958 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0326dd33-6447-432e-9f9f-1ee950f5c82d-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"0326dd33-6447-432e-9f9f-1ee950f5c82d\") " pod="openstack/manila-share-share1-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.218020 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df635a7c-5d1e-4116-af78-c06f331d0a8b-config-data\") pod \"manila-scheduler-0\" (UID: \"df635a7c-5d1e-4116-af78-c06f331d0a8b\") " pod="openstack/manila-scheduler-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.218055 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0326dd33-6447-432e-9f9f-1ee950f5c82d-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"0326dd33-6447-432e-9f9f-1ee950f5c82d\") " pod="openstack/manila-share-share1-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.218145 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df635a7c-5d1e-4116-af78-c06f331d0a8b-scripts\") pod \"manila-scheduler-0\" (UID: \"df635a7c-5d1e-4116-af78-c06f331d0a8b\") " pod="openstack/manila-scheduler-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.218295 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/df635a7c-5d1e-4116-af78-c06f331d0a8b-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"df635a7c-5d1e-4116-af78-c06f331d0a8b\") " pod="openstack/manila-scheduler-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.218327 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0326dd33-6447-432e-9f9f-1ee950f5c82d-scripts\") pod \"manila-share-share1-0\" (UID: \"0326dd33-6447-432e-9f9f-1ee950f5c82d\") " pod="openstack/manila-share-share1-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.218356 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0326dd33-6447-432e-9f9f-1ee950f5c82d-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"0326dd33-6447-432e-9f9f-1ee950f5c82d\") " pod="openstack/manila-share-share1-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.218383 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wprcr\" (UniqueName: \"kubernetes.io/projected/df635a7c-5d1e-4116-af78-c06f331d0a8b-kube-api-access-wprcr\") pod \"manila-scheduler-0\" (UID: \"df635a7c-5d1e-4116-af78-c06f331d0a8b\") " pod="openstack/manila-scheduler-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.218442 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/0326dd33-6447-432e-9f9f-1ee950f5c82d-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"0326dd33-6447-432e-9f9f-1ee950f5c82d\") " pod="openstack/manila-share-share1-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.218503 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0326dd33-6447-432e-9f9f-1ee950f5c82d-config-data\") pod \"manila-share-share1-0\" (UID: \"0326dd33-6447-432e-9f9f-1ee950f5c82d\") " pod="openstack/manila-share-share1-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.218533 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df635a7c-5d1e-4116-af78-c06f331d0a8b-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"df635a7c-5d1e-4116-af78-c06f331d0a8b\") " pod="openstack/manila-scheduler-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.218580 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptrb2\" (UniqueName: \"kubernetes.io/projected/0326dd33-6447-432e-9f9f-1ee950f5c82d-kube-api-access-ptrb2\") pod \"manila-share-share1-0\" (UID: \"0326dd33-6447-432e-9f9f-1ee950f5c82d\") " pod="openstack/manila-share-share1-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.218694 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0326dd33-6447-432e-9f9f-1ee950f5c82d-ceph\") pod \"manila-share-share1-0\" (UID: \"0326dd33-6447-432e-9f9f-1ee950f5c82d\") " pod="openstack/manila-share-share1-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.218737 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/df635a7c-5d1e-4116-af78-c06f331d0a8b-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"df635a7c-5d1e-4116-af78-c06f331d0a8b\") " pod="openstack/manila-scheduler-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.290831 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-595b86679f-c5h4r"]
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.292522 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-595b86679f-c5h4r"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.319837 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0326dd33-6447-432e-9f9f-1ee950f5c82d-ceph\") pod \"manila-share-share1-0\" (UID: \"0326dd33-6447-432e-9f9f-1ee950f5c82d\") " pod="openstack/manila-share-share1-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.319873 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/df635a7c-5d1e-4116-af78-c06f331d0a8b-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"df635a7c-5d1e-4116-af78-c06f331d0a8b\") " pod="openstack/manila-scheduler-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.319902 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0326dd33-6447-432e-9f9f-1ee950f5c82d-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"0326dd33-6447-432e-9f9f-1ee950f5c82d\") " pod="openstack/manila-share-share1-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.319924 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df635a7c-5d1e-4116-af78-c06f331d0a8b-config-data\") pod \"manila-scheduler-0\" (UID: \"df635a7c-5d1e-4116-af78-c06f331d0a8b\") " pod="openstack/manila-scheduler-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.319944 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0326dd33-6447-432e-9f9f-1ee950f5c82d-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"0326dd33-6447-432e-9f9f-1ee950f5c82d\") " pod="openstack/manila-share-share1-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.319970 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4943dd74-260e-4c75-af13-64455ecded8f-dns-svc\") pod \"dnsmasq-dns-595b86679f-c5h4r\" (UID: \"4943dd74-260e-4c75-af13-64455ecded8f\") " pod="openstack/dnsmasq-dns-595b86679f-c5h4r"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.319998 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df635a7c-5d1e-4116-af78-c06f331d0a8b-scripts\") pod \"manila-scheduler-0\" (UID: \"df635a7c-5d1e-4116-af78-c06f331d0a8b\") " pod="openstack/manila-scheduler-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.320019 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4943dd74-260e-4c75-af13-64455ecded8f-config\") pod \"dnsmasq-dns-595b86679f-c5h4r\" (UID: \"4943dd74-260e-4c75-af13-64455ecded8f\") " pod="openstack/dnsmasq-dns-595b86679f-c5h4r"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.320034 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4943dd74-260e-4c75-af13-64455ecded8f-openstack-edpm-ipam\") pod \"dnsmasq-dns-595b86679f-c5h4r\" (UID: \"4943dd74-260e-4c75-af13-64455ecded8f\") " pod="openstack/dnsmasq-dns-595b86679f-c5h4r"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.320081 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4943dd74-260e-4c75-af13-64455ecded8f-ovsdbserver-sb\") pod \"dnsmasq-dns-595b86679f-c5h4r\" (UID: \"4943dd74-260e-4c75-af13-64455ecded8f\") " pod="openstack/dnsmasq-dns-595b86679f-c5h4r"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.320101 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4943dd74-260e-4c75-af13-64455ecded8f-ovsdbserver-nb\") pod \"dnsmasq-dns-595b86679f-c5h4r\" (UID: \"4943dd74-260e-4c75-af13-64455ecded8f\") " pod="openstack/dnsmasq-dns-595b86679f-c5h4r"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.320123 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/df635a7c-5d1e-4116-af78-c06f331d0a8b-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"df635a7c-5d1e-4116-af78-c06f331d0a8b\") " pod="openstack/manila-scheduler-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.320144 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0326dd33-6447-432e-9f9f-1ee950f5c82d-scripts\") pod \"manila-share-share1-0\" (UID: \"0326dd33-6447-432e-9f9f-1ee950f5c82d\") " pod="openstack/manila-share-share1-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.320161 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0326dd33-6447-432e-9f9f-1ee950f5c82d-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"0326dd33-6447-432e-9f9f-1ee950f5c82d\") " pod="openstack/manila-share-share1-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.320179 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wprcr\" (UniqueName: \"kubernetes.io/projected/df635a7c-5d1e-4116-af78-c06f331d0a8b-kube-api-access-wprcr\") pod \"manila-scheduler-0\" (UID: \"df635a7c-5d1e-4116-af78-c06f331d0a8b\") " pod="openstack/manila-scheduler-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.320207 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/0326dd33-6447-432e-9f9f-1ee950f5c82d-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"0326dd33-6447-432e-9f9f-1ee950f5c82d\") " pod="openstack/manila-share-share1-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.320229 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0326dd33-6447-432e-9f9f-1ee950f5c82d-config-data\") pod \"manila-share-share1-0\" (UID: \"0326dd33-6447-432e-9f9f-1ee950f5c82d\") " pod="openstack/manila-share-share1-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.320248 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df635a7c-5d1e-4116-af78-c06f331d0a8b-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"df635a7c-5d1e-4116-af78-c06f331d0a8b\") " pod="openstack/manila-scheduler-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.320268 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvhmg\" (UniqueName: \"kubernetes.io/projected/4943dd74-260e-4c75-af13-64455ecded8f-kube-api-access-cvhmg\") pod \"dnsmasq-dns-595b86679f-c5h4r\" (UID: \"4943dd74-260e-4c75-af13-64455ecded8f\") " pod="openstack/dnsmasq-dns-595b86679f-c5h4r"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.320294 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptrb2\" (UniqueName: \"kubernetes.io/projected/0326dd33-6447-432e-9f9f-1ee950f5c82d-kube-api-access-ptrb2\") pod \"manila-share-share1-0\" (UID: \"0326dd33-6447-432e-9f9f-1ee950f5c82d\") " pod="openstack/manila-share-share1-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.321409 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/df635a7c-5d1e-4116-af78-c06f331d0a8b-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"df635a7c-5d1e-4116-af78-c06f331d0a8b\") " pod="openstack/manila-scheduler-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.323097 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/0326dd33-6447-432e-9f9f-1ee950f5c82d-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"0326dd33-6447-432e-9f9f-1ee950f5c82d\") " pod="openstack/manila-share-share1-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.323151 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0326dd33-6447-432e-9f9f-1ee950f5c82d-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"0326dd33-6447-432e-9f9f-1ee950f5c82d\") " pod="openstack/manila-share-share1-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.328774 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-595b86679f-c5h4r"]
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.333026 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0326dd33-6447-432e-9f9f-1ee950f5c82d-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"0326dd33-6447-432e-9f9f-1ee950f5c82d\") " pod="openstack/manila-share-share1-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.333439 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df635a7c-5d1e-4116-af78-c06f331d0a8b-config-data\") pod \"manila-scheduler-0\" (UID: \"df635a7c-5d1e-4116-af78-c06f331d0a8b\") " pod="openstack/manila-scheduler-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.336474 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/df635a7c-5d1e-4116-af78-c06f331d0a8b-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"df635a7c-5d1e-4116-af78-c06f331d0a8b\") " pod="openstack/manila-scheduler-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.337446 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0326dd33-6447-432e-9f9f-1ee950f5c82d-scripts\") pod \"manila-share-share1-0\" (UID: \"0326dd33-6447-432e-9f9f-1ee950f5c82d\") " pod="openstack/manila-share-share1-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.345730 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0326dd33-6447-432e-9f9f-1ee950f5c82d-ceph\") pod \"manila-share-share1-0\" (UID: \"0326dd33-6447-432e-9f9f-1ee950f5c82d\") " pod="openstack/manila-share-share1-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.347359 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df635a7c-5d1e-4116-af78-c06f331d0a8b-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"df635a7c-5d1e-4116-af78-c06f331d0a8b\") " pod="openstack/manila-scheduler-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.351073 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0326dd33-6447-432e-9f9f-1ee950f5c82d-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"0326dd33-6447-432e-9f9f-1ee950f5c82d\") " pod="openstack/manila-share-share1-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.362371 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wprcr\" (UniqueName: \"kubernetes.io/projected/df635a7c-5d1e-4116-af78-c06f331d0a8b-kube-api-access-wprcr\") pod \"manila-scheduler-0\" (UID: \"df635a7c-5d1e-4116-af78-c06f331d0a8b\") " pod="openstack/manila-scheduler-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.364118 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0326dd33-6447-432e-9f9f-1ee950f5c82d-config-data\") pod \"manila-share-share1-0\" (UID: \"0326dd33-6447-432e-9f9f-1ee950f5c82d\") " pod="openstack/manila-share-share1-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.364927 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df635a7c-5d1e-4116-af78-c06f331d0a8b-scripts\") pod \"manila-scheduler-0\" (UID: \"df635a7c-5d1e-4116-af78-c06f331d0a8b\") " pod="openstack/manila-scheduler-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.373517 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptrb2\" (UniqueName: \"kubernetes.io/projected/0326dd33-6447-432e-9f9f-1ee950f5c82d-kube-api-access-ptrb2\") pod \"manila-share-share1-0\" (UID: \"0326dd33-6447-432e-9f9f-1ee950f5c82d\") " pod="openstack/manila-share-share1-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.422947 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4943dd74-260e-4c75-af13-64455ecded8f-dns-svc\") pod \"dnsmasq-dns-595b86679f-c5h4r\" (UID: \"4943dd74-260e-4c75-af13-64455ecded8f\") " pod="openstack/dnsmasq-dns-595b86679f-c5h4r"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.423027 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4943dd74-260e-4c75-af13-64455ecded8f-config\") pod \"dnsmasq-dns-595b86679f-c5h4r\" (UID: \"4943dd74-260e-4c75-af13-64455ecded8f\") " pod="openstack/dnsmasq-dns-595b86679f-c5h4r"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.423046 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4943dd74-260e-4c75-af13-64455ecded8f-openstack-edpm-ipam\") pod \"dnsmasq-dns-595b86679f-c5h4r\" (UID: \"4943dd74-260e-4c75-af13-64455ecded8f\") " pod="openstack/dnsmasq-dns-595b86679f-c5h4r"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.423085 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4943dd74-260e-4c75-af13-64455ecded8f-ovsdbserver-sb\") pod \"dnsmasq-dns-595b86679f-c5h4r\" (UID: \"4943dd74-260e-4c75-af13-64455ecded8f\") " pod="openstack/dnsmasq-dns-595b86679f-c5h4r"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.423103 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4943dd74-260e-4c75-af13-64455ecded8f-ovsdbserver-nb\") pod \"dnsmasq-dns-595b86679f-c5h4r\" (UID: \"4943dd74-260e-4c75-af13-64455ecded8f\") " pod="openstack/dnsmasq-dns-595b86679f-c5h4r"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.423165 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvhmg\" (UniqueName: \"kubernetes.io/projected/4943dd74-260e-4c75-af13-64455ecded8f-kube-api-access-cvhmg\") pod \"dnsmasq-dns-595b86679f-c5h4r\" (UID: \"4943dd74-260e-4c75-af13-64455ecded8f\") " pod="openstack/dnsmasq-dns-595b86679f-c5h4r"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.424282 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4943dd74-260e-4c75-af13-64455ecded8f-dns-svc\") pod \"dnsmasq-dns-595b86679f-c5h4r\" (UID: \"4943dd74-260e-4c75-af13-64455ecded8f\") " pod="openstack/dnsmasq-dns-595b86679f-c5h4r"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.424967 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4943dd74-260e-4c75-af13-64455ecded8f-config\") pod \"dnsmasq-dns-595b86679f-c5h4r\" (UID: \"4943dd74-260e-4c75-af13-64455ecded8f\") " pod="openstack/dnsmasq-dns-595b86679f-c5h4r"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.425488 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4943dd74-260e-4c75-af13-64455ecded8f-openstack-edpm-ipam\") pod \"dnsmasq-dns-595b86679f-c5h4r\" (UID: \"4943dd74-260e-4c75-af13-64455ecded8f\") " pod="openstack/dnsmasq-dns-595b86679f-c5h4r"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.426013 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4943dd74-260e-4c75-af13-64455ecded8f-ovsdbserver-sb\") pod \"dnsmasq-dns-595b86679f-c5h4r\" (UID: \"4943dd74-260e-4c75-af13-64455ecded8f\") " pod="openstack/dnsmasq-dns-595b86679f-c5h4r"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.426469 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4943dd74-260e-4c75-af13-64455ecded8f-ovsdbserver-nb\") pod \"dnsmasq-dns-595b86679f-c5h4r\" (UID: \"4943dd74-260e-4c75-af13-64455ecded8f\") " pod="openstack/dnsmasq-dns-595b86679f-c5h4r"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.465653 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"]
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.468104 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.468900 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvhmg\" (UniqueName: \"kubernetes.io/projected/4943dd74-260e-4c75-af13-64455ecded8f-kube-api-access-cvhmg\") pod \"dnsmasq-dns-595b86679f-c5h4r\" (UID: \"4943dd74-260e-4c75-af13-64455ecded8f\") " pod="openstack/dnsmasq-dns-595b86679f-c5h4r"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.470095 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.470673 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.481772 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.516544 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"]
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.620734 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-595b86679f-c5h4r"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.628643 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d694d36-df2f-4b6b-a7ac-59afe98760e3-scripts\") pod \"manila-api-0\" (UID: \"0d694d36-df2f-4b6b-a7ac-59afe98760e3\") " pod="openstack/manila-api-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.628764 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d694d36-df2f-4b6b-a7ac-59afe98760e3-logs\") pod \"manila-api-0\" (UID: \"0d694d36-df2f-4b6b-a7ac-59afe98760e3\") " pod="openstack/manila-api-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.628860 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d694d36-df2f-4b6b-a7ac-59afe98760e3-config-data-custom\") pod \"manila-api-0\" (UID: \"0d694d36-df2f-4b6b-a7ac-59afe98760e3\") " pod="openstack/manila-api-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.633036 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d694d36-df2f-4b6b-a7ac-59afe98760e3-config-data\") pod \"manila-api-0\" (UID: \"0d694d36-df2f-4b6b-a7ac-59afe98760e3\") " pod="openstack/manila-api-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.633096 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmdtt\" (UniqueName: \"kubernetes.io/projected/0d694d36-df2f-4b6b-a7ac-59afe98760e3-kube-api-access-lmdtt\") pod \"manila-api-0\" (UID: \"0d694d36-df2f-4b6b-a7ac-59afe98760e3\") " pod="openstack/manila-api-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.633150 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d694d36-df2f-4b6b-a7ac-59afe98760e3-etc-machine-id\") pod \"manila-api-0\" (UID: \"0d694d36-df2f-4b6b-a7ac-59afe98760e3\") " pod="openstack/manila-api-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.633230 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d694d36-df2f-4b6b-a7ac-59afe98760e3-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"0d694d36-df2f-4b6b-a7ac-59afe98760e3\") " pod="openstack/manila-api-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.736399 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d694d36-df2f-4b6b-a7ac-59afe98760e3-config-data\") pod \"manila-api-0\" (UID: \"0d694d36-df2f-4b6b-a7ac-59afe98760e3\") " pod="openstack/manila-api-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.736472 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmdtt\" (UniqueName: \"kubernetes.io/projected/0d694d36-df2f-4b6b-a7ac-59afe98760e3-kube-api-access-lmdtt\") pod \"manila-api-0\" (UID: \"0d694d36-df2f-4b6b-a7ac-59afe98760e3\") " pod="openstack/manila-api-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.736535 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d694d36-df2f-4b6b-a7ac-59afe98760e3-etc-machine-id\") pod \"manila-api-0\" (UID: \"0d694d36-df2f-4b6b-a7ac-59afe98760e3\") " pod="openstack/manila-api-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.736585 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d694d36-df2f-4b6b-a7ac-59afe98760e3-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"0d694d36-df2f-4b6b-a7ac-59afe98760e3\") " pod="openstack/manila-api-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.736670 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d694d36-df2f-4b6b-a7ac-59afe98760e3-scripts\") pod \"manila-api-0\" (UID: \"0d694d36-df2f-4b6b-a7ac-59afe98760e3\") " pod="openstack/manila-api-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.736742 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d694d36-df2f-4b6b-a7ac-59afe98760e3-logs\") pod \"manila-api-0\" (UID: \"0d694d36-df2f-4b6b-a7ac-59afe98760e3\") " pod="openstack/manila-api-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.736797 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d694d36-df2f-4b6b-a7ac-59afe98760e3-config-data-custom\") pod \"manila-api-0\" (UID: \"0d694d36-df2f-4b6b-a7ac-59afe98760e3\") " pod="openstack/manila-api-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.736913 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d694d36-df2f-4b6b-a7ac-59afe98760e3-etc-machine-id\") pod \"manila-api-0\" (UID: \"0d694d36-df2f-4b6b-a7ac-59afe98760e3\") " pod="openstack/manila-api-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.740827 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d694d36-df2f-4b6b-a7ac-59afe98760e3-logs\") pod \"manila-api-0\" (UID: \"0d694d36-df2f-4b6b-a7ac-59afe98760e3\") " pod="openstack/manila-api-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.744460 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d694d36-df2f-4b6b-a7ac-59afe98760e3-scripts\") pod \"manila-api-0\" (UID: \"0d694d36-df2f-4b6b-a7ac-59afe98760e3\") " pod="openstack/manila-api-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.745009 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d694d36-df2f-4b6b-a7ac-59afe98760e3-config-data-custom\") pod \"manila-api-0\" (UID: \"0d694d36-df2f-4b6b-a7ac-59afe98760e3\") " pod="openstack/manila-api-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.746166 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d694d36-df2f-4b6b-a7ac-59afe98760e3-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"0d694d36-df2f-4b6b-a7ac-59afe98760e3\") " pod="openstack/manila-api-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.746806 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d694d36-df2f-4b6b-a7ac-59afe98760e3-config-data\") pod \"manila-api-0\" (UID: \"0d694d36-df2f-4b6b-a7ac-59afe98760e3\") " pod="openstack/manila-api-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.769192 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmdtt\" (UniqueName: \"kubernetes.io/projected/0d694d36-df2f-4b6b-a7ac-59afe98760e3-kube-api-access-lmdtt\") pod \"manila-api-0\" (UID: \"0d694d36-df2f-4b6b-a7ac-59afe98760e3\") " pod="openstack/manila-api-0"
Jan 22 09:48:53 crc kubenswrapper[4811]: I0122 09:48:53.906481 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0"
Jan 22 09:48:54 crc kubenswrapper[4811]: I0122 09:48:54.235567 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-d6c54dc74-r2gjr" podUID="660d9785-0f9b-4953-af76-580ed227c244" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.241:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.241:8443: connect: connection refused"
Jan 22 09:48:54 crc kubenswrapper[4811]: I0122 09:48:54.389364 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"]
Jan 22 09:48:54 crc kubenswrapper[4811]: I0122 09:48:54.402906 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-f75b46fc8-4l2b8" podUID="9c9eef01-268a-4d3c-b3c3-f30cd80694e0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.242:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.242:8443: connect: connection refused"
Jan 22 09:48:54 crc kubenswrapper[4811]: I0122 09:48:54.643089 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-595b86679f-c5h4r"]
Jan 22 09:48:54 crc kubenswrapper[4811]: I0122 09:48:54.714203 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"]
Jan 22 09:48:54 crc kubenswrapper[4811]: W0122 09:48:54.716859 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0326dd33_6447_432e_9f9f_1ee950f5c82d.slice/crio-277d0c04705dae46590f8c97c9a0c0b602843c97d0a283f00575bb9b26ac3e7d WatchSource:0}: Error finding container 277d0c04705dae46590f8c97c9a0c0b602843c97d0a283f00575bb9b26ac3e7d: Status 404 returned error can't find the container with id 277d0c04705dae46590f8c97c9a0c0b602843c97d0a283f00575bb9b26ac3e7d
Jan 22 09:48:54 crc kubenswrapper[4811]: I0122 09:48:54.790671 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-595b86679f-c5h4r" event={"ID":"4943dd74-260e-4c75-af13-64455ecded8f","Type":"ContainerStarted","Data":"e762b23976348106f3adf130787760a5dab269c79bb85578135741633372813e"}
Jan 22 09:48:54 crc kubenswrapper[4811]: I0122 09:48:54.810892 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0326dd33-6447-432e-9f9f-1ee950f5c82d","Type":"ContainerStarted","Data":"277d0c04705dae46590f8c97c9a0c0b602843c97d0a283f00575bb9b26ac3e7d"}
Jan 22 09:48:54 crc kubenswrapper[4811]: I0122 09:48:54.812207 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"df635a7c-5d1e-4116-af78-c06f331d0a8b","Type":"ContainerStarted","Data":"953057de622309c1565f208edb7b5974238f984cc8f694117fb63f9e5f82480d"}
Jan 22 09:48:54 crc kubenswrapper[4811]: I0122 09:48:54.883294 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"]
Jan 22 09:48:55 crc kubenswrapper[4811]: I0122 09:48:55.859066 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"0d694d36-df2f-4b6b-a7ac-59afe98760e3","Type":"ContainerStarted","Data":"67ef8b60d25a06631071e652ff8fda3d2318ed13ff9ef708ddf21cde4fc63c92"}
Jan 22 09:48:55 crc kubenswrapper[4811]: I0122 09:48:55.859428 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"0d694d36-df2f-4b6b-a7ac-59afe98760e3","Type":"ContainerStarted","Data":"d7bdd38600d41186aad2e1cc45bc8d5163d8b63731441e83da87523f8174c4e4"}
Jan 22 09:48:55 crc
kubenswrapper[4811]: I0122 09:48:55.870810 4811 generic.go:334] "Generic (PLEG): container finished" podID="4943dd74-260e-4c75-af13-64455ecded8f" containerID="f10dd5828f3eb14a21ef569e6d720205d84ce18da1d01d8aac95d7897602a19e" exitCode=0 Jan 22 09:48:55 crc kubenswrapper[4811]: I0122 09:48:55.870848 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-595b86679f-c5h4r" event={"ID":"4943dd74-260e-4c75-af13-64455ecded8f","Type":"ContainerDied","Data":"f10dd5828f3eb14a21ef569e6d720205d84ce18da1d01d8aac95d7897602a19e"} Jan 22 09:48:55 crc kubenswrapper[4811]: I0122 09:48:55.937741 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Jan 22 09:48:56 crc kubenswrapper[4811]: I0122 09:48:56.884759 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"0d694d36-df2f-4b6b-a7ac-59afe98760e3","Type":"ContainerStarted","Data":"54adb151d01e7fea2aaf78d1cfe6fd721eae5a3e9b9acc79bf7336f9831021b2"} Jan 22 09:48:56 crc kubenswrapper[4811]: I0122 09:48:56.885229 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="0d694d36-df2f-4b6b-a7ac-59afe98760e3" containerName="manila-api-log" containerID="cri-o://67ef8b60d25a06631071e652ff8fda3d2318ed13ff9ef708ddf21cde4fc63c92" gracePeriod=30 Jan 22 09:48:56 crc kubenswrapper[4811]: I0122 09:48:56.885462 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Jan 22 09:48:56 crc kubenswrapper[4811]: I0122 09:48:56.885720 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="0d694d36-df2f-4b6b-a7ac-59afe98760e3" containerName="manila-api" containerID="cri-o://54adb151d01e7fea2aaf78d1cfe6fd721eae5a3e9b9acc79bf7336f9831021b2" gracePeriod=30 Jan 22 09:48:56 crc kubenswrapper[4811]: I0122 09:48:56.916582 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=3.916565521 podStartE2EDuration="3.916565521s" podCreationTimestamp="2026-01-22 09:48:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:48:56.909122297 +0000 UTC m=+2581.231309420" watchObservedRunningTime="2026-01-22 09:48:56.916565521 +0000 UTC m=+2581.238752643" Jan 22 09:48:57 crc kubenswrapper[4811]: I0122 09:48:57.743431 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Jan 22 09:48:57 crc kubenswrapper[4811]: I0122 09:48:57.860117 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmdtt\" (UniqueName: \"kubernetes.io/projected/0d694d36-df2f-4b6b-a7ac-59afe98760e3-kube-api-access-lmdtt\") pod \"0d694d36-df2f-4b6b-a7ac-59afe98760e3\" (UID: \"0d694d36-df2f-4b6b-a7ac-59afe98760e3\") " Jan 22 09:48:57 crc kubenswrapper[4811]: I0122 09:48:57.860314 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d694d36-df2f-4b6b-a7ac-59afe98760e3-scripts\") pod \"0d694d36-df2f-4b6b-a7ac-59afe98760e3\" (UID: \"0d694d36-df2f-4b6b-a7ac-59afe98760e3\") " Jan 22 09:48:57 crc kubenswrapper[4811]: I0122 09:48:57.860579 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d694d36-df2f-4b6b-a7ac-59afe98760e3-logs\") pod \"0d694d36-df2f-4b6b-a7ac-59afe98760e3\" (UID: \"0d694d36-df2f-4b6b-a7ac-59afe98760e3\") " Jan 22 09:48:57 crc kubenswrapper[4811]: I0122 09:48:57.860701 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d694d36-df2f-4b6b-a7ac-59afe98760e3-config-data\") pod \"0d694d36-df2f-4b6b-a7ac-59afe98760e3\" (UID: \"0d694d36-df2f-4b6b-a7ac-59afe98760e3\") " Jan 22 09:48:57 crc kubenswrapper[4811]: I0122 09:48:57.860861 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d694d36-df2f-4b6b-a7ac-59afe98760e3-etc-machine-id\") pod \"0d694d36-df2f-4b6b-a7ac-59afe98760e3\" (UID: \"0d694d36-df2f-4b6b-a7ac-59afe98760e3\") " Jan 22 09:48:57 crc kubenswrapper[4811]: I0122 09:48:57.860971 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d694d36-df2f-4b6b-a7ac-59afe98760e3-config-data-custom\") pod \"0d694d36-df2f-4b6b-a7ac-59afe98760e3\" (UID: \"0d694d36-df2f-4b6b-a7ac-59afe98760e3\") " Jan 22 09:48:57 crc kubenswrapper[4811]: I0122 09:48:57.861092 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d694d36-df2f-4b6b-a7ac-59afe98760e3-combined-ca-bundle\") pod \"0d694d36-df2f-4b6b-a7ac-59afe98760e3\" (UID: \"0d694d36-df2f-4b6b-a7ac-59afe98760e3\") " Jan 22 09:48:57 crc kubenswrapper[4811]: I0122 09:48:57.860883 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d694d36-df2f-4b6b-a7ac-59afe98760e3-logs" (OuterVolumeSpecName: "logs") pod "0d694d36-df2f-4b6b-a7ac-59afe98760e3" (UID: "0d694d36-df2f-4b6b-a7ac-59afe98760e3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:48:57 crc kubenswrapper[4811]: I0122 09:48:57.861331 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d694d36-df2f-4b6b-a7ac-59afe98760e3-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0d694d36-df2f-4b6b-a7ac-59afe98760e3" (UID: "0d694d36-df2f-4b6b-a7ac-59afe98760e3"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:48:57 crc kubenswrapper[4811]: I0122 09:48:57.862488 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d694d36-df2f-4b6b-a7ac-59afe98760e3-logs\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:57 crc kubenswrapper[4811]: I0122 09:48:57.862565 4811 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d694d36-df2f-4b6b-a7ac-59afe98760e3-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:57 crc kubenswrapper[4811]: I0122 09:48:57.870367 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d694d36-df2f-4b6b-a7ac-59afe98760e3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0d694d36-df2f-4b6b-a7ac-59afe98760e3" (UID: "0d694d36-df2f-4b6b-a7ac-59afe98760e3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:48:57 crc kubenswrapper[4811]: I0122 09:48:57.885198 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d694d36-df2f-4b6b-a7ac-59afe98760e3-kube-api-access-lmdtt" (OuterVolumeSpecName: "kube-api-access-lmdtt") pod "0d694d36-df2f-4b6b-a7ac-59afe98760e3" (UID: "0d694d36-df2f-4b6b-a7ac-59afe98760e3"). InnerVolumeSpecName "kube-api-access-lmdtt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:48:57 crc kubenswrapper[4811]: I0122 09:48:57.892450 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d694d36-df2f-4b6b-a7ac-59afe98760e3-scripts" (OuterVolumeSpecName: "scripts") pod "0d694d36-df2f-4b6b-a7ac-59afe98760e3" (UID: "0d694d36-df2f-4b6b-a7ac-59afe98760e3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:48:57 crc kubenswrapper[4811]: I0122 09:48:57.917336 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-595b86679f-c5h4r" event={"ID":"4943dd74-260e-4c75-af13-64455ecded8f","Type":"ContainerStarted","Data":"db413f9e2c61fa2ca8befbce79451e979f57f8d0fc16552eb6813ce3e78520ff"} Jan 22 09:48:57 crc kubenswrapper[4811]: I0122 09:48:57.917463 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-595b86679f-c5h4r" Jan 22 09:48:57 crc kubenswrapper[4811]: I0122 09:48:57.928839 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"df635a7c-5d1e-4116-af78-c06f331d0a8b","Type":"ContainerStarted","Data":"99436b7d124c7f6b06edf217c1a6b192ccdcfb23f2430ff7258a4532d7544b24"} Jan 22 09:48:57 crc kubenswrapper[4811]: I0122 09:48:57.942146 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-595b86679f-c5h4r" podStartSLOduration=4.942132144 podStartE2EDuration="4.942132144s" podCreationTimestamp="2026-01-22 09:48:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:48:57.936860585 +0000 UTC m=+2582.259047709" watchObservedRunningTime="2026-01-22 09:48:57.942132144 +0000 UTC m=+2582.264319267" Jan 22 09:48:57 crc kubenswrapper[4811]: I0122 09:48:57.947792 4811 generic.go:334] "Generic (PLEG): container finished" podID="0d694d36-df2f-4b6b-a7ac-59afe98760e3" containerID="54adb151d01e7fea2aaf78d1cfe6fd721eae5a3e9b9acc79bf7336f9831021b2" exitCode=143 Jan 22 09:48:57 crc kubenswrapper[4811]: I0122 09:48:57.947821 4811 generic.go:334] "Generic (PLEG): container finished" podID="0d694d36-df2f-4b6b-a7ac-59afe98760e3" containerID="67ef8b60d25a06631071e652ff8fda3d2318ed13ff9ef708ddf21cde4fc63c92" exitCode=143 Jan 22 09:48:57 crc kubenswrapper[4811]: I0122 09:48:57.947848 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"0d694d36-df2f-4b6b-a7ac-59afe98760e3","Type":"ContainerDied","Data":"54adb151d01e7fea2aaf78d1cfe6fd721eae5a3e9b9acc79bf7336f9831021b2"} Jan 22 09:48:57 crc kubenswrapper[4811]: I0122 09:48:57.947854 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Jan 22 09:48:57 crc kubenswrapper[4811]: I0122 09:48:57.947875 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"0d694d36-df2f-4b6b-a7ac-59afe98760e3","Type":"ContainerDied","Data":"67ef8b60d25a06631071e652ff8fda3d2318ed13ff9ef708ddf21cde4fc63c92"} Jan 22 09:48:57 crc kubenswrapper[4811]: I0122 09:48:57.947888 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"0d694d36-df2f-4b6b-a7ac-59afe98760e3","Type":"ContainerDied","Data":"d7bdd38600d41186aad2e1cc45bc8d5163d8b63731441e83da87523f8174c4e4"} Jan 22 09:48:57 crc kubenswrapper[4811]: I0122 09:48:57.947906 4811 scope.go:117] "RemoveContainer" containerID="54adb151d01e7fea2aaf78d1cfe6fd721eae5a3e9b9acc79bf7336f9831021b2" Jan 22 09:48:57 crc kubenswrapper[4811]: I0122 09:48:57.967007 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmdtt\" (UniqueName: \"kubernetes.io/projected/0d694d36-df2f-4b6b-a7ac-59afe98760e3-kube-api-access-lmdtt\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:57 crc kubenswrapper[4811]: I0122 09:48:57.967029 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d694d36-df2f-4b6b-a7ac-59afe98760e3-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:57 crc kubenswrapper[4811]: I0122 09:48:57.967039 4811 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d694d36-df2f-4b6b-a7ac-59afe98760e3-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:57 crc kubenswrapper[4811]: I0122 09:48:57.973731 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d694d36-df2f-4b6b-a7ac-59afe98760e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d694d36-df2f-4b6b-a7ac-59afe98760e3" (UID: "0d694d36-df2f-4b6b-a7ac-59afe98760e3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.040018 4811 scope.go:117] "RemoveContainer" containerID="67ef8b60d25a06631071e652ff8fda3d2318ed13ff9ef708ddf21cde4fc63c92" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.061151 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d694d36-df2f-4b6b-a7ac-59afe98760e3-config-data" (OuterVolumeSpecName: "config-data") pod "0d694d36-df2f-4b6b-a7ac-59afe98760e3" (UID: "0d694d36-df2f-4b6b-a7ac-59afe98760e3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.069617 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d694d36-df2f-4b6b-a7ac-59afe98760e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.069663 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d694d36-df2f-4b6b-a7ac-59afe98760e3-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.075211 4811 scope.go:117] "RemoveContainer" containerID="54adb151d01e7fea2aaf78d1cfe6fd721eae5a3e9b9acc79bf7336f9831021b2" Jan 22 09:48:58 crc kubenswrapper[4811]: E0122 09:48:58.079131 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54adb151d01e7fea2aaf78d1cfe6fd721eae5a3e9b9acc79bf7336f9831021b2\": container with ID starting with 54adb151d01e7fea2aaf78d1cfe6fd721eae5a3e9b9acc79bf7336f9831021b2 not found: ID does not exist" containerID="54adb151d01e7fea2aaf78d1cfe6fd721eae5a3e9b9acc79bf7336f9831021b2" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.079175 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54adb151d01e7fea2aaf78d1cfe6fd721eae5a3e9b9acc79bf7336f9831021b2"} err="failed to get container status \"54adb151d01e7fea2aaf78d1cfe6fd721eae5a3e9b9acc79bf7336f9831021b2\": rpc error: code = NotFound desc = could not find container \"54adb151d01e7fea2aaf78d1cfe6fd721eae5a3e9b9acc79bf7336f9831021b2\": container with ID starting with 54adb151d01e7fea2aaf78d1cfe6fd721eae5a3e9b9acc79bf7336f9831021b2 not found: ID does not exist" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.079199 4811 scope.go:117] "RemoveContainer" containerID="67ef8b60d25a06631071e652ff8fda3d2318ed13ff9ef708ddf21cde4fc63c92" Jan 22 09:48:58 crc kubenswrapper[4811]: E0122 09:48:58.080718 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67ef8b60d25a06631071e652ff8fda3d2318ed13ff9ef708ddf21cde4fc63c92\": container with ID starting with 67ef8b60d25a06631071e652ff8fda3d2318ed13ff9ef708ddf21cde4fc63c92 not found: ID does not exist" containerID="67ef8b60d25a06631071e652ff8fda3d2318ed13ff9ef708ddf21cde4fc63c92" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.080753 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67ef8b60d25a06631071e652ff8fda3d2318ed13ff9ef708ddf21cde4fc63c92"} err="failed to get container status \"67ef8b60d25a06631071e652ff8fda3d2318ed13ff9ef708ddf21cde4fc63c92\": rpc error: code = NotFound desc = could not find container \"67ef8b60d25a06631071e652ff8fda3d2318ed13ff9ef708ddf21cde4fc63c92\": container with ID starting with 67ef8b60d25a06631071e652ff8fda3d2318ed13ff9ef708ddf21cde4fc63c92 not found: ID does not exist" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.080779 4811 scope.go:117] "RemoveContainer" containerID="54adb151d01e7fea2aaf78d1cfe6fd721eae5a3e9b9acc79bf7336f9831021b2" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.081226 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54adb151d01e7fea2aaf78d1cfe6fd721eae5a3e9b9acc79bf7336f9831021b2"} err="failed to get container status 
\"54adb151d01e7fea2aaf78d1cfe6fd721eae5a3e9b9acc79bf7336f9831021b2\": rpc error: code = NotFound desc = could not find container \"54adb151d01e7fea2aaf78d1cfe6fd721eae5a3e9b9acc79bf7336f9831021b2\": container with ID starting with 54adb151d01e7fea2aaf78d1cfe6fd721eae5a3e9b9acc79bf7336f9831021b2 not found: ID does not exist" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.081266 4811 scope.go:117] "RemoveContainer" containerID="67ef8b60d25a06631071e652ff8fda3d2318ed13ff9ef708ddf21cde4fc63c92" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.081738 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67ef8b60d25a06631071e652ff8fda3d2318ed13ff9ef708ddf21cde4fc63c92"} err="failed to get container status \"67ef8b60d25a06631071e652ff8fda3d2318ed13ff9ef708ddf21cde4fc63c92\": rpc error: code = NotFound desc = could not find container \"67ef8b60d25a06631071e652ff8fda3d2318ed13ff9ef708ddf21cde4fc63c92\": container with ID starting with 67ef8b60d25a06631071e652ff8fda3d2318ed13ff9ef708ddf21cde4fc63c92 not found: ID does not exist" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.276483 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.284011 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-api-0"] Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.299352 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Jan 22 09:48:58 crc kubenswrapper[4811]: E0122 09:48:58.299945 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d694d36-df2f-4b6b-a7ac-59afe98760e3" containerName="manila-api-log" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.307122 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d694d36-df2f-4b6b-a7ac-59afe98760e3" containerName="manila-api-log" Jan 22 09:48:58 crc kubenswrapper[4811]: E0122 09:48:58.307225 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d694d36-df2f-4b6b-a7ac-59afe98760e3" containerName="manila-api" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.307290 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d694d36-df2f-4b6b-a7ac-59afe98760e3" containerName="manila-api" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.307557 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d694d36-df2f-4b6b-a7ac-59afe98760e3" containerName="manila-api-log" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.307612 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d694d36-df2f-4b6b-a7ac-59afe98760e3" containerName="manila-api" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.308754 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.313260 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.313532 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.313825 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.324219 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.482589 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8c25067a-ed34-4109-b8f4-d82320dedb05-etc-machine-id\") pod \"manila-api-0\" (UID: \"8c25067a-ed34-4109-b8f4-d82320dedb05\") " pod="openstack/manila-api-0" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.482669 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c25067a-ed34-4109-b8f4-d82320dedb05-config-data-custom\") pod \"manila-api-0\" (UID: \"8c25067a-ed34-4109-b8f4-d82320dedb05\") " pod="openstack/manila-api-0" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.482698 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c25067a-ed34-4109-b8f4-d82320dedb05-config-data\") pod \"manila-api-0\" (UID: \"8c25067a-ed34-4109-b8f4-d82320dedb05\") " pod="openstack/manila-api-0" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.482726 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnxs7\" (UniqueName: \"kubernetes.io/projected/8c25067a-ed34-4109-b8f4-d82320dedb05-kube-api-access-nnxs7\") pod \"manila-api-0\" (UID: \"8c25067a-ed34-4109-b8f4-d82320dedb05\") " pod="openstack/manila-api-0" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.482826 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c25067a-ed34-4109-b8f4-d82320dedb05-logs\") pod \"manila-api-0\" (UID: \"8c25067a-ed34-4109-b8f4-d82320dedb05\") " pod="openstack/manila-api-0" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.482973 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c25067a-ed34-4109-b8f4-d82320dedb05-internal-tls-certs\") pod \"manila-api-0\" (UID: \"8c25067a-ed34-4109-b8f4-d82320dedb05\") " pod="openstack/manila-api-0" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.483027 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c25067a-ed34-4109-b8f4-d82320dedb05-scripts\") pod \"manila-api-0\" (UID: \"8c25067a-ed34-4109-b8f4-d82320dedb05\") " pod="openstack/manila-api-0" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.483068 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8c25067a-ed34-4109-b8f4-d82320dedb05-public-tls-certs\") pod \"manila-api-0\" (UID: \"8c25067a-ed34-4109-b8f4-d82320dedb05\") " pod="openstack/manila-api-0" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.483263 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c25067a-ed34-4109-b8f4-d82320dedb05-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"8c25067a-ed34-4109-b8f4-d82320dedb05\") " pod="openstack/manila-api-0" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.592119 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8c25067a-ed34-4109-b8f4-d82320dedb05-etc-machine-id\") pod \"manila-api-0\" (UID: \"8c25067a-ed34-4109-b8f4-d82320dedb05\") " pod="openstack/manila-api-0" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.592185 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c25067a-ed34-4109-b8f4-d82320dedb05-config-data-custom\") pod \"manila-api-0\" (UID: \"8c25067a-ed34-4109-b8f4-d82320dedb05\") " pod="openstack/manila-api-0" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.592206 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c25067a-ed34-4109-b8f4-d82320dedb05-config-data\") pod \"manila-api-0\" (UID: \"8c25067a-ed34-4109-b8f4-d82320dedb05\") " pod="openstack/manila-api-0" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.592229 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnxs7\" (UniqueName: \"kubernetes.io/projected/8c25067a-ed34-4109-b8f4-d82320dedb05-kube-api-access-nnxs7\") pod \"manila-api-0\" (UID: \"8c25067a-ed34-4109-b8f4-d82320dedb05\") " pod="openstack/manila-api-0" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.592278 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c25067a-ed34-4109-b8f4-d82320dedb05-logs\") pod \"manila-api-0\" (UID: \"8c25067a-ed34-4109-b8f4-d82320dedb05\") " pod="openstack/manila-api-0" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.592364 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c25067a-ed34-4109-b8f4-d82320dedb05-internal-tls-certs\") pod \"manila-api-0\" (UID: \"8c25067a-ed34-4109-b8f4-d82320dedb05\") " pod="openstack/manila-api-0" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.592392 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c25067a-ed34-4109-b8f4-d82320dedb05-scripts\") pod \"manila-api-0\" (UID: \"8c25067a-ed34-4109-b8f4-d82320dedb05\") " pod="openstack/manila-api-0" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.592424 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c25067a-ed34-4109-b8f4-d82320dedb05-public-tls-certs\") pod \"manila-api-0\" (UID: \"8c25067a-ed34-4109-b8f4-d82320dedb05\") " pod="openstack/manila-api-0" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.592540 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c25067a-ed34-4109-b8f4-d82320dedb05-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"8c25067a-ed34-4109-b8f4-d82320dedb05\") " pod="openstack/manila-api-0" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.602987 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c25067a-ed34-4109-b8f4-d82320dedb05-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"8c25067a-ed34-4109-b8f4-d82320dedb05\") " pod="openstack/manila-api-0" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.604257 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c25067a-ed34-4109-b8f4-d82320dedb05-logs\") pod \"manila-api-0\" (UID: \"8c25067a-ed34-4109-b8f4-d82320dedb05\") " pod="openstack/manila-api-0" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.604309 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8c25067a-ed34-4109-b8f4-d82320dedb05-etc-machine-id\") pod \"manila-api-0\" (UID: \"8c25067a-ed34-4109-b8f4-d82320dedb05\") " pod="openstack/manila-api-0" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.611101 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c25067a-ed34-4109-b8f4-d82320dedb05-scripts\") pod \"manila-api-0\" (UID: \"8c25067a-ed34-4109-b8f4-d82320dedb05\") " pod="openstack/manila-api-0" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.619744 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c25067a-ed34-4109-b8f4-d82320dedb05-config-data\") pod \"manila-api-0\" (UID: \"8c25067a-ed34-4109-b8f4-d82320dedb05\") " pod="openstack/manila-api-0" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.620338 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c25067a-ed34-4109-b8f4-d82320dedb05-public-tls-certs\") pod \"manila-api-0\" (UID: \"8c25067a-ed34-4109-b8f4-d82320dedb05\") " pod="openstack/manila-api-0" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.626304 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c25067a-ed34-4109-b8f4-d82320dedb05-config-data-custom\") pod \"manila-api-0\" (UID: \"8c25067a-ed34-4109-b8f4-d82320dedb05\") " pod="openstack/manila-api-0" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.629487 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c25067a-ed34-4109-b8f4-d82320dedb05-internal-tls-certs\") pod \"manila-api-0\" (UID: \"8c25067a-ed34-4109-b8f4-d82320dedb05\") " pod="openstack/manila-api-0" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.672080 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnxs7\" (UniqueName: \"kubernetes.io/projected/8c25067a-ed34-4109-b8f4-d82320dedb05-kube-api-access-nnxs7\") pod \"manila-api-0\" (UID: \"8c25067a-ed34-4109-b8f4-d82320dedb05\") " pod="openstack/manila-api-0" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.955697 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Jan 22 09:48:58 crc kubenswrapper[4811]: I0122 09:48:58.969099 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"df635a7c-5d1e-4116-af78-c06f331d0a8b","Type":"ContainerStarted","Data":"3b8bfc4e7a882e3f7d9a692d5f5dd1ece4446714b3a7a64adc55482750bd768a"} Jan 22 09:48:59 crc kubenswrapper[4811]: I0122 09:48:59.001334 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=4.8767508920000004 podStartE2EDuration="6.001318755s" podCreationTimestamp="2026-01-22 09:48:53 +0000 UTC" firstStartedPulling="2026-01-22 09:48:54.410231475 +0000 UTC m=+2578.732418598" lastFinishedPulling="2026-01-22 09:48:55.534799338 +0000 UTC m=+2579.856986461" observedRunningTime="2026-01-22 09:48:58.998681444 +0000 UTC m=+2583.320868556" watchObservedRunningTime="2026-01-22 09:48:59.001318755 +0000 UTC m=+2583.323505879" Jan 22 09:48:59 crc kubenswrapper[4811]: W0122 09:48:59.606774 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c25067a_ed34_4109_b8f4_d82320dedb05.slice/crio-32f03dfc28250e919fb8a5c68ba8106b9115e4d1393124f663bf641595f8a17b WatchSource:0}: Error finding container 32f03dfc28250e919fb8a5c68ba8106b9115e4d1393124f663bf641595f8a17b: Status 404 returned error can't find the container with id 32f03dfc28250e919fb8a5c68ba8106b9115e4d1393124f663bf641595f8a17b Jan 22 09:48:59 crc kubenswrapper[4811]: I0122 09:48:59.609530 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 22 09:48:59 crc kubenswrapper[4811]: I0122 09:48:59.979505 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"8c25067a-ed34-4109-b8f4-d82320dedb05","Type":"ContainerStarted","Data":"32f03dfc28250e919fb8a5c68ba8106b9115e4d1393124f663bf641595f8a17b"} Jan 22 09:49:00 crc kubenswrapper[4811]: I0122 09:49:00.005681 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d694d36-df2f-4b6b-a7ac-59afe98760e3" path="/var/lib/kubelet/pods/0d694d36-df2f-4b6b-a7ac-59afe98760e3/volumes" Jan 22 09:49:00 crc kubenswrapper[4811]: I0122 09:49:00.991688 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"8c25067a-ed34-4109-b8f4-d82320dedb05","Type":"ContainerStarted","Data":"56fe52700db3dc892a6cdce6a482b2513574e24a16247454464a9d375489cacb"} Jan 22 09:49:00 crc kubenswrapper[4811]: I0122 09:49:00.992212 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"8c25067a-ed34-4109-b8f4-d82320dedb05","Type":"ContainerStarted","Data":"dcb901458f7e3e4ce1c55c6bde93c5deaa396e17037a1f0dc05ec9ce3d997aee"} Jan 22 09:49:00 crc kubenswrapper[4811]: I0122 09:49:00.992231 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Jan 22 09:49:01 crc kubenswrapper[4811]: I0122 09:49:01.010972 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=3.010954401 podStartE2EDuration="3.010954401s" podCreationTimestamp="2026-01-22 09:48:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:49:01.007542578 +0000 UTC m=+2585.329729702" watchObservedRunningTime="2026-01-22 09:49:01.010954401 +0000 UTC m=+2585.333141524" Jan 22 09:49:03 crc 
kubenswrapper[4811]: I0122 09:49:03.483131 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Jan 22 09:49:03 crc kubenswrapper[4811]: I0122 09:49:03.621793 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-595b86679f-c5h4r" Jan 22 09:49:03 crc kubenswrapper[4811]: I0122 09:49:03.701232 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-567fc67579-wl9zs"] Jan 22 09:49:03 crc kubenswrapper[4811]: I0122 09:49:03.701482 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-567fc67579-wl9zs" podUID="f8f61363-c6ef-4f43-86f2-4fe8068d6894" containerName="dnsmasq-dns" containerID="cri-o://fa6523d23ce8f89d599e37f791aceab652c82188a35ffd67527888dec76ccd9a" gracePeriod=10 Jan 22 09:49:04 crc kubenswrapper[4811]: I0122 09:49:04.033660 4811 generic.go:334] "Generic (PLEG): container finished" podID="f8f61363-c6ef-4f43-86f2-4fe8068d6894" containerID="fa6523d23ce8f89d599e37f791aceab652c82188a35ffd67527888dec76ccd9a" exitCode=0 Jan 22 09:49:04 crc kubenswrapper[4811]: I0122 09:49:04.033700 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-567fc67579-wl9zs" event={"ID":"f8f61363-c6ef-4f43-86f2-4fe8068d6894","Type":"ContainerDied","Data":"fa6523d23ce8f89d599e37f791aceab652c82188a35ffd67527888dec76ccd9a"} Jan 22 09:49:04 crc kubenswrapper[4811]: I0122 09:49:04.156923 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-567fc67579-wl9zs" podUID="f8f61363-c6ef-4f43-86f2-4fe8068d6894" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.196:5353: connect: connection refused" Jan 22 09:49:06 crc kubenswrapper[4811]: I0122 09:49:06.210016 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-567fc67579-wl9zs" Jan 22 09:49:06 crc kubenswrapper[4811]: I0122 09:49:06.313096 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f8f61363-c6ef-4f43-86f2-4fe8068d6894-ovsdbserver-sb\") pod \"f8f61363-c6ef-4f43-86f2-4fe8068d6894\" (UID: \"f8f61363-c6ef-4f43-86f2-4fe8068d6894\") " Jan 22 09:49:06 crc kubenswrapper[4811]: I0122 09:49:06.313369 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlm6n\" (UniqueName: \"kubernetes.io/projected/f8f61363-c6ef-4f43-86f2-4fe8068d6894-kube-api-access-rlm6n\") pod \"f8f61363-c6ef-4f43-86f2-4fe8068d6894\" (UID: \"f8f61363-c6ef-4f43-86f2-4fe8068d6894\") " Jan 22 09:49:06 crc kubenswrapper[4811]: I0122 09:49:06.313522 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8f61363-c6ef-4f43-86f2-4fe8068d6894-ovsdbserver-nb\") pod \"f8f61363-c6ef-4f43-86f2-4fe8068d6894\" (UID: \"f8f61363-c6ef-4f43-86f2-4fe8068d6894\") " Jan 22 09:49:06 crc kubenswrapper[4811]: I0122 09:49:06.313546 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f8f61363-c6ef-4f43-86f2-4fe8068d6894-openstack-edpm-ipam\") pod \"f8f61363-c6ef-4f43-86f2-4fe8068d6894\" (UID: \"f8f61363-c6ef-4f43-86f2-4fe8068d6894\") " Jan 22 09:49:06 crc kubenswrapper[4811]: I0122 09:49:06.313734 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8f61363-c6ef-4f43-86f2-4fe8068d6894-config\") pod \"f8f61363-c6ef-4f43-86f2-4fe8068d6894\" (UID: \"f8f61363-c6ef-4f43-86f2-4fe8068d6894\") " Jan 22 09:49:06 crc kubenswrapper[4811]: I0122 09:49:06.313774 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8f61363-c6ef-4f43-86f2-4fe8068d6894-dns-svc\") pod \"f8f61363-c6ef-4f43-86f2-4fe8068d6894\" (UID: \"f8f61363-c6ef-4f43-86f2-4fe8068d6894\") " Jan 22 09:49:06 crc kubenswrapper[4811]: I0122 09:49:06.332392 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8f61363-c6ef-4f43-86f2-4fe8068d6894-kube-api-access-rlm6n" (OuterVolumeSpecName: "kube-api-access-rlm6n") pod "f8f61363-c6ef-4f43-86f2-4fe8068d6894" (UID: "f8f61363-c6ef-4f43-86f2-4fe8068d6894"). InnerVolumeSpecName "kube-api-access-rlm6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:49:06 crc kubenswrapper[4811]: I0122 09:49:06.374384 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8f61363-c6ef-4f43-86f2-4fe8068d6894-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f8f61363-c6ef-4f43-86f2-4fe8068d6894" (UID: "f8f61363-c6ef-4f43-86f2-4fe8068d6894"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:49:06 crc kubenswrapper[4811]: I0122 09:49:06.381426 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8f61363-c6ef-4f43-86f2-4fe8068d6894-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "f8f61363-c6ef-4f43-86f2-4fe8068d6894" (UID: "f8f61363-c6ef-4f43-86f2-4fe8068d6894"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:49:06 crc kubenswrapper[4811]: I0122 09:49:06.383398 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8f61363-c6ef-4f43-86f2-4fe8068d6894-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f8f61363-c6ef-4f43-86f2-4fe8068d6894" (UID: "f8f61363-c6ef-4f43-86f2-4fe8068d6894"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:49:06 crc kubenswrapper[4811]: I0122 09:49:06.386606 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8f61363-c6ef-4f43-86f2-4fe8068d6894-config" (OuterVolumeSpecName: "config") pod "f8f61363-c6ef-4f43-86f2-4fe8068d6894" (UID: "f8f61363-c6ef-4f43-86f2-4fe8068d6894"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:49:06 crc kubenswrapper[4811]: I0122 09:49:06.395498 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8f61363-c6ef-4f43-86f2-4fe8068d6894-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f8f61363-c6ef-4f43-86f2-4fe8068d6894" (UID: "f8f61363-c6ef-4f43-86f2-4fe8068d6894"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:49:06 crc kubenswrapper[4811]: I0122 09:49:06.417317 4811 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8f61363-c6ef-4f43-86f2-4fe8068d6894-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 09:49:06 crc kubenswrapper[4811]: I0122 09:49:06.417436 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f8f61363-c6ef-4f43-86f2-4fe8068d6894-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 09:49:06 crc kubenswrapper[4811]: I0122 09:49:06.417495 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlm6n\" (UniqueName: \"kubernetes.io/projected/f8f61363-c6ef-4f43-86f2-4fe8068d6894-kube-api-access-rlm6n\") on node \"crc\" DevicePath \"\"" Jan 22 09:49:06 crc kubenswrapper[4811]: I0122 09:49:06.417552 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8f61363-c6ef-4f43-86f2-4fe8068d6894-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 09:49:06 crc kubenswrapper[4811]: I0122 09:49:06.417613 4811 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f8f61363-c6ef-4f43-86f2-4fe8068d6894-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 09:49:06 crc kubenswrapper[4811]: I0122 09:49:06.417696 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8f61363-c6ef-4f43-86f2-4fe8068d6894-config\") on node \"crc\" DevicePath \"\"" Jan 22 09:49:06 crc kubenswrapper[4811]: I0122 09:49:06.785933 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-f75b46fc8-4l2b8" Jan 22 09:49:06 crc kubenswrapper[4811]: I0122 09:49:06.792758 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-d6c54dc74-r2gjr" Jan 22 09:49:07 crc kubenswrapper[4811]: I0122 09:49:07.079611 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-567fc67579-wl9zs" 
event={"ID":"f8f61363-c6ef-4f43-86f2-4fe8068d6894","Type":"ContainerDied","Data":"e87c69c7e0715c1106a6fb5db32bfbe1af924b1eb96881c8fc6b8a9eb3cae52d"} Jan 22 09:49:07 crc kubenswrapper[4811]: I0122 09:49:07.079846 4811 scope.go:117] "RemoveContainer" containerID="fa6523d23ce8f89d599e37f791aceab652c82188a35ffd67527888dec76ccd9a" Jan 22 09:49:07 crc kubenswrapper[4811]: I0122 09:49:07.084306 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-567fc67579-wl9zs" Jan 22 09:49:07 crc kubenswrapper[4811]: I0122 09:49:07.089762 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0326dd33-6447-432e-9f9f-1ee950f5c82d","Type":"ContainerStarted","Data":"7c32946b9428edaf8e1b92bc2a0d31c11fb011792f40315448537e0476fa8d17"} Jan 22 09:49:07 crc kubenswrapper[4811]: I0122 09:49:07.089818 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0326dd33-6447-432e-9f9f-1ee950f5c82d","Type":"ContainerStarted","Data":"538d115fefd99bdede60d08b99b7a4acc76fc80a4cf3958a169d88803cf4a122"} Jan 22 09:49:07 crc kubenswrapper[4811]: I0122 09:49:07.118851 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.043541577 podStartE2EDuration="14.118794383s" podCreationTimestamp="2026-01-22 09:48:53 +0000 UTC" firstStartedPulling="2026-01-22 09:48:54.723898292 +0000 UTC m=+2579.046085415" lastFinishedPulling="2026-01-22 09:49:05.799151098 +0000 UTC m=+2590.121338221" observedRunningTime="2026-01-22 09:49:07.109338734 +0000 UTC m=+2591.431525857" watchObservedRunningTime="2026-01-22 09:49:07.118794383 +0000 UTC m=+2591.440981505" Jan 22 09:49:07 crc kubenswrapper[4811]: I0122 09:49:07.125387 4811 scope.go:117] "RemoveContainer" containerID="ab4ea92138bb16ad219a34e3802998408118da9121d1dbb3b4f97226861b5858" Jan 22 09:49:07 crc kubenswrapper[4811]: I0122 09:49:07.153999 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-567fc67579-wl9zs"] Jan 22 09:49:07 crc kubenswrapper[4811]: I0122 09:49:07.168045 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-567fc67579-wl9zs"] Jan 22 09:49:07 crc kubenswrapper[4811]: I0122 09:49:07.954348 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 09:49:07 crc kubenswrapper[4811]: I0122 09:49:07.954873 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d71dac51-18b5-44ad-b5f4-d3c46943a8dc" containerName="ceilometer-central-agent" containerID="cri-o://71fea412a387028262aca8c831f527b6b91c1a02915a19ea7219ff8a6866cf8c" gracePeriod=30 Jan 22 09:49:07 crc kubenswrapper[4811]: I0122 09:49:07.954996 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d71dac51-18b5-44ad-b5f4-d3c46943a8dc" containerName="proxy-httpd" containerID="cri-o://b7dec49aa88a4935c5cfa19723dd0eb4e3a95dcb633cfdaccfb56e8ae3eef017" gracePeriod=30 Jan 22 09:49:07 crc kubenswrapper[4811]: I0122 09:49:07.955032 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d71dac51-18b5-44ad-b5f4-d3c46943a8dc" containerName="sg-core" containerID="cri-o://4eacedf749444e027dc4d754bb9a46b1cc9e4d2c6604cfc8ac52aa4961d23721" gracePeriod=30 Jan 22 09:49:07 crc kubenswrapper[4811]: I0122 09:49:07.955062 4811 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/ceilometer-0" podUID="d71dac51-18b5-44ad-b5f4-d3c46943a8dc" containerName="ceilometer-notification-agent" containerID="cri-o://9ce4358d058a0d4623771855bcce81da227b8f8455d98febbb5126ab691d54e7" gracePeriod=30 Jan 22 09:49:08 crc kubenswrapper[4811]: I0122 09:49:08.005363 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8f61363-c6ef-4f43-86f2-4fe8068d6894" path="/var/lib/kubelet/pods/f8f61363-c6ef-4f43-86f2-4fe8068d6894/volumes" Jan 22 09:49:08 crc kubenswrapper[4811]: I0122 09:49:08.101562 4811 generic.go:334] "Generic (PLEG): container finished" podID="d71dac51-18b5-44ad-b5f4-d3c46943a8dc" containerID="4eacedf749444e027dc4d754bb9a46b1cc9e4d2c6604cfc8ac52aa4961d23721" exitCode=2 Jan 22 09:49:08 crc kubenswrapper[4811]: I0122 09:49:08.101725 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d71dac51-18b5-44ad-b5f4-d3c46943a8dc","Type":"ContainerDied","Data":"4eacedf749444e027dc4d754bb9a46b1cc9e4d2c6604cfc8ac52aa4961d23721"} Jan 22 09:49:08 crc kubenswrapper[4811]: I0122 09:49:08.881487 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-d6c54dc74-r2gjr" Jan 22 09:49:08 crc kubenswrapper[4811]: I0122 09:49:08.931329 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-f75b46fc8-4l2b8" Jan 22 09:49:09 crc kubenswrapper[4811]: I0122 09:49:09.083635 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-d6c54dc74-r2gjr"] Jan 22 09:49:09 crc kubenswrapper[4811]: I0122 09:49:09.152367 4811 generic.go:334] "Generic (PLEG): container finished" podID="d2c00677-42e9-4694-9a73-a020bcb17a98" containerID="75f9c9ae46eadb35159ed1468edc3f2674789f8eb89f0644a6c69d7e26349ad5" exitCode=137 Jan 22 09:49:09 crc kubenswrapper[4811]: I0122 09:49:09.152401 4811 generic.go:334] "Generic (PLEG): container finished" podID="d2c00677-42e9-4694-9a73-a020bcb17a98" containerID="948296facdc97260934827eb7261690c45a258af647887da415085bdd163b1b9" exitCode=137 Jan 22 09:49:09 crc kubenswrapper[4811]: I0122 09:49:09.152474 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79dbbfc64c-27fss" event={"ID":"d2c00677-42e9-4694-9a73-a020bcb17a98","Type":"ContainerDied","Data":"75f9c9ae46eadb35159ed1468edc3f2674789f8eb89f0644a6c69d7e26349ad5"} Jan 22 09:49:09 crc kubenswrapper[4811]: I0122 09:49:09.152524 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79dbbfc64c-27fss" event={"ID":"d2c00677-42e9-4694-9a73-a020bcb17a98","Type":"ContainerDied","Data":"948296facdc97260934827eb7261690c45a258af647887da415085bdd163b1b9"} Jan 22 09:49:09 crc kubenswrapper[4811]: I0122 09:49:09.169780 4811 generic.go:334] "Generic (PLEG): container finished" podID="080675ca-91da-4c39-a901-fef7f8496220" containerID="e244d2590d776e1829f2b330a4cec949ee28e11c9644fb7e2c9d04358d0ee031" exitCode=137 Jan 22 09:49:09 crc kubenswrapper[4811]: I0122 09:49:09.169809 4811 generic.go:334] "Generic (PLEG): container finished" podID="080675ca-91da-4c39-a901-fef7f8496220" containerID="8c63244c0280484a029a7c883eb5e14321d435dad5a1f103e8afa75de14982ea" exitCode=137 Jan 22 09:49:09 crc kubenswrapper[4811]: I0122 09:49:09.169872 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84776b8f5f-8h2x2" event={"ID":"080675ca-91da-4c39-a901-fef7f8496220","Type":"ContainerDied","Data":"e244d2590d776e1829f2b330a4cec949ee28e11c9644fb7e2c9d04358d0ee031"} Jan 22 09:49:09 crc 
kubenswrapper[4811]: I0122 09:49:09.169899 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84776b8f5f-8h2x2" event={"ID":"080675ca-91da-4c39-a901-fef7f8496220","Type":"ContainerDied","Data":"8c63244c0280484a029a7c883eb5e14321d435dad5a1f103e8afa75de14982ea"}
Jan 22 09:49:09 crc kubenswrapper[4811]: I0122 09:49:09.198256 4811 generic.go:334] "Generic (PLEG): container finished" podID="d71dac51-18b5-44ad-b5f4-d3c46943a8dc" containerID="b7dec49aa88a4935c5cfa19723dd0eb4e3a95dcb633cfdaccfb56e8ae3eef017" exitCode=0
Jan 22 09:49:09 crc kubenswrapper[4811]: I0122 09:49:09.198297 4811 generic.go:334] "Generic (PLEG): container finished" podID="d71dac51-18b5-44ad-b5f4-d3c46943a8dc" containerID="71fea412a387028262aca8c831f527b6b91c1a02915a19ea7219ff8a6866cf8c" exitCode=0
Jan 22 09:49:09 crc kubenswrapper[4811]: I0122 09:49:09.198506 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-d6c54dc74-r2gjr" podUID="660d9785-0f9b-4953-af76-580ed227c244" containerName="horizon-log" containerID="cri-o://015e311ea5ef98e58145ed177feb6471c51ef5a722b72a6005d3bb6fbf8d8201" gracePeriod=30
Jan 22 09:49:09 crc kubenswrapper[4811]: I0122 09:49:09.198715 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d71dac51-18b5-44ad-b5f4-d3c46943a8dc","Type":"ContainerDied","Data":"b7dec49aa88a4935c5cfa19723dd0eb4e3a95dcb633cfdaccfb56e8ae3eef017"}
Jan 22 09:49:09 crc kubenswrapper[4811]: I0122 09:49:09.198778 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d71dac51-18b5-44ad-b5f4-d3c46943a8dc","Type":"ContainerDied","Data":"71fea412a387028262aca8c831f527b6b91c1a02915a19ea7219ff8a6866cf8c"}
Jan 22 09:49:09 crc kubenswrapper[4811]: I0122 09:49:09.198847 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-d6c54dc74-r2gjr" podUID="660d9785-0f9b-4953-af76-580ed227c244" containerName="horizon" containerID="cri-o://d932f2adbda59824841f6cb86225f24b2e877e9f69ebadd4e657ce3e01adf42c" gracePeriod=30
Jan 22 09:49:09 crc kubenswrapper[4811]: I0122 09:49:09.282489 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-84776b8f5f-8h2x2"
Jan 22 09:49:09 crc kubenswrapper[4811]: I0122 09:49:09.286594 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-79dbbfc64c-27fss"
Jan 22 09:49:09 crc kubenswrapper[4811]: I0122 09:49:09.413608 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/080675ca-91da-4c39-a901-fef7f8496220-logs\") pod \"080675ca-91da-4c39-a901-fef7f8496220\" (UID: \"080675ca-91da-4c39-a901-fef7f8496220\") "
Jan 22 09:49:09 crc kubenswrapper[4811]: I0122 09:49:09.413694 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d2c00677-42e9-4694-9a73-a020bcb17a98-horizon-secret-key\") pod \"d2c00677-42e9-4694-9a73-a020bcb17a98\" (UID: \"d2c00677-42e9-4694-9a73-a020bcb17a98\") "
Jan 22 09:49:09 crc kubenswrapper[4811]: I0122 09:49:09.413781 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9rtv\" (UniqueName: \"kubernetes.io/projected/d2c00677-42e9-4694-9a73-a020bcb17a98-kube-api-access-b9rtv\") pod \"d2c00677-42e9-4694-9a73-a020bcb17a98\" (UID: \"d2c00677-42e9-4694-9a73-a020bcb17a98\") "
Jan 22 09:49:09 crc kubenswrapper[4811]: I0122 09:49:09.413813 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/080675ca-91da-4c39-a901-fef7f8496220-horizon-secret-key\") pod \"080675ca-91da-4c39-a901-fef7f8496220\" (UID: \"080675ca-91da-4c39-a901-fef7f8496220\") "
Jan 22 09:49:09 crc kubenswrapper[4811]: I0122 09:49:09.413844 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d2c00677-42e9-4694-9a73-a020bcb17a98-config-data\") pod \"d2c00677-42e9-4694-9a73-a020bcb17a98\" (UID: \"d2c00677-42e9-4694-9a73-a020bcb17a98\") "
Jan 22 09:49:09 crc kubenswrapper[4811]: I0122 09:49:09.413878 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgpzf\" (UniqueName: \"kubernetes.io/projected/080675ca-91da-4c39-a901-fef7f8496220-kube-api-access-kgpzf\") pod \"080675ca-91da-4c39-a901-fef7f8496220\" (UID: \"080675ca-91da-4c39-a901-fef7f8496220\") "
Jan 22 09:49:09 crc kubenswrapper[4811]: I0122 09:49:09.413925 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/080675ca-91da-4c39-a901-fef7f8496220-scripts\") pod \"080675ca-91da-4c39-a901-fef7f8496220\" (UID: \"080675ca-91da-4c39-a901-fef7f8496220\") "
Jan 22 09:49:09 crc kubenswrapper[4811]: I0122 09:49:09.414039 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/080675ca-91da-4c39-a901-fef7f8496220-config-data\") pod \"080675ca-91da-4c39-a901-fef7f8496220\" (UID: \"080675ca-91da-4c39-a901-fef7f8496220\") "
Jan 22 09:49:09 crc kubenswrapper[4811]: I0122 09:49:09.414090 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d2c00677-42e9-4694-9a73-a020bcb17a98-scripts\") pod \"d2c00677-42e9-4694-9a73-a020bcb17a98\" (UID: \"d2c00677-42e9-4694-9a73-a020bcb17a98\") "
Jan 22 09:49:09 crc kubenswrapper[4811]: I0122 09:49:09.414119 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2c00677-42e9-4694-9a73-a020bcb17a98-logs\") pod \"d2c00677-42e9-4694-9a73-a020bcb17a98\" (UID: \"d2c00677-42e9-4694-9a73-a020bcb17a98\") "
Jan 22 09:49:09 crc kubenswrapper[4811]: I0122 09:49:09.414271 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/080675ca-91da-4c39-a901-fef7f8496220-logs" (OuterVolumeSpecName: "logs") pod "080675ca-91da-4c39-a901-fef7f8496220" (UID: "080675ca-91da-4c39-a901-fef7f8496220"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 09:49:09 crc kubenswrapper[4811]: I0122 09:49:09.414543 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/080675ca-91da-4c39-a901-fef7f8496220-logs\") on node \"crc\" DevicePath \"\""
Jan 22 09:49:09 crc kubenswrapper[4811]: I0122 09:49:09.419222 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2c00677-42e9-4694-9a73-a020bcb17a98-logs" (OuterVolumeSpecName: "logs") pod "d2c00677-42e9-4694-9a73-a020bcb17a98" (UID: "d2c00677-42e9-4694-9a73-a020bcb17a98"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 09:49:09 crc kubenswrapper[4811]: I0122 09:49:09.421961 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2c00677-42e9-4694-9a73-a020bcb17a98-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "d2c00677-42e9-4694-9a73-a020bcb17a98" (UID: "d2c00677-42e9-4694-9a73-a020bcb17a98"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:49:09 crc kubenswrapper[4811]: I0122 09:49:09.424509 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/080675ca-91da-4c39-a901-fef7f8496220-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "080675ca-91da-4c39-a901-fef7f8496220" (UID: "080675ca-91da-4c39-a901-fef7f8496220"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:49:09 crc kubenswrapper[4811]: I0122 09:49:09.435673 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2c00677-42e9-4694-9a73-a020bcb17a98-kube-api-access-b9rtv" (OuterVolumeSpecName: "kube-api-access-b9rtv") pod "d2c00677-42e9-4694-9a73-a020bcb17a98" (UID: "d2c00677-42e9-4694-9a73-a020bcb17a98"). InnerVolumeSpecName "kube-api-access-b9rtv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:49:09 crc kubenswrapper[4811]: I0122 09:49:09.436340 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/080675ca-91da-4c39-a901-fef7f8496220-kube-api-access-kgpzf" (OuterVolumeSpecName: "kube-api-access-kgpzf") pod "080675ca-91da-4c39-a901-fef7f8496220" (UID: "080675ca-91da-4c39-a901-fef7f8496220"). InnerVolumeSpecName "kube-api-access-kgpzf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:49:09 crc kubenswrapper[4811]: I0122 09:49:09.451137 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/080675ca-91da-4c39-a901-fef7f8496220-scripts" (OuterVolumeSpecName: "scripts") pod "080675ca-91da-4c39-a901-fef7f8496220" (UID: "080675ca-91da-4c39-a901-fef7f8496220"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:49:09 crc kubenswrapper[4811]: I0122 09:49:09.457770 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2c00677-42e9-4694-9a73-a020bcb17a98-scripts" (OuterVolumeSpecName: "scripts") pod "d2c00677-42e9-4694-9a73-a020bcb17a98" (UID: "d2c00677-42e9-4694-9a73-a020bcb17a98"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:49:09 crc kubenswrapper[4811]: I0122 09:49:09.477270 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/080675ca-91da-4c39-a901-fef7f8496220-config-data" (OuterVolumeSpecName: "config-data") pod "080675ca-91da-4c39-a901-fef7f8496220" (UID: "080675ca-91da-4c39-a901-fef7f8496220"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:49:09 crc kubenswrapper[4811]: I0122 09:49:09.477481 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2c00677-42e9-4694-9a73-a020bcb17a98-config-data" (OuterVolumeSpecName: "config-data") pod "d2c00677-42e9-4694-9a73-a020bcb17a98" (UID: "d2c00677-42e9-4694-9a73-a020bcb17a98"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 09:49:09 crc kubenswrapper[4811]: I0122 09:49:09.516979 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/080675ca-91da-4c39-a901-fef7f8496220-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 09:49:09 crc kubenswrapper[4811]: I0122 09:49:09.517093 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d2c00677-42e9-4694-9a73-a020bcb17a98-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 09:49:09 crc kubenswrapper[4811]: I0122 09:49:09.517170 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2c00677-42e9-4694-9a73-a020bcb17a98-logs\") on node \"crc\" DevicePath \"\""
Jan 22 09:49:09 crc kubenswrapper[4811]: I0122 09:49:09.517227 4811 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d2c00677-42e9-4694-9a73-a020bcb17a98-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 22 09:49:09 crc kubenswrapper[4811]: I0122 09:49:09.517275 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9rtv\" (UniqueName: \"kubernetes.io/projected/d2c00677-42e9-4694-9a73-a020bcb17a98-kube-api-access-b9rtv\") on node \"crc\" DevicePath \"\""
Jan 22 09:49:09 crc kubenswrapper[4811]: I0122 09:49:09.517321 4811 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/080675ca-91da-4c39-a901-fef7f8496220-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 22 09:49:09 crc kubenswrapper[4811]: I0122 09:49:09.517382 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d2c00677-42e9-4694-9a73-a020bcb17a98-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 09:49:09 crc kubenswrapper[4811]: I0122 09:49:09.517429 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgpzf\" (UniqueName: \"kubernetes.io/projected/080675ca-91da-4c39-a901-fef7f8496220-kube-api-access-kgpzf\") on node \"crc\" DevicePath \"\""
Jan 22 09:49:09 crc kubenswrapper[4811]: I0122 09:49:09.517473 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/080675ca-91da-4c39-a901-fef7f8496220-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 09:49:10 crc kubenswrapper[4811]: I0122 09:49:10.207539 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-79dbbfc64c-27fss"
Jan 22 09:49:10 crc kubenswrapper[4811]: I0122 09:49:10.207601 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79dbbfc64c-27fss" event={"ID":"d2c00677-42e9-4694-9a73-a020bcb17a98","Type":"ContainerDied","Data":"52dcbce182134ba88ce993c9ce5850dd7cc5002d2c67d1b08d7f37eae5cbe022"}
Jan 22 09:49:10 crc kubenswrapper[4811]: I0122 09:49:10.207799 4811 scope.go:117] "RemoveContainer" containerID="75f9c9ae46eadb35159ed1468edc3f2674789f8eb89f0644a6c69d7e26349ad5"
Jan 22 09:49:10 crc kubenswrapper[4811]: I0122 09:49:10.210681 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84776b8f5f-8h2x2" event={"ID":"080675ca-91da-4c39-a901-fef7f8496220","Type":"ContainerDied","Data":"96a10a839f606201c36c8209ef85468ddb61a4f6052902d2d13a6640d0c3e388"}
Jan 22 09:49:10 crc kubenswrapper[4811]: I0122 09:49:10.210834 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-84776b8f5f-8h2x2"
Jan 22 09:49:10 crc kubenswrapper[4811]: I0122 09:49:10.234931 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-79dbbfc64c-27fss"]
Jan 22 09:49:10 crc kubenswrapper[4811]: I0122 09:49:10.243378 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-79dbbfc64c-27fss"]
Jan 22 09:49:10 crc kubenswrapper[4811]: I0122 09:49:10.257355 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-84776b8f5f-8h2x2"]
Jan 22 09:49:10 crc kubenswrapper[4811]: I0122 09:49:10.271878 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-84776b8f5f-8h2x2"]
Jan 22 09:49:10 crc kubenswrapper[4811]: I0122 09:49:10.359068 4811 scope.go:117] "RemoveContainer" containerID="948296facdc97260934827eb7261690c45a258af647887da415085bdd163b1b9"
Jan 22 09:49:10 crc kubenswrapper[4811]: I0122 09:49:10.377198 4811 scope.go:117] "RemoveContainer" containerID="e244d2590d776e1829f2b330a4cec949ee28e11c9644fb7e2c9d04358d0ee031"
Jan 22 09:49:10 crc kubenswrapper[4811]: I0122 09:49:10.526938 4811 scope.go:117] "RemoveContainer" containerID="8c63244c0280484a029a7c883eb5e14321d435dad5a1f103e8afa75de14982ea"
Jan 22 09:49:11 crc kubenswrapper[4811]: I0122 09:49:11.965370 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.002432 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="080675ca-91da-4c39-a901-fef7f8496220" path="/var/lib/kubelet/pods/080675ca-91da-4c39-a901-fef7f8496220/volumes"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.003275 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2c00677-42e9-4694-9a73-a020bcb17a98" path="/var/lib/kubelet/pods/d2c00677-42e9-4694-9a73-a020bcb17a98/volumes"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.092541 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d71dac51-18b5-44ad-b5f4-d3c46943a8dc-scripts\") pod \"d71dac51-18b5-44ad-b5f4-d3c46943a8dc\" (UID: \"d71dac51-18b5-44ad-b5f4-d3c46943a8dc\") "
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.092647 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d71dac51-18b5-44ad-b5f4-d3c46943a8dc-run-httpd\") pod \"d71dac51-18b5-44ad-b5f4-d3c46943a8dc\" (UID: \"d71dac51-18b5-44ad-b5f4-d3c46943a8dc\") "
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.092681 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d71dac51-18b5-44ad-b5f4-d3c46943a8dc-config-data\") pod \"d71dac51-18b5-44ad-b5f4-d3c46943a8dc\" (UID: \"d71dac51-18b5-44ad-b5f4-d3c46943a8dc\") "
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.092715 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdv7p\" (UniqueName: \"kubernetes.io/projected/d71dac51-18b5-44ad-b5f4-d3c46943a8dc-kube-api-access-vdv7p\") pod \"d71dac51-18b5-44ad-b5f4-d3c46943a8dc\" (UID: \"d71dac51-18b5-44ad-b5f4-d3c46943a8dc\") "
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.092754 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d71dac51-18b5-44ad-b5f4-d3c46943a8dc-combined-ca-bundle\") pod \"d71dac51-18b5-44ad-b5f4-d3c46943a8dc\" (UID: \"d71dac51-18b5-44ad-b5f4-d3c46943a8dc\") "
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.092895 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d71dac51-18b5-44ad-b5f4-d3c46943a8dc-ceilometer-tls-certs\") pod \"d71dac51-18b5-44ad-b5f4-d3c46943a8dc\" (UID: \"d71dac51-18b5-44ad-b5f4-d3c46943a8dc\") "
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.092959 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d71dac51-18b5-44ad-b5f4-d3c46943a8dc-sg-core-conf-yaml\") pod \"d71dac51-18b5-44ad-b5f4-d3c46943a8dc\" (UID: \"d71dac51-18b5-44ad-b5f4-d3c46943a8dc\") "
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.093008 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d71dac51-18b5-44ad-b5f4-d3c46943a8dc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d71dac51-18b5-44ad-b5f4-d3c46943a8dc" (UID: "d71dac51-18b5-44ad-b5f4-d3c46943a8dc"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.093074 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d71dac51-18b5-44ad-b5f4-d3c46943a8dc-log-httpd\") pod \"d71dac51-18b5-44ad-b5f4-d3c46943a8dc\" (UID: \"d71dac51-18b5-44ad-b5f4-d3c46943a8dc\") "
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.094024 4811 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d71dac51-18b5-44ad-b5f4-d3c46943a8dc-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.095370 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d71dac51-18b5-44ad-b5f4-d3c46943a8dc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d71dac51-18b5-44ad-b5f4-d3c46943a8dc" (UID: "d71dac51-18b5-44ad-b5f4-d3c46943a8dc"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.099858 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d71dac51-18b5-44ad-b5f4-d3c46943a8dc-scripts" (OuterVolumeSpecName: "scripts") pod "d71dac51-18b5-44ad-b5f4-d3c46943a8dc" (UID: "d71dac51-18b5-44ad-b5f4-d3c46943a8dc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.112886 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d71dac51-18b5-44ad-b5f4-d3c46943a8dc-kube-api-access-vdv7p" (OuterVolumeSpecName: "kube-api-access-vdv7p") pod "d71dac51-18b5-44ad-b5f4-d3c46943a8dc" (UID: "d71dac51-18b5-44ad-b5f4-d3c46943a8dc"). InnerVolumeSpecName "kube-api-access-vdv7p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.126692 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d71dac51-18b5-44ad-b5f4-d3c46943a8dc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d71dac51-18b5-44ad-b5f4-d3c46943a8dc" (UID: "d71dac51-18b5-44ad-b5f4-d3c46943a8dc"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.141281 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d71dac51-18b5-44ad-b5f4-d3c46943a8dc-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "d71dac51-18b5-44ad-b5f4-d3c46943a8dc" (UID: "d71dac51-18b5-44ad-b5f4-d3c46943a8dc"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.168862 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d71dac51-18b5-44ad-b5f4-d3c46943a8dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d71dac51-18b5-44ad-b5f4-d3c46943a8dc" (UID: "d71dac51-18b5-44ad-b5f4-d3c46943a8dc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.189251 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d71dac51-18b5-44ad-b5f4-d3c46943a8dc-config-data" (OuterVolumeSpecName: "config-data") pod "d71dac51-18b5-44ad-b5f4-d3c46943a8dc" (UID: "d71dac51-18b5-44ad-b5f4-d3c46943a8dc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.197056 4811 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d71dac51-18b5-44ad-b5f4-d3c46943a8dc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.197104 4811 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d71dac51-18b5-44ad-b5f4-d3c46943a8dc-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.197115 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d71dac51-18b5-44ad-b5f4-d3c46943a8dc-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.197127 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d71dac51-18b5-44ad-b5f4-d3c46943a8dc-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.197136 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdv7p\" (UniqueName: \"kubernetes.io/projected/d71dac51-18b5-44ad-b5f4-d3c46943a8dc-kube-api-access-vdv7p\") on node \"crc\" DevicePath \"\""
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.197147 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d71dac51-18b5-44ad-b5f4-d3c46943a8dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.197156 4811 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d71dac51-18b5-44ad-b5f4-d3c46943a8dc-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.241975 4811 generic.go:334] "Generic (PLEG): container finished" podID="d71dac51-18b5-44ad-b5f4-d3c46943a8dc" containerID="9ce4358d058a0d4623771855bcce81da227b8f8455d98febbb5126ab691d54e7" exitCode=0
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.242022 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d71dac51-18b5-44ad-b5f4-d3c46943a8dc","Type":"ContainerDied","Data":"9ce4358d058a0d4623771855bcce81da227b8f8455d98febbb5126ab691d54e7"}
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.242076 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d71dac51-18b5-44ad-b5f4-d3c46943a8dc","Type":"ContainerDied","Data":"39e018f3cccc54360d617a9395607e403fa0637051a71cbb1715f6ddff695337"}
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.242100 4811 scope.go:117] "RemoveContainer" containerID="b7dec49aa88a4935c5cfa19723dd0eb4e3a95dcb633cfdaccfb56e8ae3eef017"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.242316 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.300324 4811 scope.go:117] "RemoveContainer" containerID="4eacedf749444e027dc4d754bb9a46b1cc9e4d2c6604cfc8ac52aa4961d23721"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.309684 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.323856 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.337277 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.337404 4811 scope.go:117] "RemoveContainer" containerID="9ce4358d058a0d4623771855bcce81da227b8f8455d98febbb5126ab691d54e7"
Jan 22 09:49:12 crc kubenswrapper[4811]: E0122 09:49:12.337782 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8f61363-c6ef-4f43-86f2-4fe8068d6894" containerName="dnsmasq-dns"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.337804 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8f61363-c6ef-4f43-86f2-4fe8068d6894" containerName="dnsmasq-dns"
Jan 22 09:49:12 crc kubenswrapper[4811]: E0122 09:49:12.337826 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2c00677-42e9-4694-9a73-a020bcb17a98" containerName="horizon-log"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.337833 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2c00677-42e9-4694-9a73-a020bcb17a98" containerName="horizon-log"
Jan 22 09:49:12 crc kubenswrapper[4811]: E0122 09:49:12.337846 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8f61363-c6ef-4f43-86f2-4fe8068d6894" containerName="init"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.337852 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8f61363-c6ef-4f43-86f2-4fe8068d6894" containerName="init"
Jan 22 09:49:12 crc kubenswrapper[4811]: E0122 09:49:12.337866 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d71dac51-18b5-44ad-b5f4-d3c46943a8dc" containerName="sg-core"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.337874 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="d71dac51-18b5-44ad-b5f4-d3c46943a8dc" containerName="sg-core"
Jan 22 09:49:12 crc kubenswrapper[4811]: E0122 09:49:12.337883 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="080675ca-91da-4c39-a901-fef7f8496220" containerName="horizon-log"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.337889 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="080675ca-91da-4c39-a901-fef7f8496220" containerName="horizon-log"
Jan 22 09:49:12 crc kubenswrapper[4811]: E0122 09:49:12.337895 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d71dac51-18b5-44ad-b5f4-d3c46943a8dc" containerName="proxy-httpd"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.337901 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="d71dac51-18b5-44ad-b5f4-d3c46943a8dc" containerName="proxy-httpd"
Jan 22 09:49:12 crc kubenswrapper[4811]: E0122 09:49:12.337911 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d71dac51-18b5-44ad-b5f4-d3c46943a8dc" containerName="ceilometer-central-agent"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.337917 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="d71dac51-18b5-44ad-b5f4-d3c46943a8dc" containerName="ceilometer-central-agent"
Jan 22 09:49:12 crc kubenswrapper[4811]: E0122 09:49:12.337930 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2c00677-42e9-4694-9a73-a020bcb17a98" containerName="horizon"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.337935 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2c00677-42e9-4694-9a73-a020bcb17a98" containerName="horizon"
Jan 22 09:49:12 crc kubenswrapper[4811]: E0122 09:49:12.337942 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d71dac51-18b5-44ad-b5f4-d3c46943a8dc" containerName="ceilometer-notification-agent"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.337949 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="d71dac51-18b5-44ad-b5f4-d3c46943a8dc" containerName="ceilometer-notification-agent"
Jan 22 09:49:12 crc kubenswrapper[4811]: E0122 09:49:12.337959 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="080675ca-91da-4c39-a901-fef7f8496220" containerName="horizon"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.337964 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="080675ca-91da-4c39-a901-fef7f8496220" containerName="horizon"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.339479 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="d71dac51-18b5-44ad-b5f4-d3c46943a8dc" containerName="sg-core"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.339516 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="d71dac51-18b5-44ad-b5f4-d3c46943a8dc" containerName="ceilometer-central-agent"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.339532 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8f61363-c6ef-4f43-86f2-4fe8068d6894" containerName="dnsmasq-dns"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.339546 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="080675ca-91da-4c39-a901-fef7f8496220" containerName="horizon-log"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.339563 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2c00677-42e9-4694-9a73-a020bcb17a98" containerName="horizon"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.339578 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="d71dac51-18b5-44ad-b5f4-d3c46943a8dc" containerName="ceilometer-notification-agent"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.339600 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="080675ca-91da-4c39-a901-fef7f8496220" containerName="horizon"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.339613 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2c00677-42e9-4694-9a73-a020bcb17a98" containerName="horizon-log"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.339646 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="d71dac51-18b5-44ad-b5f4-d3c46943a8dc" containerName="proxy-httpd"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.349920 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.353923 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.356682 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.373785 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.380018 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.394593 4811 scope.go:117] "RemoveContainer" containerID="71fea412a387028262aca8c831f527b6b91c1a02915a19ea7219ff8a6866cf8c"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.414378 4811 scope.go:117] "RemoveContainer" containerID="b7dec49aa88a4935c5cfa19723dd0eb4e3a95dcb633cfdaccfb56e8ae3eef017"
Jan 22 09:49:12 crc kubenswrapper[4811]: E0122 09:49:12.414840 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7dec49aa88a4935c5cfa19723dd0eb4e3a95dcb633cfdaccfb56e8ae3eef017\": container with ID starting with b7dec49aa88a4935c5cfa19723dd0eb4e3a95dcb633cfdaccfb56e8ae3eef017 not found: ID does not exist" containerID="b7dec49aa88a4935c5cfa19723dd0eb4e3a95dcb633cfdaccfb56e8ae3eef017"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.414880 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7dec49aa88a4935c5cfa19723dd0eb4e3a95dcb633cfdaccfb56e8ae3eef017"} err="failed to get container status \"b7dec49aa88a4935c5cfa19723dd0eb4e3a95dcb633cfdaccfb56e8ae3eef017\": rpc error: code = NotFound desc = could not find container \"b7dec49aa88a4935c5cfa19723dd0eb4e3a95dcb633cfdaccfb56e8ae3eef017\": container with ID starting with b7dec49aa88a4935c5cfa19723dd0eb4e3a95dcb633cfdaccfb56e8ae3eef017 not found: ID does not exist"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.414901 4811 scope.go:117] "RemoveContainer" containerID="4eacedf749444e027dc4d754bb9a46b1cc9e4d2c6604cfc8ac52aa4961d23721"
Jan 22 09:49:12 crc kubenswrapper[4811]: E0122 09:49:12.415163 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4eacedf749444e027dc4d754bb9a46b1cc9e4d2c6604cfc8ac52aa4961d23721\": container with ID starting with 4eacedf749444e027dc4d754bb9a46b1cc9e4d2c6604cfc8ac52aa4961d23721 not found: ID does not exist" containerID="4eacedf749444e027dc4d754bb9a46b1cc9e4d2c6604cfc8ac52aa4961d23721"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.415187 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4eacedf749444e027dc4d754bb9a46b1cc9e4d2c6604cfc8ac52aa4961d23721"} err="failed to get container status \"4eacedf749444e027dc4d754bb9a46b1cc9e4d2c6604cfc8ac52aa4961d23721\": rpc error: code = NotFound desc = could not find container \"4eacedf749444e027dc4d754bb9a46b1cc9e4d2c6604cfc8ac52aa4961d23721\": container with ID starting with 4eacedf749444e027dc4d754bb9a46b1cc9e4d2c6604cfc8ac52aa4961d23721 not found: ID does not exist"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.415201 4811 scope.go:117] "RemoveContainer" containerID="9ce4358d058a0d4623771855bcce81da227b8f8455d98febbb5126ab691d54e7"
Jan 22 09:49:12 crc kubenswrapper[4811]: E0122 09:49:12.415426 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ce4358d058a0d4623771855bcce81da227b8f8455d98febbb5126ab691d54e7\": container with ID starting with 9ce4358d058a0d4623771855bcce81da227b8f8455d98febbb5126ab691d54e7 not found: ID does not exist" containerID="9ce4358d058a0d4623771855bcce81da227b8f8455d98febbb5126ab691d54e7"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.415453 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ce4358d058a0d4623771855bcce81da227b8f8455d98febbb5126ab691d54e7"} err="failed to get container status \"9ce4358d058a0d4623771855bcce81da227b8f8455d98febbb5126ab691d54e7\": rpc error: code = NotFound desc = could not find container \"9ce4358d058a0d4623771855bcce81da227b8f8455d98febbb5126ab691d54e7\": container with ID starting with 9ce4358d058a0d4623771855bcce81da227b8f8455d98febbb5126ab691d54e7 not found: ID does not exist"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.415468 4811 scope.go:117] "RemoveContainer" containerID="71fea412a387028262aca8c831f527b6b91c1a02915a19ea7219ff8a6866cf8c"
Jan 22 09:49:12 crc kubenswrapper[4811]: E0122 09:49:12.415793 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71fea412a387028262aca8c831f527b6b91c1a02915a19ea7219ff8a6866cf8c\": container with ID starting with 71fea412a387028262aca8c831f527b6b91c1a02915a19ea7219ff8a6866cf8c not found: ID does not exist" containerID="71fea412a387028262aca8c831f527b6b91c1a02915a19ea7219ff8a6866cf8c"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.415817 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71fea412a387028262aca8c831f527b6b91c1a02915a19ea7219ff8a6866cf8c"} err="failed to get container status \"71fea412a387028262aca8c831f527b6b91c1a02915a19ea7219ff8a6866cf8c\": rpc error: code = NotFound desc = could not find container \"71fea412a387028262aca8c831f527b6b91c1a02915a19ea7219ff8a6866cf8c\": container with ID starting with 71fea412a387028262aca8c831f527b6b91c1a02915a19ea7219ff8a6866cf8c not found: ID does not exist"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.504347 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a18f46ac-7cea-410a-ac94-959fc43823bc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a18f46ac-7cea-410a-ac94-959fc43823bc\") " pod="openstack/ceilometer-0"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.504416 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a18f46ac-7cea-410a-ac94-959fc43823bc-scripts\") pod \"ceilometer-0\" (UID: \"a18f46ac-7cea-410a-ac94-959fc43823bc\") " pod="openstack/ceilometer-0"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.504541 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a18f46ac-7cea-410a-ac94-959fc43823bc-log-httpd\") pod \"ceilometer-0\" (UID: \"a18f46ac-7cea-410a-ac94-959fc43823bc\") " pod="openstack/ceilometer-0"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.504565 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a18f46ac-7cea-410a-ac94-959fc43823bc-run-httpd\") pod \"ceilometer-0\" (UID: \"a18f46ac-7cea-410a-ac94-959fc43823bc\") " pod="openstack/ceilometer-0"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.504591 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfc6r\" (UniqueName: \"kubernetes.io/projected/a18f46ac-7cea-410a-ac94-959fc43823bc-kube-api-access-vfc6r\") pod \"ceilometer-0\" (UID: \"a18f46ac-7cea-410a-ac94-959fc43823bc\") " pod="openstack/ceilometer-0"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.504609 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a18f46ac-7cea-410a-ac94-959fc43823bc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a18f46ac-7cea-410a-ac94-959fc43823bc\") " pod="openstack/ceilometer-0"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.504653 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a18f46ac-7cea-410a-ac94-959fc43823bc-config-data\") pod \"ceilometer-0\" (UID: \"a18f46ac-7cea-410a-ac94-959fc43823bc\") " pod="openstack/ceilometer-0"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.504752 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a18f46ac-7cea-410a-ac94-959fc43823bc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a18f46ac-7cea-410a-ac94-959fc43823bc\") " pod="openstack/ceilometer-0"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.607091 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a18f46ac-7cea-410a-ac94-959fc43823bc-log-httpd\") pod \"ceilometer-0\" (UID: \"a18f46ac-7cea-410a-ac94-959fc43823bc\") " pod="openstack/ceilometer-0"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.607142 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a18f46ac-7cea-410a-ac94-959fc43823bc-run-httpd\") pod \"ceilometer-0\" (UID: \"a18f46ac-7cea-410a-ac94-959fc43823bc\") " pod="openstack/ceilometer-0"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.607169 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfc6r\" (UniqueName: \"kubernetes.io/projected/a18f46ac-7cea-410a-ac94-959fc43823bc-kube-api-access-vfc6r\") pod \"ceilometer-0\" (UID: \"a18f46ac-7cea-410a-ac94-959fc43823bc\") " pod="openstack/ceilometer-0"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.607188 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a18f46ac-7cea-410a-ac94-959fc43823bc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a18f46ac-7cea-410a-ac94-959fc43823bc\") " pod="openstack/ceilometer-0"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.607226 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a18f46ac-7cea-410a-ac94-959fc43823bc-config-data\") pod \"ceilometer-0\" (UID: \"a18f46ac-7cea-410a-ac94-959fc43823bc\") " pod="openstack/ceilometer-0"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.607383 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a18f46ac-7cea-410a-ac94-959fc43823bc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a18f46ac-7cea-410a-ac94-959fc43823bc\") " pod="openstack/ceilometer-0"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.607474 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a18f46ac-7cea-410a-ac94-959fc43823bc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a18f46ac-7cea-410a-ac94-959fc43823bc\") " pod="openstack/ceilometer-0"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.607539 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a18f46ac-7cea-410a-ac94-959fc43823bc-scripts\") pod \"ceilometer-0\" (UID: \"a18f46ac-7cea-410a-ac94-959fc43823bc\") " pod="openstack/ceilometer-0"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.607537 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a18f46ac-7cea-410a-ac94-959fc43823bc-log-httpd\") pod \"ceilometer-0\" (UID: \"a18f46ac-7cea-410a-ac94-959fc43823bc\") " pod="openstack/ceilometer-0"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.607567 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a18f46ac-7cea-410a-ac94-959fc43823bc-run-httpd\") pod \"ceilometer-0\" (UID: \"a18f46ac-7cea-410a-ac94-959fc43823bc\") " pod="openstack/ceilometer-0"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.612462 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a18f46ac-7cea-410a-ac94-959fc43823bc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a18f46ac-7cea-410a-ac94-959fc43823bc\") " pod="openstack/ceilometer-0"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.612989 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a18f46ac-7cea-410a-ac94-959fc43823bc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a18f46ac-7cea-410a-ac94-959fc43823bc\") " pod="openstack/ceilometer-0"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.613083 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a18f46ac-7cea-410a-ac94-959fc43823bc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a18f46ac-7cea-410a-ac94-959fc43823bc\") " pod="openstack/ceilometer-0"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.613573 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a18f46ac-7cea-410a-ac94-959fc43823bc-config-data\") pod \"ceilometer-0\" (UID: \"a18f46ac-7cea-410a-ac94-959fc43823bc\") " pod="openstack/ceilometer-0"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.614463 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a18f46ac-7cea-410a-ac94-959fc43823bc-scripts\") pod \"ceilometer-0\" (UID: \"a18f46ac-7cea-410a-ac94-959fc43823bc\") " pod="openstack/ceilometer-0"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.624260 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfc6r\" (UniqueName: \"kubernetes.io/projected/a18f46ac-7cea-410a-ac94-959fc43823bc-kube-api-access-vfc6r\") pod \"ceilometer-0\" (UID: \"a18f46ac-7cea-410a-ac94-959fc43823bc\") " pod="openstack/ceilometer-0"
Jan 22 09:49:12 crc kubenswrapper[4811]: I0122 09:49:12.675089 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 22 09:49:13 crc kubenswrapper[4811]: I0122 09:49:13.132069 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 09:49:13 crc kubenswrapper[4811]: I0122 09:49:13.253960 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a18f46ac-7cea-410a-ac94-959fc43823bc","Type":"ContainerStarted","Data":"7f2433c01aca26c6e54b1cda51dff4740fa00e910a848cda3220bdce79cfe005"}
Jan 22 09:49:13 crc kubenswrapper[4811]: I0122 09:49:13.257477 4811 generic.go:334] "Generic (PLEG): container finished" podID="660d9785-0f9b-4953-af76-580ed227c244" containerID="d932f2adbda59824841f6cb86225f24b2e877e9f69ebadd4e657ce3e01adf42c" exitCode=0
Jan 22 09:49:13 crc kubenswrapper[4811]: I0122 09:49:13.257541 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d6c54dc74-r2gjr" event={"ID":"660d9785-0f9b-4953-af76-580ed227c244","Type":"ContainerDied","Data":"d932f2adbda59824841f6cb86225f24b2e877e9f69ebadd4e657ce3e01adf42c"}
Jan 22 09:49:13 crc kubenswrapper[4811]: I0122 09:49:13.471777 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0"
Jan 22 09:49:14 crc kubenswrapper[4811]: I0122 09:49:14.000418 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d71dac51-18b5-44ad-b5f4-d3c46943a8dc" path="/var/lib/kubelet/pods/d71dac51-18b5-44ad-b5f4-d3c46943a8dc/volumes"
Jan 22 09:49:14 crc kubenswrapper[4811]: I0122 09:49:14.207495 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-d6c54dc74-r2gjr" podUID="660d9785-0f9b-4953-af76-580ed227c244" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.241:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.241:8443: connect: connection refused"
Jan 22 09:49:14 crc kubenswrapper[4811]: I0122 09:49:14.270225 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a18f46ac-7cea-410a-ac94-959fc43823bc","Type":"ContainerStarted","Data":"8ff08e389df21ff4389e467c21c4fe96d4eac8e03a2fb49b3bac0a6323076006"}
Jan 22 09:49:14 crc kubenswrapper[4811]: I0122 09:49:14.851003 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0"
Jan 22 09:49:14 crc kubenswrapper[4811]: I0122 09:49:14.899033 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"]
Jan 22 09:49:15 crc kubenswrapper[4811]: I0122 09:49:15.280409 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a18f46ac-7cea-410a-ac94-959fc43823bc","Type":"ContainerStarted","Data":"5dd396d6dd585a70a036b7350f297e02592c7d59cecb125a378edcf12e8393c4"}
Jan 22 09:49:15 crc kubenswrapper[4811]: I0122 09:49:15.281423 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="df635a7c-5d1e-4116-af78-c06f331d0a8b" containerName="manila-scheduler" containerID="cri-o://99436b7d124c7f6b06edf217c1a6b192ccdcfb23f2430ff7258a4532d7544b24" gracePeriod=30
Jan 22 09:49:15 crc kubenswrapper[4811]: I0122 09:49:15.281458 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="df635a7c-5d1e-4116-af78-c06f331d0a8b" containerName="probe" containerID="cri-o://3b8bfc4e7a882e3f7d9a692d5f5dd1ece4446714b3a7a64adc55482750bd768a" gracePeriod=30
Jan 22 09:49:16 crc kubenswrapper[4811]: I0122 09:49:16.301584 4811 generic.go:334] "Generic (PLEG): container finished" podID="df635a7c-5d1e-4116-af78-c06f331d0a8b" containerID="3b8bfc4e7a882e3f7d9a692d5f5dd1ece4446714b3a7a64adc55482750bd768a" exitCode=0
Jan 22 09:49:16 crc kubenswrapper[4811]: I0122 09:49:16.301940 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"df635a7c-5d1e-4116-af78-c06f331d0a8b","Type":"ContainerDied","Data":"3b8bfc4e7a882e3f7d9a692d5f5dd1ece4446714b3a7a64adc55482750bd768a"}
Jan 22 09:49:16 crc kubenswrapper[4811]: I0122 09:49:16.305558 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a18f46ac-7cea-410a-ac94-959fc43823bc","Type":"ContainerStarted","Data":"e2f934c325626d39e40a7161efaacb6d7b77f41eeadda7e1382fe77c873e2e66"}
Jan 22 09:49:17 crc kubenswrapper[4811]: I0122 09:49:17.321571 4811 generic.go:334] "Generic (PLEG): container finished" podID="df635a7c-5d1e-4116-af78-c06f331d0a8b" containerID="99436b7d124c7f6b06edf217c1a6b192ccdcfb23f2430ff7258a4532d7544b24" exitCode=0
Jan 22 09:49:17 crc kubenswrapper[4811]: I0122 09:49:17.321915 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"df635a7c-5d1e-4116-af78-c06f331d0a8b","Type":"ContainerDied","Data":"99436b7d124c7f6b06edf217c1a6b192ccdcfb23f2430ff7258a4532d7544b24"}
Jan 22 09:49:17 crc kubenswrapper[4811]: I0122 09:49:17.333067 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a18f46ac-7cea-410a-ac94-959fc43823bc","Type":"ContainerStarted","Data":"d25d50f8482e0c97cd945956569aa4457cb9c6c061ec646beb46107436616beb"}
Jan 22 09:49:17 crc kubenswrapper[4811]: I0122 09:49:17.333304 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 22 09:49:17 crc kubenswrapper[4811]: I0122 09:49:17.361671 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.607828987 podStartE2EDuration="5.361650703s" podCreationTimestamp="2026-01-22 09:49:12 +0000 UTC" firstStartedPulling="2026-01-22 09:49:13.143195955 +0000 UTC m=+2597.465383078" lastFinishedPulling="2026-01-22 09:49:16.897017671 +0000 UTC m=+2601.219204794" observedRunningTime="2026-01-22 09:49:17.35356561 +0000 UTC m=+2601.675752733" watchObservedRunningTime="2026-01-22 09:49:17.361650703 +0000 UTC m=+2601.683837826"
Jan 22 09:49:17 crc kubenswrapper[4811]: I0122 09:49:17.687432 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0"
Jan 22 09:49:17 crc kubenswrapper[4811]: I0122 09:49:17.743929 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df635a7c-5d1e-4116-af78-c06f331d0a8b-combined-ca-bundle\") pod \"df635a7c-5d1e-4116-af78-c06f331d0a8b\" (UID: \"df635a7c-5d1e-4116-af78-c06f331d0a8b\") "
Jan 22 09:49:17 crc kubenswrapper[4811]: I0122 09:49:17.744097 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df635a7c-5d1e-4116-af78-c06f331d0a8b-config-data\") pod \"df635a7c-5d1e-4116-af78-c06f331d0a8b\" (UID: \"df635a7c-5d1e-4116-af78-c06f331d0a8b\") "
Jan 22 09:49:17 crc kubenswrapper[4811]: I0122 09:49:17.744296 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/df635a7c-5d1e-4116-af78-c06f331d0a8b-etc-machine-id\") pod \"df635a7c-5d1e-4116-af78-c06f331d0a8b\" (UID: \"df635a7c-5d1e-4116-af78-c06f331d0a8b\") "
Jan 22 09:49:17 crc kubenswrapper[4811]: I0122 09:49:17.744336 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wprcr\" (UniqueName: \"kubernetes.io/projected/df635a7c-5d1e-4116-af78-c06f331d0a8b-kube-api-access-wprcr\") pod \"df635a7c-5d1e-4116-af78-c06f331d0a8b\" (UID: \"df635a7c-5d1e-4116-af78-c06f331d0a8b\") "
Jan 22 09:49:17 crc kubenswrapper[4811]: I0122 09:49:17.744473 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df635a7c-5d1e-4116-af78-c06f331d0a8b-scripts\") pod \"df635a7c-5d1e-4116-af78-c06f331d0a8b\" (UID: \"df635a7c-5d1e-4116-af78-c06f331d0a8b\") "
Jan 22 09:49:17 crc kubenswrapper[4811]: I0122 09:49:17.745129 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/df635a7c-5d1e-4116-af78-c06f331d0a8b-config-data-custom\") pod \"df635a7c-5d1e-4116-af78-c06f331d0a8b\" (UID: \"df635a7c-5d1e-4116-af78-c06f331d0a8b\") "
Jan 22 09:49:17 crc kubenswrapper[4811]: I0122 09:49:17.746945 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df635a7c-5d1e-4116-af78-c06f331d0a8b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "df635a7c-5d1e-4116-af78-c06f331d0a8b" (UID: "df635a7c-5d1e-4116-af78-c06f331d0a8b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 22 09:49:17 crc kubenswrapper[4811]: I0122 09:49:17.756789 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df635a7c-5d1e-4116-af78-c06f331d0a8b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "df635a7c-5d1e-4116-af78-c06f331d0a8b" (UID: "df635a7c-5d1e-4116-af78-c06f331d0a8b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:49:17 crc kubenswrapper[4811]: I0122 09:49:17.756835 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df635a7c-5d1e-4116-af78-c06f331d0a8b-kube-api-access-wprcr" (OuterVolumeSpecName: "kube-api-access-wprcr") pod "df635a7c-5d1e-4116-af78-c06f331d0a8b" (UID: "df635a7c-5d1e-4116-af78-c06f331d0a8b"). InnerVolumeSpecName "kube-api-access-wprcr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 09:49:17 crc kubenswrapper[4811]: I0122 09:49:17.756915 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df635a7c-5d1e-4116-af78-c06f331d0a8b-scripts" (OuterVolumeSpecName: "scripts") pod "df635a7c-5d1e-4116-af78-c06f331d0a8b" (UID: "df635a7c-5d1e-4116-af78-c06f331d0a8b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:49:17 crc kubenswrapper[4811]: I0122 09:49:17.804585 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df635a7c-5d1e-4116-af78-c06f331d0a8b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df635a7c-5d1e-4116-af78-c06f331d0a8b" (UID: "df635a7c-5d1e-4116-af78-c06f331d0a8b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:49:17 crc kubenswrapper[4811]: I0122 09:49:17.836276 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df635a7c-5d1e-4116-af78-c06f331d0a8b-config-data" (OuterVolumeSpecName: "config-data") pod "df635a7c-5d1e-4116-af78-c06f331d0a8b" (UID: "df635a7c-5d1e-4116-af78-c06f331d0a8b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 09:49:17 crc kubenswrapper[4811]: I0122 09:49:17.849075 4811 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/df635a7c-5d1e-4116-af78-c06f331d0a8b-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 22 09:49:17 crc kubenswrapper[4811]: I0122 09:49:17.849106 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wprcr\" (UniqueName: \"kubernetes.io/projected/df635a7c-5d1e-4116-af78-c06f331d0a8b-kube-api-access-wprcr\") on node \"crc\" DevicePath \"\""
Jan 22 09:49:17 crc kubenswrapper[4811]: I0122 09:49:17.849119 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df635a7c-5d1e-4116-af78-c06f331d0a8b-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 09:49:17 crc kubenswrapper[4811]: I0122 09:49:17.849128 4811 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/df635a7c-5d1e-4116-af78-c06f331d0a8b-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 22 09:49:17 crc kubenswrapper[4811]: I0122 09:49:17.849137 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df635a7c-5d1e-4116-af78-c06f331d0a8b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 09:49:17 crc kubenswrapper[4811]: I0122 09:49:17.849147 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df635a7c-5d1e-4116-af78-c06f331d0a8b-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 09:49:18 crc kubenswrapper[4811]: I0122 09:49:18.341606 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"df635a7c-5d1e-4116-af78-c06f331d0a8b","Type":"ContainerDied","Data":"953057de622309c1565f208edb7b5974238f984cc8f694117fb63f9e5f82480d"}
Jan 22 09:49:18 crc kubenswrapper[4811]: I0122 09:49:18.341736 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0"
Jan 22 09:49:18 crc kubenswrapper[4811]: I0122 09:49:18.341996 4811 scope.go:117] "RemoveContainer" containerID="3b8bfc4e7a882e3f7d9a692d5f5dd1ece4446714b3a7a64adc55482750bd768a"
Jan 22 09:49:18 crc kubenswrapper[4811]: I0122 09:49:18.367199 4811 scope.go:117] "RemoveContainer" containerID="99436b7d124c7f6b06edf217c1a6b192ccdcfb23f2430ff7258a4532d7544b24"
Jan 22 09:49:18 crc kubenswrapper[4811]: I0122 09:49:18.369169 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"]
Jan 22 09:49:18 crc kubenswrapper[4811]: I0122 09:49:18.379013 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-scheduler-0"]
Jan 22 09:49:18 crc kubenswrapper[4811]: I0122 09:49:18.406668 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"]
Jan 22 09:49:18 crc kubenswrapper[4811]: E0122 09:49:18.407184 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df635a7c-5d1e-4116-af78-c06f331d0a8b" containerName="probe"
Jan 22 09:49:18 crc kubenswrapper[4811]: I0122 09:49:18.407205 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="df635a7c-5d1e-4116-af78-c06f331d0a8b" containerName="probe"
Jan 22 09:49:18 crc kubenswrapper[4811]: E0122 09:49:18.407234 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df635a7c-5d1e-4116-af78-c06f331d0a8b" containerName="manila-scheduler"
Jan 22 09:49:18 crc kubenswrapper[4811]: I0122 09:49:18.407240 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="df635a7c-5d1e-4116-af78-c06f331d0a8b" containerName="manila-scheduler"
Jan 22 09:49:18 crc kubenswrapper[4811]: I0122 09:49:18.407481 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="df635a7c-5d1e-4116-af78-c06f331d0a8b" containerName="manila-scheduler"
Jan 22 09:49:18 crc kubenswrapper[4811]: I0122 09:49:18.407507 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="df635a7c-5d1e-4116-af78-c06f331d0a8b" containerName="probe"
Jan 22 09:49:18 crc kubenswrapper[4811]: I0122 09:49:18.408706 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0"
Jan 22 09:49:18 crc kubenswrapper[4811]: I0122 09:49:18.413758 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data"
Jan 22 09:49:18 crc kubenswrapper[4811]: I0122 09:49:18.421238 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"]
Jan 22 09:49:18 crc kubenswrapper[4811]: I0122 09:49:18.563880 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4acbe231-7f63-499d-8813-f7a18c9d70fa-config-data\") pod \"manila-scheduler-0\" (UID: \"4acbe231-7f63-499d-8813-f7a18c9d70fa\") " pod="openstack/manila-scheduler-0"
Jan 22 09:49:18 crc kubenswrapper[4811]: I0122 09:49:18.564092 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4acbe231-7f63-499d-8813-f7a18c9d70fa-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"4acbe231-7f63-499d-8813-f7a18c9d70fa\") " pod="openstack/manila-scheduler-0"
Jan 22 09:49:18 crc kubenswrapper[4811]: I0122 09:49:18.564206 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4acbe231-7f63-499d-8813-f7a18c9d70fa-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"4acbe231-7f63-499d-8813-f7a18c9d70fa\") " pod="openstack/manila-scheduler-0"
Jan 22 09:49:18 crc kubenswrapper[4811]: I0122 09:49:18.564274 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4acbe231-7f63-499d-8813-f7a18c9d70fa-scripts\") pod \"manila-scheduler-0\" (UID: \"4acbe231-7f63-499d-8813-f7a18c9d70fa\") " pod="openstack/manila-scheduler-0"
Jan 22 09:49:18 crc kubenswrapper[4811]: I0122 09:49:18.564369 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcxgk\" (UniqueName: \"kubernetes.io/projected/4acbe231-7f63-499d-8813-f7a18c9d70fa-kube-api-access-dcxgk\") pod \"manila-scheduler-0\" (UID: \"4acbe231-7f63-499d-8813-f7a18c9d70fa\") " pod="openstack/manila-scheduler-0"
Jan 22 09:49:18 crc kubenswrapper[4811]: I0122 09:49:18.564453 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4acbe231-7f63-499d-8813-f7a18c9d70fa-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"4acbe231-7f63-499d-8813-f7a18c9d70fa\") " pod="openstack/manila-scheduler-0"
Jan 22 09:49:18 crc kubenswrapper[4811]: I0122 09:49:18.666640 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4acbe231-7f63-499d-8813-f7a18c9d70fa-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"4acbe231-7f63-499d-8813-f7a18c9d70fa\") " pod="openstack/manila-scheduler-0"
Jan 22 09:49:18 crc kubenswrapper[4811]: I0122 09:49:18.666697 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4acbe231-7f63-499d-8813-f7a18c9d70fa-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"4acbe231-7f63-499d-8813-f7a18c9d70fa\") " pod="openstack/manila-scheduler-0"
Jan 22 09:49:18 crc kubenswrapper[4811]: I0122 09:49:18.666722 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4acbe231-7f63-499d-8813-f7a18c9d70fa-scripts\") pod \"manila-scheduler-0\" (UID: \"4acbe231-7f63-499d-8813-f7a18c9d70fa\") " pod="openstack/manila-scheduler-0"
Jan 22 09:49:18 crc kubenswrapper[4811]: I0122 09:49:18.666831 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcxgk\" (UniqueName: \"kubernetes.io/projected/4acbe231-7f63-499d-8813-f7a18c9d70fa-kube-api-access-dcxgk\") pod \"manila-scheduler-0\" (UID: \"4acbe231-7f63-499d-8813-f7a18c9d70fa\") " pod="openstack/manila-scheduler-0"
Jan 22 09:49:18 crc kubenswrapper[4811]: I0122 09:49:18.666911 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4acbe231-7f63-499d-8813-f7a18c9d70fa-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"4acbe231-7f63-499d-8813-f7a18c9d70fa\") " pod="openstack/manila-scheduler-0"
Jan 22 09:49:18 crc kubenswrapper[4811]: I0122 09:49:18.667271 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4acbe231-7f63-499d-8813-f7a18c9d70fa-config-data\") pod \"manila-scheduler-0\" (UID: \"4acbe231-7f63-499d-8813-f7a18c9d70fa\") " pod="openstack/manila-scheduler-0"
Jan 22 09:49:18 crc kubenswrapper[4811]: I0122 09:49:18.667826 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4acbe231-7f63-499d-8813-f7a18c9d70fa-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"4acbe231-7f63-499d-8813-f7a18c9d70fa\") " pod="openstack/manila-scheduler-0"
Jan 22 09:49:18 crc kubenswrapper[4811]: I0122 09:49:18.672080 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4acbe231-7f63-499d-8813-f7a18c9d70fa-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"4acbe231-7f63-499d-8813-f7a18c9d70fa\") " pod="openstack/manila-scheduler-0"
Jan 22 09:49:18 crc kubenswrapper[4811]: I0122 09:49:18.672800 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4acbe231-7f63-499d-8813-f7a18c9d70fa-scripts\") pod \"manila-scheduler-0\" (UID: \"4acbe231-7f63-499d-8813-f7a18c9d70fa\") " pod="openstack/manila-scheduler-0"
Jan 22 09:49:18 crc kubenswrapper[4811]: I0122 09:49:18.687776 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4acbe231-7f63-499d-8813-f7a18c9d70fa-config-data\") pod \"manila-scheduler-0\" (UID: \"4acbe231-7f63-499d-8813-f7a18c9d70fa\") " pod="openstack/manila-scheduler-0"
Jan 22 09:49:18 crc kubenswrapper[4811]: I0122 09:49:18.691904 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4acbe231-7f63-499d-8813-f7a18c9d70fa-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"4acbe231-7f63-499d-8813-f7a18c9d70fa\") " pod="openstack/manila-scheduler-0"
Jan 22 09:49:18 crc kubenswrapper[4811]: I0122 09:49:18.694064 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcxgk\" (UniqueName: \"kubernetes.io/projected/4acbe231-7f63-499d-8813-f7a18c9d70fa-kube-api-access-dcxgk\") pod \"manila-scheduler-0\" (UID: \"4acbe231-7f63-499d-8813-f7a18c9d70fa\") " pod="openstack/manila-scheduler-0"
Jan 22 09:49:18
crc kubenswrapper[4811]: I0122 09:49:18.727823 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 22 09:49:19 crc kubenswrapper[4811]: I0122 09:49:19.153164 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 22 09:49:19 crc kubenswrapper[4811]: I0122 09:49:19.354359 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"4acbe231-7f63-499d-8813-f7a18c9d70fa","Type":"ContainerStarted","Data":"b24d3dd48ccdb95c57f30738cb5013a38328ed410502bb4fc23309f1617999ef"} Jan 22 09:49:20 crc kubenswrapper[4811]: I0122 09:49:20.003609 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df635a7c-5d1e-4116-af78-c06f331d0a8b" path="/var/lib/kubelet/pods/df635a7c-5d1e-4116-af78-c06f331d0a8b/volumes" Jan 22 09:49:20 crc kubenswrapper[4811]: I0122 09:49:20.086207 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Jan 22 09:49:20 crc kubenswrapper[4811]: I0122 09:49:20.362981 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"4acbe231-7f63-499d-8813-f7a18c9d70fa","Type":"ContainerStarted","Data":"f0deba30fb929121710042646501a48b34b4dc984fa5c520337a881fdf3f20a2"} Jan 22 09:49:20 crc kubenswrapper[4811]: I0122 09:49:20.363023 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"4acbe231-7f63-499d-8813-f7a18c9d70fa","Type":"ContainerStarted","Data":"5fc829329dcd4f8398e1f5630de853de877d6fff6fe36543c114fb8de45c60d5"} Jan 22 09:49:20 crc kubenswrapper[4811]: I0122 09:49:20.404272 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=2.404261271 podStartE2EDuration="2.404261271s" podCreationTimestamp="2026-01-22 09:49:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:49:20.397427536 +0000 UTC m=+2604.719614659" watchObservedRunningTime="2026-01-22 09:49:20.404261271 +0000 UTC m=+2604.726448393" Jan 22 09:49:24 crc kubenswrapper[4811]: I0122 09:49:24.207255 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-d6c54dc74-r2gjr" podUID="660d9785-0f9b-4953-af76-580ed227c244" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.241:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.241:8443: connect: connection refused" Jan 22 09:49:24 crc kubenswrapper[4811]: I0122 09:49:24.674743 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Jan 22 09:49:24 crc kubenswrapper[4811]: I0122 09:49:24.730695 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Jan 22 09:49:25 crc kubenswrapper[4811]: I0122 09:49:25.402303 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="0326dd33-6447-432e-9f9f-1ee950f5c82d" containerName="manila-share" containerID="cri-o://538d115fefd99bdede60d08b99b7a4acc76fc80a4cf3958a169d88803cf4a122" gracePeriod=30 Jan 22 09:49:25 crc kubenswrapper[4811]: I0122 09:49:25.402413 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="0326dd33-6447-432e-9f9f-1ee950f5c82d" containerName="probe" 
containerID="cri-o://7c32946b9428edaf8e1b92bc2a0d31c11fb011792f40315448537e0476fa8d17" gracePeriod=30 Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.301186 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.424228 4811 generic.go:334] "Generic (PLEG): container finished" podID="0326dd33-6447-432e-9f9f-1ee950f5c82d" containerID="7c32946b9428edaf8e1b92bc2a0d31c11fb011792f40315448537e0476fa8d17" exitCode=0 Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.424254 4811 generic.go:334] "Generic (PLEG): container finished" podID="0326dd33-6447-432e-9f9f-1ee950f5c82d" containerID="538d115fefd99bdede60d08b99b7a4acc76fc80a4cf3958a169d88803cf4a122" exitCode=1 Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.424275 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0326dd33-6447-432e-9f9f-1ee950f5c82d","Type":"ContainerDied","Data":"7c32946b9428edaf8e1b92bc2a0d31c11fb011792f40315448537e0476fa8d17"} Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.424302 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0326dd33-6447-432e-9f9f-1ee950f5c82d","Type":"ContainerDied","Data":"538d115fefd99bdede60d08b99b7a4acc76fc80a4cf3958a169d88803cf4a122"} Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.424311 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0326dd33-6447-432e-9f9f-1ee950f5c82d","Type":"ContainerDied","Data":"277d0c04705dae46590f8c97c9a0c0b602843c97d0a283f00575bb9b26ac3e7d"} Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.424326 4811 scope.go:117] "RemoveContainer" containerID="7c32946b9428edaf8e1b92bc2a0d31c11fb011792f40315448537e0476fa8d17" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.424442 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.431534 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0326dd33-6447-432e-9f9f-1ee950f5c82d-config-data-custom\") pod \"0326dd33-6447-432e-9f9f-1ee950f5c82d\" (UID: \"0326dd33-6447-432e-9f9f-1ee950f5c82d\") " Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.431719 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0326dd33-6447-432e-9f9f-1ee950f5c82d-scripts\") pod \"0326dd33-6447-432e-9f9f-1ee950f5c82d\" (UID: \"0326dd33-6447-432e-9f9f-1ee950f5c82d\") " Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.431768 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/0326dd33-6447-432e-9f9f-1ee950f5c82d-var-lib-manila\") pod \"0326dd33-6447-432e-9f9f-1ee950f5c82d\" (UID: \"0326dd33-6447-432e-9f9f-1ee950f5c82d\") " Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.431787 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0326dd33-6447-432e-9f9f-1ee950f5c82d-ceph\") pod \"0326dd33-6447-432e-9f9f-1ee950f5c82d\" (UID: \"0326dd33-6447-432e-9f9f-1ee950f5c82d\") " Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.431914 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0326dd33-6447-432e-9f9f-1ee950f5c82d-config-data\") pod \"0326dd33-6447-432e-9f9f-1ee950f5c82d\" (UID: \"0326dd33-6447-432e-9f9f-1ee950f5c82d\") " Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.431967 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0326dd33-6447-432e-9f9f-1ee950f5c82d-combined-ca-bundle\") pod \"0326dd33-6447-432e-9f9f-1ee950f5c82d\" (UID: \"0326dd33-6447-432e-9f9f-1ee950f5c82d\") " Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.431996 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptrb2\" (UniqueName: \"kubernetes.io/projected/0326dd33-6447-432e-9f9f-1ee950f5c82d-kube-api-access-ptrb2\") pod \"0326dd33-6447-432e-9f9f-1ee950f5c82d\" (UID: \"0326dd33-6447-432e-9f9f-1ee950f5c82d\") " Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.432036 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0326dd33-6447-432e-9f9f-1ee950f5c82d-etc-machine-id\") pod \"0326dd33-6447-432e-9f9f-1ee950f5c82d\" (UID: \"0326dd33-6447-432e-9f9f-1ee950f5c82d\") " Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.433369 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0326dd33-6447-432e-9f9f-1ee950f5c82d-var-lib-manila" (OuterVolumeSpecName: "var-lib-manila") pod "0326dd33-6447-432e-9f9f-1ee950f5c82d" (UID: "0326dd33-6447-432e-9f9f-1ee950f5c82d"). InnerVolumeSpecName "var-lib-manila". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.437347 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0326dd33-6447-432e-9f9f-1ee950f5c82d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0326dd33-6447-432e-9f9f-1ee950f5c82d" (UID: "0326dd33-6447-432e-9f9f-1ee950f5c82d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.445288 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0326dd33-6447-432e-9f9f-1ee950f5c82d-ceph" (OuterVolumeSpecName: "ceph") pod "0326dd33-6447-432e-9f9f-1ee950f5c82d" (UID: "0326dd33-6447-432e-9f9f-1ee950f5c82d"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.447574 4811 reconciler_common.go:293] "Volume detached for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/0326dd33-6447-432e-9f9f-1ee950f5c82d-var-lib-manila\") on node \"crc\" DevicePath \"\"" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.447598 4811 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0326dd33-6447-432e-9f9f-1ee950f5c82d-ceph\") on node \"crc\" DevicePath \"\"" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.447694 4811 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0326dd33-6447-432e-9f9f-1ee950f5c82d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.457315 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0326dd33-6447-432e-9f9f-1ee950f5c82d-scripts" (OuterVolumeSpecName: "scripts") pod "0326dd33-6447-432e-9f9f-1ee950f5c82d" (UID: "0326dd33-6447-432e-9f9f-1ee950f5c82d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.462016 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0326dd33-6447-432e-9f9f-1ee950f5c82d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0326dd33-6447-432e-9f9f-1ee950f5c82d" (UID: "0326dd33-6447-432e-9f9f-1ee950f5c82d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.476690 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0326dd33-6447-432e-9f9f-1ee950f5c82d-kube-api-access-ptrb2" (OuterVolumeSpecName: "kube-api-access-ptrb2") pod "0326dd33-6447-432e-9f9f-1ee950f5c82d" (UID: "0326dd33-6447-432e-9f9f-1ee950f5c82d"). InnerVolumeSpecName "kube-api-access-ptrb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.514293 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0326dd33-6447-432e-9f9f-1ee950f5c82d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0326dd33-6447-432e-9f9f-1ee950f5c82d" (UID: "0326dd33-6447-432e-9f9f-1ee950f5c82d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.550287 4811 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0326dd33-6447-432e-9f9f-1ee950f5c82d-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.550325 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0326dd33-6447-432e-9f9f-1ee950f5c82d-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.550339 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0326dd33-6447-432e-9f9f-1ee950f5c82d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.550349 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptrb2\" (UniqueName: \"kubernetes.io/projected/0326dd33-6447-432e-9f9f-1ee950f5c82d-kube-api-access-ptrb2\") on node \"crc\" DevicePath \"\"" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.550545 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0326dd33-6447-432e-9f9f-1ee950f5c82d-config-data" (OuterVolumeSpecName: "config-data") pod "0326dd33-6447-432e-9f9f-1ee950f5c82d" (UID: "0326dd33-6447-432e-9f9f-1ee950f5c82d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.556618 4811 scope.go:117] "RemoveContainer" containerID="538d115fefd99bdede60d08b99b7a4acc76fc80a4cf3958a169d88803cf4a122" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.590060 4811 scope.go:117] "RemoveContainer" containerID="7c32946b9428edaf8e1b92bc2a0d31c11fb011792f40315448537e0476fa8d17" Jan 22 09:49:26 crc kubenswrapper[4811]: E0122 09:49:26.590643 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c32946b9428edaf8e1b92bc2a0d31c11fb011792f40315448537e0476fa8d17\": container with ID starting with 7c32946b9428edaf8e1b92bc2a0d31c11fb011792f40315448537e0476fa8d17 not found: ID does not exist" containerID="7c32946b9428edaf8e1b92bc2a0d31c11fb011792f40315448537e0476fa8d17" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.590683 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c32946b9428edaf8e1b92bc2a0d31c11fb011792f40315448537e0476fa8d17"} err="failed to get container status \"7c32946b9428edaf8e1b92bc2a0d31c11fb011792f40315448537e0476fa8d17\": rpc error: code = NotFound desc = could not find container \"7c32946b9428edaf8e1b92bc2a0d31c11fb011792f40315448537e0476fa8d17\": container with ID starting with 7c32946b9428edaf8e1b92bc2a0d31c11fb011792f40315448537e0476fa8d17 not found: ID does not exist" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.590735 4811 scope.go:117] "RemoveContainer" containerID="538d115fefd99bdede60d08b99b7a4acc76fc80a4cf3958a169d88803cf4a122" Jan 22 09:49:26 crc kubenswrapper[4811]: E0122 09:49:26.591172 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"538d115fefd99bdede60d08b99b7a4acc76fc80a4cf3958a169d88803cf4a122\": container with ID starting with 538d115fefd99bdede60d08b99b7a4acc76fc80a4cf3958a169d88803cf4a122 not found: ID does not exist" 
containerID="538d115fefd99bdede60d08b99b7a4acc76fc80a4cf3958a169d88803cf4a122" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.591224 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"538d115fefd99bdede60d08b99b7a4acc76fc80a4cf3958a169d88803cf4a122"} err="failed to get container status \"538d115fefd99bdede60d08b99b7a4acc76fc80a4cf3958a169d88803cf4a122\": rpc error: code = NotFound desc = could not find container \"538d115fefd99bdede60d08b99b7a4acc76fc80a4cf3958a169d88803cf4a122\": container with ID starting with 538d115fefd99bdede60d08b99b7a4acc76fc80a4cf3958a169d88803cf4a122 not found: ID does not exist" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.591242 4811 scope.go:117] "RemoveContainer" containerID="7c32946b9428edaf8e1b92bc2a0d31c11fb011792f40315448537e0476fa8d17" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.591691 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c32946b9428edaf8e1b92bc2a0d31c11fb011792f40315448537e0476fa8d17"} err="failed to get container status \"7c32946b9428edaf8e1b92bc2a0d31c11fb011792f40315448537e0476fa8d17\": rpc error: code = NotFound desc = could not find container \"7c32946b9428edaf8e1b92bc2a0d31c11fb011792f40315448537e0476fa8d17\": container with ID starting with 7c32946b9428edaf8e1b92bc2a0d31c11fb011792f40315448537e0476fa8d17 not found: ID does not exist" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.591752 4811 scope.go:117] "RemoveContainer" containerID="538d115fefd99bdede60d08b99b7a4acc76fc80a4cf3958a169d88803cf4a122" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.592068 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"538d115fefd99bdede60d08b99b7a4acc76fc80a4cf3958a169d88803cf4a122"} err="failed to get container status \"538d115fefd99bdede60d08b99b7a4acc76fc80a4cf3958a169d88803cf4a122\": rpc error: code = NotFound desc = could not find container \"538d115fefd99bdede60d08b99b7a4acc76fc80a4cf3958a169d88803cf4a122\": container with ID starting with 538d115fefd99bdede60d08b99b7a4acc76fc80a4cf3958a169d88803cf4a122 not found: ID does not exist" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.652456 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0326dd33-6447-432e-9f9f-1ee950f5c82d-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.759983 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.769092 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-share-share1-0"] Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.784787 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Jan 22 09:49:26 crc kubenswrapper[4811]: E0122 09:49:26.785304 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0326dd33-6447-432e-9f9f-1ee950f5c82d" containerName="probe" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.785326 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="0326dd33-6447-432e-9f9f-1ee950f5c82d" containerName="probe" Jan 22 09:49:26 crc kubenswrapper[4811]: E0122 09:49:26.785345 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0326dd33-6447-432e-9f9f-1ee950f5c82d" containerName="manila-share" Jan 22 09:49:26 crc 
kubenswrapper[4811]: I0122 09:49:26.785353 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="0326dd33-6447-432e-9f9f-1ee950f5c82d" containerName="manila-share" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.785566 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="0326dd33-6447-432e-9f9f-1ee950f5c82d" containerName="probe" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.785596 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="0326dd33-6447-432e-9f9f-1ee950f5c82d" containerName="manila-share" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.786836 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.789774 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.808666 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.862506 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxfdn\" (UniqueName: \"kubernetes.io/projected/f3e1d3e9-c984-442b-8a77-28b88a934ebc-kube-api-access-zxfdn\") pod \"manila-share-share1-0\" (UID: \"f3e1d3e9-c984-442b-8a77-28b88a934ebc\") " pod="openstack/manila-share-share1-0" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.862937 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3e1d3e9-c984-442b-8a77-28b88a934ebc-config-data\") pod \"manila-share-share1-0\" (UID: \"f3e1d3e9-c984-442b-8a77-28b88a934ebc\") " pod="openstack/manila-share-share1-0" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.863053 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/f3e1d3e9-c984-442b-8a77-28b88a934ebc-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"f3e1d3e9-c984-442b-8a77-28b88a934ebc\") " pod="openstack/manila-share-share1-0" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.863080 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3e1d3e9-c984-442b-8a77-28b88a934ebc-scripts\") pod \"manila-share-share1-0\" (UID: \"f3e1d3e9-c984-442b-8a77-28b88a934ebc\") " pod="openstack/manila-share-share1-0" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.863389 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f3e1d3e9-c984-442b-8a77-28b88a934ebc-ceph\") pod \"manila-share-share1-0\" (UID: \"f3e1d3e9-c984-442b-8a77-28b88a934ebc\") " pod="openstack/manila-share-share1-0" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.863425 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f3e1d3e9-c984-442b-8a77-28b88a934ebc-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"f3e1d3e9-c984-442b-8a77-28b88a934ebc\") " pod="openstack/manila-share-share1-0" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.863567 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3e1d3e9-c984-442b-8a77-28b88a934ebc-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"f3e1d3e9-c984-442b-8a77-28b88a934ebc\") " pod="openstack/manila-share-share1-0" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.863608 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3e1d3e9-c984-442b-8a77-28b88a934ebc-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"f3e1d3e9-c984-442b-8a77-28b88a934ebc\") " pod="openstack/manila-share-share1-0" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.965088 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxfdn\" (UniqueName: \"kubernetes.io/projected/f3e1d3e9-c984-442b-8a77-28b88a934ebc-kube-api-access-zxfdn\") pod \"manila-share-share1-0\" (UID: \"f3e1d3e9-c984-442b-8a77-28b88a934ebc\") " pod="openstack/manila-share-share1-0" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.965132 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3e1d3e9-c984-442b-8a77-28b88a934ebc-config-data\") pod \"manila-share-share1-0\" (UID: \"f3e1d3e9-c984-442b-8a77-28b88a934ebc\") " pod="openstack/manila-share-share1-0" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.965206 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/f3e1d3e9-c984-442b-8a77-28b88a934ebc-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"f3e1d3e9-c984-442b-8a77-28b88a934ebc\") " pod="openstack/manila-share-share1-0" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.965226 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3e1d3e9-c984-442b-8a77-28b88a934ebc-scripts\") pod \"manila-share-share1-0\" (UID: \"f3e1d3e9-c984-442b-8a77-28b88a934ebc\") " pod="openstack/manila-share-share1-0" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.965319 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/f3e1d3e9-c984-442b-8a77-28b88a934ebc-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"f3e1d3e9-c984-442b-8a77-28b88a934ebc\") " pod="openstack/manila-share-share1-0" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.965941 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f3e1d3e9-c984-442b-8a77-28b88a934ebc-ceph\") pod \"manila-share-share1-0\" (UID: \"f3e1d3e9-c984-442b-8a77-28b88a934ebc\") " pod="openstack/manila-share-share1-0" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.965965 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f3e1d3e9-c984-442b-8a77-28b88a934ebc-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"f3e1d3e9-c984-442b-8a77-28b88a934ebc\") " pod="openstack/manila-share-share1-0" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.966038 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3e1d3e9-c984-442b-8a77-28b88a934ebc-config-data-custom\") pod \"manila-share-share1-0\" (UID: 
\"f3e1d3e9-c984-442b-8a77-28b88a934ebc\") " pod="openstack/manila-share-share1-0" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.966087 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f3e1d3e9-c984-442b-8a77-28b88a934ebc-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"f3e1d3e9-c984-442b-8a77-28b88a934ebc\") " pod="openstack/manila-share-share1-0" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.966093 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3e1d3e9-c984-442b-8a77-28b88a934ebc-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"f3e1d3e9-c984-442b-8a77-28b88a934ebc\") " pod="openstack/manila-share-share1-0" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.968665 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3e1d3e9-c984-442b-8a77-28b88a934ebc-scripts\") pod \"manila-share-share1-0\" (UID: \"f3e1d3e9-c984-442b-8a77-28b88a934ebc\") " pod="openstack/manila-share-share1-0" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.969107 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3e1d3e9-c984-442b-8a77-28b88a934ebc-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"f3e1d3e9-c984-442b-8a77-28b88a934ebc\") " pod="openstack/manila-share-share1-0" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.969440 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f3e1d3e9-c984-442b-8a77-28b88a934ebc-ceph\") pod \"manila-share-share1-0\" (UID: \"f3e1d3e9-c984-442b-8a77-28b88a934ebc\") " pod="openstack/manila-share-share1-0" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.969678 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3e1d3e9-c984-442b-8a77-28b88a934ebc-config-data\") pod \"manila-share-share1-0\" (UID: \"f3e1d3e9-c984-442b-8a77-28b88a934ebc\") " pod="openstack/manila-share-share1-0" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.969927 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3e1d3e9-c984-442b-8a77-28b88a934ebc-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"f3e1d3e9-c984-442b-8a77-28b88a934ebc\") " pod="openstack/manila-share-share1-0" Jan 22 09:49:26 crc kubenswrapper[4811]: I0122 09:49:26.988033 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxfdn\" (UniqueName: \"kubernetes.io/projected/f3e1d3e9-c984-442b-8a77-28b88a934ebc-kube-api-access-zxfdn\") pod \"manila-share-share1-0\" (UID: \"f3e1d3e9-c984-442b-8a77-28b88a934ebc\") " pod="openstack/manila-share-share1-0" Jan 22 09:49:27 crc kubenswrapper[4811]: I0122 09:49:27.101919 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 22 09:49:27 crc kubenswrapper[4811]: I0122 09:49:27.625197 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 22 09:49:27 crc kubenswrapper[4811]: W0122 09:49:27.632553 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3e1d3e9_c984_442b_8a77_28b88a934ebc.slice/crio-0eef4bc88ffeebd5060f8cf7b5e687f8f41172df664e942d7211b58509e274ff WatchSource:0}: Error finding container 0eef4bc88ffeebd5060f8cf7b5e687f8f41172df664e942d7211b58509e274ff: Status 404 returned error can't find the container with id 0eef4bc88ffeebd5060f8cf7b5e687f8f41172df664e942d7211b58509e274ff Jan 22 09:49:28 crc kubenswrapper[4811]: I0122 09:49:28.002797 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0326dd33-6447-432e-9f9f-1ee950f5c82d" path="/var/lib/kubelet/pods/0326dd33-6447-432e-9f9f-1ee950f5c82d/volumes" Jan 22 09:49:28 crc kubenswrapper[4811]: I0122 09:49:28.444365 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"f3e1d3e9-c984-442b-8a77-28b88a934ebc","Type":"ContainerStarted","Data":"3f7263ad86490dd6cfef926ce837003bec1433481b50e287d82034b9823b77df"} Jan 22 09:49:28 crc kubenswrapper[4811]: I0122 09:49:28.444602 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"f3e1d3e9-c984-442b-8a77-28b88a934ebc","Type":"ContainerStarted","Data":"ad47c7a66594ce4283534ec8810ca09f7ab44493003a33ee11c93f1c71dad10f"} Jan 22 09:49:28 crc kubenswrapper[4811]: I0122 09:49:28.444616 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"f3e1d3e9-c984-442b-8a77-28b88a934ebc","Type":"ContainerStarted","Data":"0eef4bc88ffeebd5060f8cf7b5e687f8f41172df664e942d7211b58509e274ff"} Jan 22 09:49:28 crc kubenswrapper[4811]: I0122 09:49:28.470878 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=2.470857327 podStartE2EDuration="2.470857327s" podCreationTimestamp="2026-01-22 09:49:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 09:49:28.466424319 +0000 UTC m=+2612.788611442" watchObservedRunningTime="2026-01-22 09:49:28.470857327 +0000 UTC m=+2612.793044450" Jan 22 09:49:28 crc kubenswrapper[4811]: I0122 09:49:28.728938 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Jan 22 09:49:34 crc kubenswrapper[4811]: I0122 09:49:34.207416 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-d6c54dc74-r2gjr" podUID="660d9785-0f9b-4953-af76-580ed227c244" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.241:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.241:8443: connect: connection refused" Jan 22 09:49:34 crc kubenswrapper[4811]: I0122 09:49:34.208334 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-d6c54dc74-r2gjr" Jan 22 09:49:35 crc kubenswrapper[4811]: I0122 09:49:35.500993 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Jan 22 09:49:35 crc kubenswrapper[4811]: I0122 09:49:35.501332 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:49:37 crc kubenswrapper[4811]: I0122 09:49:37.102297 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Jan 22 09:49:39 crc kubenswrapper[4811]: I0122 09:49:39.552763 4811 generic.go:334] "Generic (PLEG): container finished" podID="660d9785-0f9b-4953-af76-580ed227c244" containerID="015e311ea5ef98e58145ed177feb6471c51ef5a722b72a6005d3bb6fbf8d8201" exitCode=137 Jan 22 09:49:39 crc kubenswrapper[4811]: I0122 09:49:39.553011 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d6c54dc74-r2gjr" event={"ID":"660d9785-0f9b-4953-af76-580ed227c244","Type":"ContainerDied","Data":"015e311ea5ef98e58145ed177feb6471c51ef5a722b72a6005d3bb6fbf8d8201"} Jan 22 09:49:39 crc kubenswrapper[4811]: I0122 09:49:39.670975 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-d6c54dc74-r2gjr" Jan 22 09:49:39 crc kubenswrapper[4811]: I0122 09:49:39.763488 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4dj8\" (UniqueName: \"kubernetes.io/projected/660d9785-0f9b-4953-af76-580ed227c244-kube-api-access-b4dj8\") pod \"660d9785-0f9b-4953-af76-580ed227c244\" (UID: \"660d9785-0f9b-4953-af76-580ed227c244\") " Jan 22 09:49:39 crc kubenswrapper[4811]: I0122 09:49:39.763564 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/660d9785-0f9b-4953-af76-580ed227c244-horizon-secret-key\") pod \"660d9785-0f9b-4953-af76-580ed227c244\" (UID: \"660d9785-0f9b-4953-af76-580ed227c244\") " Jan 22 09:49:39 crc kubenswrapper[4811]: I0122 09:49:39.763863 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/660d9785-0f9b-4953-af76-580ed227c244-combined-ca-bundle\") pod \"660d9785-0f9b-4953-af76-580ed227c244\" (UID: \"660d9785-0f9b-4953-af76-580ed227c244\") " Jan 22 09:49:39 crc kubenswrapper[4811]: I0122 09:49:39.763893 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/660d9785-0f9b-4953-af76-580ed227c244-logs\") pod \"660d9785-0f9b-4953-af76-580ed227c244\" (UID: \"660d9785-0f9b-4953-af76-580ed227c244\") " Jan 22 09:49:39 crc kubenswrapper[4811]: I0122 09:49:39.763918 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/660d9785-0f9b-4953-af76-580ed227c244-horizon-tls-certs\") pod \"660d9785-0f9b-4953-af76-580ed227c244\" (UID: \"660d9785-0f9b-4953-af76-580ed227c244\") " Jan 22 09:49:39 crc kubenswrapper[4811]: I0122 09:49:39.763943 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/660d9785-0f9b-4953-af76-580ed227c244-config-data\") pod \"660d9785-0f9b-4953-af76-580ed227c244\" (UID: \"660d9785-0f9b-4953-af76-580ed227c244\") " Jan 22 09:49:39 crc kubenswrapper[4811]: I0122 09:49:39.764748 4811 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/660d9785-0f9b-4953-af76-580ed227c244-scripts\") pod \"660d9785-0f9b-4953-af76-580ed227c244\" (UID: \"660d9785-0f9b-4953-af76-580ed227c244\") " Jan 22 09:49:39 crc kubenswrapper[4811]: I0122 09:49:39.765286 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/660d9785-0f9b-4953-af76-580ed227c244-logs" (OuterVolumeSpecName: "logs") pod "660d9785-0f9b-4953-af76-580ed227c244" (UID: "660d9785-0f9b-4953-af76-580ed227c244"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:49:39 crc kubenswrapper[4811]: I0122 09:49:39.766199 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/660d9785-0f9b-4953-af76-580ed227c244-logs\") on node \"crc\" DevicePath \"\"" Jan 22 09:49:39 crc kubenswrapper[4811]: I0122 09:49:39.772878 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/660d9785-0f9b-4953-af76-580ed227c244-kube-api-access-b4dj8" (OuterVolumeSpecName: "kube-api-access-b4dj8") pod "660d9785-0f9b-4953-af76-580ed227c244" (UID: "660d9785-0f9b-4953-af76-580ed227c244"). InnerVolumeSpecName "kube-api-access-b4dj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:49:39 crc kubenswrapper[4811]: I0122 09:49:39.783772 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/660d9785-0f9b-4953-af76-580ed227c244-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "660d9785-0f9b-4953-af76-580ed227c244" (UID: "660d9785-0f9b-4953-af76-580ed227c244"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:49:39 crc kubenswrapper[4811]: I0122 09:49:39.789403 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/660d9785-0f9b-4953-af76-580ed227c244-scripts" (OuterVolumeSpecName: "scripts") pod "660d9785-0f9b-4953-af76-580ed227c244" (UID: "660d9785-0f9b-4953-af76-580ed227c244"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:49:39 crc kubenswrapper[4811]: I0122 09:49:39.794144 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/660d9785-0f9b-4953-af76-580ed227c244-config-data" (OuterVolumeSpecName: "config-data") pod "660d9785-0f9b-4953-af76-580ed227c244" (UID: "660d9785-0f9b-4953-af76-580ed227c244"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 09:49:39 crc kubenswrapper[4811]: I0122 09:49:39.798567 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/660d9785-0f9b-4953-af76-580ed227c244-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "660d9785-0f9b-4953-af76-580ed227c244" (UID: "660d9785-0f9b-4953-af76-580ed227c244"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:49:39 crc kubenswrapper[4811]: I0122 09:49:39.806950 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/660d9785-0f9b-4953-af76-580ed227c244-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "660d9785-0f9b-4953-af76-580ed227c244" (UID: "660d9785-0f9b-4953-af76-580ed227c244"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 09:49:39 crc kubenswrapper[4811]: I0122 09:49:39.868367 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/660d9785-0f9b-4953-af76-580ed227c244-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 09:49:39 crc kubenswrapper[4811]: I0122 09:49:39.868689 4811 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/660d9785-0f9b-4953-af76-580ed227c244-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 09:49:39 crc kubenswrapper[4811]: I0122 09:49:39.868777 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/660d9785-0f9b-4953-af76-580ed227c244-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 09:49:39 crc kubenswrapper[4811]: I0122 09:49:39.868841 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/660d9785-0f9b-4953-af76-580ed227c244-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 09:49:39 crc kubenswrapper[4811]: I0122 09:49:39.868906 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4dj8\" (UniqueName: \"kubernetes.io/projected/660d9785-0f9b-4953-af76-580ed227c244-kube-api-access-b4dj8\") on node \"crc\" DevicePath \"\"" Jan 22 09:49:39 crc kubenswrapper[4811]: I0122 09:49:39.868967 4811 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/660d9785-0f9b-4953-af76-580ed227c244-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 22 09:49:39 crc kubenswrapper[4811]: I0122 09:49:39.911562 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Jan 22 09:49:40 crc kubenswrapper[4811]: I0122 09:49:40.566155 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d6c54dc74-r2gjr" event={"ID":"660d9785-0f9b-4953-af76-580ed227c244","Type":"ContainerDied","Data":"78dadc9b007792aea77b34db856205d08cf7d3e2e9185bd629f26f6e76ca6608"} Jan 22 09:49:40 crc kubenswrapper[4811]: I0122 09:49:40.566227 4811 scope.go:117] "RemoveContainer" containerID="d932f2adbda59824841f6cb86225f24b2e877e9f69ebadd4e657ce3e01adf42c" Jan 22 09:49:40 crc kubenswrapper[4811]: I0122 09:49:40.566261 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-d6c54dc74-r2gjr" Jan 22 09:49:40 crc kubenswrapper[4811]: I0122 09:49:40.589522 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-d6c54dc74-r2gjr"] Jan 22 09:49:40 crc kubenswrapper[4811]: I0122 09:49:40.596774 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-d6c54dc74-r2gjr"] Jan 22 09:49:40 crc kubenswrapper[4811]: I0122 09:49:40.713240 4811 scope.go:117] "RemoveContainer" containerID="015e311ea5ef98e58145ed177feb6471c51ef5a722b72a6005d3bb6fbf8d8201" Jan 22 09:49:42 crc kubenswrapper[4811]: I0122 09:49:42.003153 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="660d9785-0f9b-4953-af76-580ed227c244" path="/var/lib/kubelet/pods/660d9785-0f9b-4953-af76-580ed227c244/volumes" Jan 22 09:49:42 crc kubenswrapper[4811]: I0122 09:49:42.684844 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 22 09:49:48 crc kubenswrapper[4811]: I0122 09:49:48.307277 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Jan 22 09:50:05 crc kubenswrapper[4811]: I0122 09:50:05.501508 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:50:05 crc kubenswrapper[4811]: I0122 09:50:05.501906 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:50:35 crc kubenswrapper[4811]: I0122 09:50:35.501323 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:50:35 crc kubenswrapper[4811]: I0122 09:50:35.501708 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:50:35 crc kubenswrapper[4811]: I0122 09:50:35.501751 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" Jan 22 09:50:35 crc kubenswrapper[4811]: I0122 09:50:35.502280 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"583eec57819133c8760804210a11b9e588774eddca3a7b248d1500229de8ad12"} pod="openshift-machine-config-operator/machine-config-daemon-txvcq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 09:50:35 crc kubenswrapper[4811]: I0122 09:50:35.502332 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" 
containerName="machine-config-daemon" containerID="cri-o://583eec57819133c8760804210a11b9e588774eddca3a7b248d1500229de8ad12" gracePeriod=600 Jan 22 09:50:35 crc kubenswrapper[4811]: I0122 09:50:35.973303 4811 generic.go:334] "Generic (PLEG): container finished" podID="84068a6b-e189-419b-87f5-f31428f6eafe" containerID="583eec57819133c8760804210a11b9e588774eddca3a7b248d1500229de8ad12" exitCode=0 Jan 22 09:50:35 crc kubenswrapper[4811]: I0122 09:50:35.973380 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" event={"ID":"84068a6b-e189-419b-87f5-f31428f6eafe","Type":"ContainerDied","Data":"583eec57819133c8760804210a11b9e588774eddca3a7b248d1500229de8ad12"} Jan 22 09:50:35 crc kubenswrapper[4811]: I0122 09:50:35.973514 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" event={"ID":"84068a6b-e189-419b-87f5-f31428f6eafe","Type":"ContainerStarted","Data":"27d99ddba75ace4f5b92aada119cc94fd0acd38a4e4c54101859524605bfeb4c"} Jan 22 09:50:35 crc kubenswrapper[4811]: I0122 09:50:35.973537 4811 scope.go:117] "RemoveContainer" containerID="a3eaf3fca355ac62b01468032411a18e7dabb820be02232f4becb3d78ad2b686" Jan 22 09:50:41 crc kubenswrapper[4811]: I0122 09:50:41.581202 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 22 09:50:41 crc kubenswrapper[4811]: E0122 09:50:41.582122 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="660d9785-0f9b-4953-af76-580ed227c244" containerName="horizon-log" Jan 22 09:50:41 crc kubenswrapper[4811]: I0122 09:50:41.582135 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="660d9785-0f9b-4953-af76-580ed227c244" containerName="horizon-log" Jan 22 09:50:41 crc kubenswrapper[4811]: E0122 09:50:41.582147 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="660d9785-0f9b-4953-af76-580ed227c244" containerName="horizon" Jan 22 09:50:41 crc kubenswrapper[4811]: I0122 09:50:41.582154 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="660d9785-0f9b-4953-af76-580ed227c244" containerName="horizon" Jan 22 09:50:41 crc kubenswrapper[4811]: I0122 09:50:41.582297 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="660d9785-0f9b-4953-af76-580ed227c244" containerName="horizon" Jan 22 09:50:41 crc kubenswrapper[4811]: I0122 09:50:41.582310 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="660d9785-0f9b-4953-af76-580ed227c244" containerName="horizon-log" Jan 22 09:50:41 crc kubenswrapper[4811]: I0122 09:50:41.584074 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 22 09:50:41 crc kubenswrapper[4811]: I0122 09:50:41.590327 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 22 09:50:41 crc kubenswrapper[4811]: I0122 09:50:41.590559 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 22 09:50:41 crc kubenswrapper[4811]: I0122 09:50:41.590672 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-h6mm9" Jan 22 09:50:41 crc kubenswrapper[4811]: I0122 09:50:41.590924 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 22 09:50:41 crc kubenswrapper[4811]: I0122 09:50:41.591068 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 22 09:50:41 crc kubenswrapper[4811]: I0122 09:50:41.702437 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eb23b893-6bb1-4d84-bb05-09c701024b37-config-data\") pod \"tempest-tests-tempest\" (UID: \"eb23b893-6bb1-4d84-bb05-09c701024b37\") " pod="openstack/tempest-tests-tempest" Jan 22 09:50:41 crc kubenswrapper[4811]: I0122 09:50:41.702592 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/eb23b893-6bb1-4d84-bb05-09c701024b37-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"eb23b893-6bb1-4d84-bb05-09c701024b37\") " pod="openstack/tempest-tests-tempest" Jan 22 09:50:41 crc kubenswrapper[4811]: I0122 09:50:41.702704 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/eb23b893-6bb1-4d84-bb05-09c701024b37-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"eb23b893-6bb1-4d84-bb05-09c701024b37\") " pod="openstack/tempest-tests-tempest" Jan 22 09:50:41 crc kubenswrapper[4811]: I0122 09:50:41.702751 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/eb23b893-6bb1-4d84-bb05-09c701024b37-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"eb23b893-6bb1-4d84-bb05-09c701024b37\") " pod="openstack/tempest-tests-tempest" Jan 22 09:50:41 crc kubenswrapper[4811]: I0122 09:50:41.702787 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/eb23b893-6bb1-4d84-bb05-09c701024b37-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"eb23b893-6bb1-4d84-bb05-09c701024b37\") " pod="openstack/tempest-tests-tempest" Jan 22 09:50:41 crc kubenswrapper[4811]: I0122 09:50:41.702815 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb23b893-6bb1-4d84-bb05-09c701024b37-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"eb23b893-6bb1-4d84-bb05-09c701024b37\") " pod="openstack/tempest-tests-tempest" Jan 22 09:50:41 crc kubenswrapper[4811]: I0122 09:50:41.702894 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9pp7\" (UniqueName: 
\"kubernetes.io/projected/eb23b893-6bb1-4d84-bb05-09c701024b37-kube-api-access-w9pp7\") pod \"tempest-tests-tempest\" (UID: \"eb23b893-6bb1-4d84-bb05-09c701024b37\") " pod="openstack/tempest-tests-tempest" Jan 22 09:50:41 crc kubenswrapper[4811]: I0122 09:50:41.702944 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"eb23b893-6bb1-4d84-bb05-09c701024b37\") " pod="openstack/tempest-tests-tempest" Jan 22 09:50:41 crc kubenswrapper[4811]: I0122 09:50:41.703015 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/eb23b893-6bb1-4d84-bb05-09c701024b37-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"eb23b893-6bb1-4d84-bb05-09c701024b37\") " pod="openstack/tempest-tests-tempest" Jan 22 09:50:41 crc kubenswrapper[4811]: I0122 09:50:41.804114 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eb23b893-6bb1-4d84-bb05-09c701024b37-config-data\") pod \"tempest-tests-tempest\" (UID: \"eb23b893-6bb1-4d84-bb05-09c701024b37\") " pod="openstack/tempest-tests-tempest" Jan 22 09:50:41 crc kubenswrapper[4811]: I0122 09:50:41.804190 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/eb23b893-6bb1-4d84-bb05-09c701024b37-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"eb23b893-6bb1-4d84-bb05-09c701024b37\") " pod="openstack/tempest-tests-tempest" Jan 22 09:50:41 crc kubenswrapper[4811]: I0122 09:50:41.804216 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/eb23b893-6bb1-4d84-bb05-09c701024b37-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"eb23b893-6bb1-4d84-bb05-09c701024b37\") " pod="openstack/tempest-tests-tempest" Jan 22 09:50:41 crc kubenswrapper[4811]: I0122 09:50:41.804238 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/eb23b893-6bb1-4d84-bb05-09c701024b37-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"eb23b893-6bb1-4d84-bb05-09c701024b37\") " pod="openstack/tempest-tests-tempest" Jan 22 09:50:41 crc kubenswrapper[4811]: I0122 09:50:41.804260 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/eb23b893-6bb1-4d84-bb05-09c701024b37-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"eb23b893-6bb1-4d84-bb05-09c701024b37\") " pod="openstack/tempest-tests-tempest" Jan 22 09:50:41 crc kubenswrapper[4811]: I0122 09:50:41.804276 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb23b893-6bb1-4d84-bb05-09c701024b37-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"eb23b893-6bb1-4d84-bb05-09c701024b37\") " pod="openstack/tempest-tests-tempest" Jan 22 09:50:41 crc kubenswrapper[4811]: I0122 09:50:41.804307 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9pp7\" (UniqueName: \"kubernetes.io/projected/eb23b893-6bb1-4d84-bb05-09c701024b37-kube-api-access-w9pp7\") pod 
\"tempest-tests-tempest\" (UID: \"eb23b893-6bb1-4d84-bb05-09c701024b37\") " pod="openstack/tempest-tests-tempest" Jan 22 09:50:41 crc kubenswrapper[4811]: I0122 09:50:41.804334 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"eb23b893-6bb1-4d84-bb05-09c701024b37\") " pod="openstack/tempest-tests-tempest" Jan 22 09:50:41 crc kubenswrapper[4811]: I0122 09:50:41.804372 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/eb23b893-6bb1-4d84-bb05-09c701024b37-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"eb23b893-6bb1-4d84-bb05-09c701024b37\") " pod="openstack/tempest-tests-tempest" Jan 22 09:50:41 crc kubenswrapper[4811]: I0122 09:50:41.804768 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"eb23b893-6bb1-4d84-bb05-09c701024b37\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/tempest-tests-tempest" Jan 22 09:50:41 crc kubenswrapper[4811]: I0122 09:50:41.805082 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/eb23b893-6bb1-4d84-bb05-09c701024b37-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"eb23b893-6bb1-4d84-bb05-09c701024b37\") " pod="openstack/tempest-tests-tempest" Jan 22 09:50:41 crc kubenswrapper[4811]: I0122 09:50:41.805251 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/eb23b893-6bb1-4d84-bb05-09c701024b37-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"eb23b893-6bb1-4d84-bb05-09c701024b37\") " pod="openstack/tempest-tests-tempest" Jan 22 09:50:41 crc kubenswrapper[4811]: I0122 09:50:41.805327 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/eb23b893-6bb1-4d84-bb05-09c701024b37-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"eb23b893-6bb1-4d84-bb05-09c701024b37\") " pod="openstack/tempest-tests-tempest" Jan 22 09:50:41 crc kubenswrapper[4811]: I0122 09:50:41.805424 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eb23b893-6bb1-4d84-bb05-09c701024b37-config-data\") pod \"tempest-tests-tempest\" (UID: \"eb23b893-6bb1-4d84-bb05-09c701024b37\") " pod="openstack/tempest-tests-tempest" Jan 22 09:50:41 crc kubenswrapper[4811]: I0122 09:50:41.810437 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/eb23b893-6bb1-4d84-bb05-09c701024b37-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"eb23b893-6bb1-4d84-bb05-09c701024b37\") " pod="openstack/tempest-tests-tempest" Jan 22 09:50:41 crc kubenswrapper[4811]: I0122 09:50:41.810580 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/eb23b893-6bb1-4d84-bb05-09c701024b37-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"eb23b893-6bb1-4d84-bb05-09c701024b37\") " pod="openstack/tempest-tests-tempest" Jan 22 09:50:41 
crc kubenswrapper[4811]: I0122 09:50:41.811443 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb23b893-6bb1-4d84-bb05-09c701024b37-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"eb23b893-6bb1-4d84-bb05-09c701024b37\") " pod="openstack/tempest-tests-tempest" Jan 22 09:50:41 crc kubenswrapper[4811]: I0122 09:50:41.818133 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9pp7\" (UniqueName: \"kubernetes.io/projected/eb23b893-6bb1-4d84-bb05-09c701024b37-kube-api-access-w9pp7\") pod \"tempest-tests-tempest\" (UID: \"eb23b893-6bb1-4d84-bb05-09c701024b37\") " pod="openstack/tempest-tests-tempest" Jan 22 09:50:41 crc kubenswrapper[4811]: I0122 09:50:41.826454 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"eb23b893-6bb1-4d84-bb05-09c701024b37\") " pod="openstack/tempest-tests-tempest" Jan 22 09:50:41 crc kubenswrapper[4811]: I0122 09:50:41.896530 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 22 09:50:42 crc kubenswrapper[4811]: I0122 09:50:42.301445 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 22 09:50:43 crc kubenswrapper[4811]: I0122 09:50:43.028587 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"eb23b893-6bb1-4d84-bb05-09c701024b37","Type":"ContainerStarted","Data":"4c2308e19de2de4edb99c50f9a6625850f46bda0f125ad43cd11b1c8fa07fba2"} Jan 22 09:50:46 crc kubenswrapper[4811]: I0122 09:50:46.572402 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-88qmv"] Jan 22 09:50:46 crc kubenswrapper[4811]: I0122 09:50:46.576357 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-88qmv" Jan 22 09:50:46 crc kubenswrapper[4811]: I0122 09:50:46.582556 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-88qmv"] Jan 22 09:50:46 crc kubenswrapper[4811]: I0122 09:50:46.699670 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e0876f2-026f-48a5-8d69-0a0abeb47521-catalog-content\") pod \"certified-operators-88qmv\" (UID: \"5e0876f2-026f-48a5-8d69-0a0abeb47521\") " pod="openshift-marketplace/certified-operators-88qmv" Jan 22 09:50:46 crc kubenswrapper[4811]: I0122 09:50:46.699884 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn876\" (UniqueName: \"kubernetes.io/projected/5e0876f2-026f-48a5-8d69-0a0abeb47521-kube-api-access-kn876\") pod \"certified-operators-88qmv\" (UID: \"5e0876f2-026f-48a5-8d69-0a0abeb47521\") " pod="openshift-marketplace/certified-operators-88qmv" Jan 22 09:50:46 crc kubenswrapper[4811]: I0122 09:50:46.700048 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e0876f2-026f-48a5-8d69-0a0abeb47521-utilities\") pod \"certified-operators-88qmv\" (UID: \"5e0876f2-026f-48a5-8d69-0a0abeb47521\") " pod="openshift-marketplace/certified-operators-88qmv" Jan 22 09:50:46 crc kubenswrapper[4811]: I0122 09:50:46.803138 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn876\" (UniqueName: \"kubernetes.io/projected/5e0876f2-026f-48a5-8d69-0a0abeb47521-kube-api-access-kn876\") pod \"certified-operators-88qmv\" (UID: \"5e0876f2-026f-48a5-8d69-0a0abeb47521\") " pod="openshift-marketplace/certified-operators-88qmv" Jan 22 09:50:46 crc kubenswrapper[4811]: I0122 09:50:46.803268 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e0876f2-026f-48a5-8d69-0a0abeb47521-utilities\") pod \"certified-operators-88qmv\" (UID: \"5e0876f2-026f-48a5-8d69-0a0abeb47521\") " pod="openshift-marketplace/certified-operators-88qmv" Jan 22 09:50:46 crc kubenswrapper[4811]: I0122 09:50:46.803579 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e0876f2-026f-48a5-8d69-0a0abeb47521-catalog-content\") pod \"certified-operators-88qmv\" (UID: \"5e0876f2-026f-48a5-8d69-0a0abeb47521\") " pod="openshift-marketplace/certified-operators-88qmv" Jan 22 09:50:46 crc kubenswrapper[4811]: I0122 09:50:46.804327 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e0876f2-026f-48a5-8d69-0a0abeb47521-catalog-content\") pod \"certified-operators-88qmv\" (UID: \"5e0876f2-026f-48a5-8d69-0a0abeb47521\") " pod="openshift-marketplace/certified-operators-88qmv" Jan 22 09:50:46 crc kubenswrapper[4811]: I0122 09:50:46.804548 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e0876f2-026f-48a5-8d69-0a0abeb47521-utilities\") pod \"certified-operators-88qmv\" (UID: \"5e0876f2-026f-48a5-8d69-0a0abeb47521\") " pod="openshift-marketplace/certified-operators-88qmv" Jan 22 09:50:46 crc kubenswrapper[4811]: I0122 09:50:46.824121 4811 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kn876\" (UniqueName: \"kubernetes.io/projected/5e0876f2-026f-48a5-8d69-0a0abeb47521-kube-api-access-kn876\") pod \"certified-operators-88qmv\" (UID: \"5e0876f2-026f-48a5-8d69-0a0abeb47521\") " pod="openshift-marketplace/certified-operators-88qmv" Jan 22 09:50:46 crc kubenswrapper[4811]: I0122 09:50:46.904747 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-88qmv" Jan 22 09:50:50 crc kubenswrapper[4811]: I0122 09:50:50.171749 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-88qmv"] Jan 22 09:50:51 crc kubenswrapper[4811]: I0122 09:50:51.110471 4811 generic.go:334] "Generic (PLEG): container finished" podID="5e0876f2-026f-48a5-8d69-0a0abeb47521" containerID="63404b52abf7ebf6e2f0552f3dc4deb6a640bceb491757748504342db7575e82" exitCode=0 Jan 22 09:50:51 crc kubenswrapper[4811]: I0122 09:50:51.110551 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-88qmv" event={"ID":"5e0876f2-026f-48a5-8d69-0a0abeb47521","Type":"ContainerDied","Data":"63404b52abf7ebf6e2f0552f3dc4deb6a640bceb491757748504342db7575e82"} Jan 22 09:50:51 crc kubenswrapper[4811]: I0122 09:50:51.110911 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-88qmv" event={"ID":"5e0876f2-026f-48a5-8d69-0a0abeb47521","Type":"ContainerStarted","Data":"03a569273584cba90dba0b52f52bed9f5dcd66fcccaa6867033715fb141b70cc"} Jan 22 09:50:52 crc kubenswrapper[4811]: I0122 09:50:52.120207 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-88qmv" event={"ID":"5e0876f2-026f-48a5-8d69-0a0abeb47521","Type":"ContainerStarted","Data":"3931303153d06efec9bba845886da1f3e6cf56540fa8275725979c4d3ccdc699"} Jan 22 09:50:53 crc kubenswrapper[4811]: I0122 09:50:53.129751 4811 generic.go:334] "Generic (PLEG): container finished" podID="5e0876f2-026f-48a5-8d69-0a0abeb47521" containerID="3931303153d06efec9bba845886da1f3e6cf56540fa8275725979c4d3ccdc699" exitCode=0 Jan 22 09:50:53 crc kubenswrapper[4811]: I0122 09:50:53.129836 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-88qmv" event={"ID":"5e0876f2-026f-48a5-8d69-0a0abeb47521","Type":"ContainerDied","Data":"3931303153d06efec9bba845886da1f3e6cf56540fa8275725979c4d3ccdc699"} Jan 22 09:51:10 crc kubenswrapper[4811]: E0122 09:51:10.888769 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 22 09:51:10 crc kubenswrapper[4811]: E0122 09:51:10.898080 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w9pp7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(eb23b893-6bb1-4d84-bb05-09c701024b37): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 09:51:10 crc kubenswrapper[4811]: E0122 09:51:10.899698 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" 
podUID="eb23b893-6bb1-4d84-bb05-09c701024b37" Jan 22 09:51:11 crc kubenswrapper[4811]: I0122 09:51:11.270220 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-88qmv" event={"ID":"5e0876f2-026f-48a5-8d69-0a0abeb47521","Type":"ContainerStarted","Data":"e117a269b1cee4dd85b857779af46bce81ef87aa92310dd87babd25cea09b93f"} Jan 22 09:51:11 crc kubenswrapper[4811]: E0122 09:51:11.271742 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="eb23b893-6bb1-4d84-bb05-09c701024b37" Jan 22 09:51:11 crc kubenswrapper[4811]: I0122 09:51:11.288758 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-88qmv" podStartSLOduration=5.553757914 podStartE2EDuration="25.28874119s" podCreationTimestamp="2026-01-22 09:50:46 +0000 UTC" firstStartedPulling="2026-01-22 09:50:51.113784286 +0000 UTC m=+2695.435971409" lastFinishedPulling="2026-01-22 09:51:10.848767561 +0000 UTC m=+2715.170954685" observedRunningTime="2026-01-22 09:51:11.284446753 +0000 UTC m=+2715.606633876" watchObservedRunningTime="2026-01-22 09:51:11.28874119 +0000 UTC m=+2715.610928313" Jan 22 09:51:16 crc kubenswrapper[4811]: I0122 09:51:16.905688 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-88qmv" Jan 22 09:51:16 crc kubenswrapper[4811]: I0122 09:51:16.906525 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-88qmv" Jan 22 09:51:16 crc kubenswrapper[4811]: I0122 09:51:16.946062 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-88qmv" Jan 22 09:51:17 crc kubenswrapper[4811]: I0122 09:51:17.362200 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-88qmv" Jan 22 09:51:17 crc kubenswrapper[4811]: I0122 09:51:17.774530 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-88qmv"] Jan 22 09:51:19 crc kubenswrapper[4811]: I0122 09:51:19.355157 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-88qmv" podUID="5e0876f2-026f-48a5-8d69-0a0abeb47521" containerName="registry-server" containerID="cri-o://e117a269b1cee4dd85b857779af46bce81ef87aa92310dd87babd25cea09b93f" gracePeriod=2 Jan 22 09:51:19 crc kubenswrapper[4811]: I0122 09:51:19.903570 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-88qmv" Jan 22 09:51:19 crc kubenswrapper[4811]: I0122 09:51:19.970935 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e0876f2-026f-48a5-8d69-0a0abeb47521-utilities\") pod \"5e0876f2-026f-48a5-8d69-0a0abeb47521\" (UID: \"5e0876f2-026f-48a5-8d69-0a0abeb47521\") " Jan 22 09:51:19 crc kubenswrapper[4811]: I0122 09:51:19.971163 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kn876\" (UniqueName: \"kubernetes.io/projected/5e0876f2-026f-48a5-8d69-0a0abeb47521-kube-api-access-kn876\") pod \"5e0876f2-026f-48a5-8d69-0a0abeb47521\" (UID: \"5e0876f2-026f-48a5-8d69-0a0abeb47521\") " Jan 22 09:51:19 crc kubenswrapper[4811]: I0122 09:51:19.971222 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e0876f2-026f-48a5-8d69-0a0abeb47521-catalog-content\") pod \"5e0876f2-026f-48a5-8d69-0a0abeb47521\" (UID: \"5e0876f2-026f-48a5-8d69-0a0abeb47521\") " Jan 22 09:51:19 crc kubenswrapper[4811]: I0122 09:51:19.974195 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e0876f2-026f-48a5-8d69-0a0abeb47521-utilities" (OuterVolumeSpecName: "utilities") pod "5e0876f2-026f-48a5-8d69-0a0abeb47521" (UID: "5e0876f2-026f-48a5-8d69-0a0abeb47521"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:51:19 crc kubenswrapper[4811]: I0122 09:51:19.978441 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e0876f2-026f-48a5-8d69-0a0abeb47521-kube-api-access-kn876" (OuterVolumeSpecName: "kube-api-access-kn876") pod "5e0876f2-026f-48a5-8d69-0a0abeb47521" (UID: "5e0876f2-026f-48a5-8d69-0a0abeb47521"). InnerVolumeSpecName "kube-api-access-kn876". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:51:20 crc kubenswrapper[4811]: I0122 09:51:20.024362 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e0876f2-026f-48a5-8d69-0a0abeb47521-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5e0876f2-026f-48a5-8d69-0a0abeb47521" (UID: "5e0876f2-026f-48a5-8d69-0a0abeb47521"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:51:20 crc kubenswrapper[4811]: I0122 09:51:20.076112 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e0876f2-026f-48a5-8d69-0a0abeb47521-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:51:20 crc kubenswrapper[4811]: I0122 09:51:20.076234 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kn876\" (UniqueName: \"kubernetes.io/projected/5e0876f2-026f-48a5-8d69-0a0abeb47521-kube-api-access-kn876\") on node \"crc\" DevicePath \"\"" Jan 22 09:51:20 crc kubenswrapper[4811]: I0122 09:51:20.076308 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e0876f2-026f-48a5-8d69-0a0abeb47521-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:51:20 crc kubenswrapper[4811]: I0122 09:51:20.366091 4811 generic.go:334] "Generic (PLEG): container finished" podID="5e0876f2-026f-48a5-8d69-0a0abeb47521" containerID="e117a269b1cee4dd85b857779af46bce81ef87aa92310dd87babd25cea09b93f" exitCode=0 Jan 22 09:51:20 crc kubenswrapper[4811]: I0122 09:51:20.366139 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-88qmv" event={"ID":"5e0876f2-026f-48a5-8d69-0a0abeb47521","Type":"ContainerDied","Data":"e117a269b1cee4dd85b857779af46bce81ef87aa92310dd87babd25cea09b93f"} Jan 22 09:51:20 crc kubenswrapper[4811]: I0122 09:51:20.366170 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-88qmv" event={"ID":"5e0876f2-026f-48a5-8d69-0a0abeb47521","Type":"ContainerDied","Data":"03a569273584cba90dba0b52f52bed9f5dcd66fcccaa6867033715fb141b70cc"} Jan 22 09:51:20 crc kubenswrapper[4811]: I0122 09:51:20.366191 4811 scope.go:117] "RemoveContainer" containerID="e117a269b1cee4dd85b857779af46bce81ef87aa92310dd87babd25cea09b93f" Jan 22 09:51:20 crc kubenswrapper[4811]: I0122 09:51:20.366328 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-88qmv" Jan 22 09:51:20 crc kubenswrapper[4811]: I0122 09:51:20.394749 4811 scope.go:117] "RemoveContainer" containerID="3931303153d06efec9bba845886da1f3e6cf56540fa8275725979c4d3ccdc699" Jan 22 09:51:20 crc kubenswrapper[4811]: I0122 09:51:20.412665 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-88qmv"] Jan 22 09:51:20 crc kubenswrapper[4811]: E0122 09:51:20.427116 4811 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e0876f2_026f_48a5_8d69_0a0abeb47521.slice\": RecentStats: unable to find data in memory cache]" Jan 22 09:51:20 crc kubenswrapper[4811]: I0122 09:51:20.432772 4811 scope.go:117] "RemoveContainer" containerID="63404b52abf7ebf6e2f0552f3dc4deb6a640bceb491757748504342db7575e82" Jan 22 09:51:20 crc kubenswrapper[4811]: I0122 09:51:20.434033 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-88qmv"] Jan 22 09:51:20 crc kubenswrapper[4811]: I0122 09:51:20.463030 4811 scope.go:117] "RemoveContainer" containerID="e117a269b1cee4dd85b857779af46bce81ef87aa92310dd87babd25cea09b93f" Jan 22 09:51:20 crc kubenswrapper[4811]: E0122 09:51:20.466743 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e117a269b1cee4dd85b857779af46bce81ef87aa92310dd87babd25cea09b93f\": container with ID starting with e117a269b1cee4dd85b857779af46bce81ef87aa92310dd87babd25cea09b93f not found: ID does not exist" containerID="e117a269b1cee4dd85b857779af46bce81ef87aa92310dd87babd25cea09b93f" Jan 22 09:51:20 crc kubenswrapper[4811]: I0122 09:51:20.466784 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e117a269b1cee4dd85b857779af46bce81ef87aa92310dd87babd25cea09b93f"} err="failed to get container status \"e117a269b1cee4dd85b857779af46bce81ef87aa92310dd87babd25cea09b93f\": rpc error: code = NotFound desc = could not find container \"e117a269b1cee4dd85b857779af46bce81ef87aa92310dd87babd25cea09b93f\": container with ID starting with e117a269b1cee4dd85b857779af46bce81ef87aa92310dd87babd25cea09b93f not found: ID does not exist" Jan 22 09:51:20 crc kubenswrapper[4811]: I0122 09:51:20.466810 4811 scope.go:117] "RemoveContainer" containerID="3931303153d06efec9bba845886da1f3e6cf56540fa8275725979c4d3ccdc699" Jan 22 09:51:20 crc kubenswrapper[4811]: E0122 09:51:20.467134 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3931303153d06efec9bba845886da1f3e6cf56540fa8275725979c4d3ccdc699\": container with ID starting with 3931303153d06efec9bba845886da1f3e6cf56540fa8275725979c4d3ccdc699 not found: ID does not exist" containerID="3931303153d06efec9bba845886da1f3e6cf56540fa8275725979c4d3ccdc699" Jan 22 09:51:20 crc kubenswrapper[4811]: I0122 09:51:20.467165 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3931303153d06efec9bba845886da1f3e6cf56540fa8275725979c4d3ccdc699"} err="failed to get container status \"3931303153d06efec9bba845886da1f3e6cf56540fa8275725979c4d3ccdc699\": rpc error: code = NotFound desc = could not find container \"3931303153d06efec9bba845886da1f3e6cf56540fa8275725979c4d3ccdc699\": container with ID starting with 
3931303153d06efec9bba845886da1f3e6cf56540fa8275725979c4d3ccdc699 not found: ID does not exist" Jan 22 09:51:20 crc kubenswrapper[4811]: I0122 09:51:20.467186 4811 scope.go:117] "RemoveContainer" containerID="63404b52abf7ebf6e2f0552f3dc4deb6a640bceb491757748504342db7575e82" Jan 22 09:51:20 crc kubenswrapper[4811]: E0122 09:51:20.467477 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63404b52abf7ebf6e2f0552f3dc4deb6a640bceb491757748504342db7575e82\": container with ID starting with 63404b52abf7ebf6e2f0552f3dc4deb6a640bceb491757748504342db7575e82 not found: ID does not exist" containerID="63404b52abf7ebf6e2f0552f3dc4deb6a640bceb491757748504342db7575e82" Jan 22 09:51:20 crc kubenswrapper[4811]: I0122 09:51:20.467502 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63404b52abf7ebf6e2f0552f3dc4deb6a640bceb491757748504342db7575e82"} err="failed to get container status \"63404b52abf7ebf6e2f0552f3dc4deb6a640bceb491757748504342db7575e82\": rpc error: code = NotFound desc = could not find container \"63404b52abf7ebf6e2f0552f3dc4deb6a640bceb491757748504342db7575e82\": container with ID starting with 63404b52abf7ebf6e2f0552f3dc4deb6a640bceb491757748504342db7575e82 not found: ID does not exist" Jan 22 09:51:22 crc kubenswrapper[4811]: I0122 09:51:22.001392 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e0876f2-026f-48a5-8d69-0a0abeb47521" path="/var/lib/kubelet/pods/5e0876f2-026f-48a5-8d69-0a0abeb47521/volumes" Jan 22 09:51:23 crc kubenswrapper[4811]: I0122 09:51:23.514527 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 22 09:51:25 crc kubenswrapper[4811]: I0122 09:51:25.411595 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"eb23b893-6bb1-4d84-bb05-09c701024b37","Type":"ContainerStarted","Data":"0c3ba8e7a7044b4324c75965e2c29fb239bd9fc702eb8bfea55d006a5513ee9c"} Jan 22 09:51:25 crc kubenswrapper[4811]: I0122 09:51:25.444969 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.243637989 podStartE2EDuration="45.444946584s" podCreationTimestamp="2026-01-22 09:50:40 +0000 UTC" firstStartedPulling="2026-01-22 09:50:42.310462364 +0000 UTC m=+2686.632649487" lastFinishedPulling="2026-01-22 09:51:23.511770959 +0000 UTC m=+2727.833958082" observedRunningTime="2026-01-22 09:51:25.442607996 +0000 UTC m=+2729.764795118" watchObservedRunningTime="2026-01-22 09:51:25.444946584 +0000 UTC m=+2729.767133707" Jan 22 09:52:35 crc kubenswrapper[4811]: I0122 09:52:35.501154 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:52:35 crc kubenswrapper[4811]: I0122 09:52:35.501511 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:52:52 crc kubenswrapper[4811]: I0122 09:52:52.209359 4811 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-msbq8"] Jan 22 09:52:52 crc kubenswrapper[4811]: E0122 09:52:52.210094 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e0876f2-026f-48a5-8d69-0a0abeb47521" containerName="extract-content" Jan 22 09:52:52 crc kubenswrapper[4811]: I0122 09:52:52.210110 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e0876f2-026f-48a5-8d69-0a0abeb47521" containerName="extract-content" Jan 22 09:52:52 crc kubenswrapper[4811]: E0122 09:52:52.210122 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e0876f2-026f-48a5-8d69-0a0abeb47521" containerName="extract-utilities" Jan 22 09:52:52 crc kubenswrapper[4811]: I0122 09:52:52.210128 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e0876f2-026f-48a5-8d69-0a0abeb47521" containerName="extract-utilities" Jan 22 09:52:52 crc kubenswrapper[4811]: E0122 09:52:52.210164 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e0876f2-026f-48a5-8d69-0a0abeb47521" containerName="registry-server" Jan 22 09:52:52 crc kubenswrapper[4811]: I0122 09:52:52.210170 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e0876f2-026f-48a5-8d69-0a0abeb47521" containerName="registry-server" Jan 22 09:52:52 crc kubenswrapper[4811]: I0122 09:52:52.210352 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e0876f2-026f-48a5-8d69-0a0abeb47521" containerName="registry-server" Jan 22 09:52:52 crc kubenswrapper[4811]: I0122 09:52:52.212405 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-msbq8" Jan 22 09:52:52 crc kubenswrapper[4811]: I0122 09:52:52.227470 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-msbq8"] Jan 22 09:52:52 crc kubenswrapper[4811]: I0122 09:52:52.259282 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e818cf55-f817-4393-be42-b23a595b1f07-utilities\") pod \"redhat-operators-msbq8\" (UID: \"e818cf55-f817-4393-be42-b23a595b1f07\") " pod="openshift-marketplace/redhat-operators-msbq8" Jan 22 09:52:52 crc kubenswrapper[4811]: I0122 09:52:52.259671 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgdhp\" (UniqueName: \"kubernetes.io/projected/e818cf55-f817-4393-be42-b23a595b1f07-kube-api-access-rgdhp\") pod \"redhat-operators-msbq8\" (UID: \"e818cf55-f817-4393-be42-b23a595b1f07\") " pod="openshift-marketplace/redhat-operators-msbq8" Jan 22 09:52:52 crc kubenswrapper[4811]: I0122 09:52:52.259950 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e818cf55-f817-4393-be42-b23a595b1f07-catalog-content\") pod \"redhat-operators-msbq8\" (UID: \"e818cf55-f817-4393-be42-b23a595b1f07\") " pod="openshift-marketplace/redhat-operators-msbq8" Jan 22 09:52:52 crc kubenswrapper[4811]: I0122 09:52:52.361747 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgdhp\" (UniqueName: \"kubernetes.io/projected/e818cf55-f817-4393-be42-b23a595b1f07-kube-api-access-rgdhp\") pod \"redhat-operators-msbq8\" (UID: \"e818cf55-f817-4393-be42-b23a595b1f07\") " pod="openshift-marketplace/redhat-operators-msbq8" Jan 22 09:52:52 crc kubenswrapper[4811]: I0122 09:52:52.361882 4811 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e818cf55-f817-4393-be42-b23a595b1f07-catalog-content\") pod \"redhat-operators-msbq8\" (UID: \"e818cf55-f817-4393-be42-b23a595b1f07\") " pod="openshift-marketplace/redhat-operators-msbq8" Jan 22 09:52:52 crc kubenswrapper[4811]: I0122 09:52:52.361993 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e818cf55-f817-4393-be42-b23a595b1f07-utilities\") pod \"redhat-operators-msbq8\" (UID: \"e818cf55-f817-4393-be42-b23a595b1f07\") " pod="openshift-marketplace/redhat-operators-msbq8" Jan 22 09:52:52 crc kubenswrapper[4811]: I0122 09:52:52.362397 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e818cf55-f817-4393-be42-b23a595b1f07-utilities\") pod \"redhat-operators-msbq8\" (UID: \"e818cf55-f817-4393-be42-b23a595b1f07\") " pod="openshift-marketplace/redhat-operators-msbq8" Jan 22 09:52:52 crc kubenswrapper[4811]: I0122 09:52:52.362655 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e818cf55-f817-4393-be42-b23a595b1f07-catalog-content\") pod \"redhat-operators-msbq8\" (UID: \"e818cf55-f817-4393-be42-b23a595b1f07\") " pod="openshift-marketplace/redhat-operators-msbq8" Jan 22 09:52:52 crc kubenswrapper[4811]: I0122 09:52:52.383201 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgdhp\" (UniqueName: \"kubernetes.io/projected/e818cf55-f817-4393-be42-b23a595b1f07-kube-api-access-rgdhp\") pod \"redhat-operators-msbq8\" (UID: \"e818cf55-f817-4393-be42-b23a595b1f07\") " pod="openshift-marketplace/redhat-operators-msbq8" Jan 22 09:52:52 crc kubenswrapper[4811]: I0122 09:52:52.547575 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-msbq8" Jan 22 09:52:52 crc kubenswrapper[4811]: I0122 09:52:52.956261 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-msbq8"] Jan 22 09:52:53 crc kubenswrapper[4811]: I0122 09:52:53.043226 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-msbq8" event={"ID":"e818cf55-f817-4393-be42-b23a595b1f07","Type":"ContainerStarted","Data":"bb6bb15f98bce648c97f546f752c23e6ec418e9f373c3f0785a38af8870b965b"} Jan 22 09:52:54 crc kubenswrapper[4811]: I0122 09:52:54.054734 4811 generic.go:334] "Generic (PLEG): container finished" podID="e818cf55-f817-4393-be42-b23a595b1f07" containerID="b0ad641b79f22289e90aee09eefb09c1cee22a8be41d9327cc9c0f59f193a98a" exitCode=0 Jan 22 09:52:54 crc kubenswrapper[4811]: I0122 09:52:54.054913 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-msbq8" event={"ID":"e818cf55-f817-4393-be42-b23a595b1f07","Type":"ContainerDied","Data":"b0ad641b79f22289e90aee09eefb09c1cee22a8be41d9327cc9c0f59f193a98a"} Jan 22 09:52:55 crc kubenswrapper[4811]: I0122 09:52:55.070281 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-msbq8" event={"ID":"e818cf55-f817-4393-be42-b23a595b1f07","Type":"ContainerStarted","Data":"48fa1ffb6be0344a076d005313e9ea1d5d87bf9cec216dd824e4c0cf306fd062"} Jan 22 09:52:58 crc kubenswrapper[4811]: I0122 09:52:58.091728 4811 generic.go:334] "Generic (PLEG): container finished" podID="e818cf55-f817-4393-be42-b23a595b1f07" containerID="48fa1ffb6be0344a076d005313e9ea1d5d87bf9cec216dd824e4c0cf306fd062" exitCode=0 Jan 22 09:52:58 crc kubenswrapper[4811]: I0122 09:52:58.091792 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-msbq8" event={"ID":"e818cf55-f817-4393-be42-b23a595b1f07","Type":"ContainerDied","Data":"48fa1ffb6be0344a076d005313e9ea1d5d87bf9cec216dd824e4c0cf306fd062"} Jan 22 09:52:59 crc kubenswrapper[4811]: I0122 09:52:59.103604 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-msbq8" event={"ID":"e818cf55-f817-4393-be42-b23a595b1f07","Type":"ContainerStarted","Data":"f77616b025fa050996f8e03f65940f18b960f72f9c3b099f94d38045a4db36ef"} Jan 22 09:53:02 crc kubenswrapper[4811]: I0122 09:53:02.547871 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-msbq8" Jan 22 09:53:02 crc kubenswrapper[4811]: I0122 09:53:02.548302 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-msbq8" Jan 22 09:53:03 crc kubenswrapper[4811]: I0122 09:53:03.584571 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-msbq8" podUID="e818cf55-f817-4393-be42-b23a595b1f07" containerName="registry-server" probeResult="failure" output=< Jan 22 09:53:03 crc kubenswrapper[4811]: timeout: failed to connect service ":50051" within 1s Jan 22 09:53:03 crc kubenswrapper[4811]: > Jan 22 09:53:05 crc kubenswrapper[4811]: I0122 09:53:05.501759 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:53:05 crc kubenswrapper[4811]: I0122 
09:53:05.502407 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:53:12 crc kubenswrapper[4811]: I0122 09:53:12.582744 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-msbq8" Jan 22 09:53:12 crc kubenswrapper[4811]: I0122 09:53:12.598162 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-msbq8" podStartSLOduration=15.91078011 podStartE2EDuration="20.598148902s" podCreationTimestamp="2026-01-22 09:52:52 +0000 UTC" firstStartedPulling="2026-01-22 09:52:54.058119476 +0000 UTC m=+2818.380306599" lastFinishedPulling="2026-01-22 09:52:58.745488268 +0000 UTC m=+2823.067675391" observedRunningTime="2026-01-22 09:52:59.131949644 +0000 UTC m=+2823.454136767" watchObservedRunningTime="2026-01-22 09:53:12.598148902 +0000 UTC m=+2836.920336025" Jan 22 09:53:12 crc kubenswrapper[4811]: I0122 09:53:12.616089 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-msbq8" Jan 22 09:53:12 crc kubenswrapper[4811]: I0122 09:53:12.815151 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-msbq8"] Jan 22 09:53:14 crc kubenswrapper[4811]: I0122 09:53:14.214509 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-msbq8" podUID="e818cf55-f817-4393-be42-b23a595b1f07" containerName="registry-server" containerID="cri-o://f77616b025fa050996f8e03f65940f18b960f72f9c3b099f94d38045a4db36ef" gracePeriod=2 Jan 22 09:53:14 crc kubenswrapper[4811]: I0122 09:53:14.664770 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-msbq8" Jan 22 09:53:14 crc kubenswrapper[4811]: I0122 09:53:14.708912 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgdhp\" (UniqueName: \"kubernetes.io/projected/e818cf55-f817-4393-be42-b23a595b1f07-kube-api-access-rgdhp\") pod \"e818cf55-f817-4393-be42-b23a595b1f07\" (UID: \"e818cf55-f817-4393-be42-b23a595b1f07\") " Jan 22 09:53:14 crc kubenswrapper[4811]: I0122 09:53:14.709003 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e818cf55-f817-4393-be42-b23a595b1f07-catalog-content\") pod \"e818cf55-f817-4393-be42-b23a595b1f07\" (UID: \"e818cf55-f817-4393-be42-b23a595b1f07\") " Jan 22 09:53:14 crc kubenswrapper[4811]: I0122 09:53:14.709115 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e818cf55-f817-4393-be42-b23a595b1f07-utilities\") pod \"e818cf55-f817-4393-be42-b23a595b1f07\" (UID: \"e818cf55-f817-4393-be42-b23a595b1f07\") " Jan 22 09:53:14 crc kubenswrapper[4811]: I0122 09:53:14.710504 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e818cf55-f817-4393-be42-b23a595b1f07-utilities" (OuterVolumeSpecName: "utilities") pod "e818cf55-f817-4393-be42-b23a595b1f07" (UID: "e818cf55-f817-4393-be42-b23a595b1f07"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:53:14 crc kubenswrapper[4811]: I0122 09:53:14.723346 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e818cf55-f817-4393-be42-b23a595b1f07-kube-api-access-rgdhp" (OuterVolumeSpecName: "kube-api-access-rgdhp") pod "e818cf55-f817-4393-be42-b23a595b1f07" (UID: "e818cf55-f817-4393-be42-b23a595b1f07"). InnerVolumeSpecName "kube-api-access-rgdhp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:53:14 crc kubenswrapper[4811]: I0122 09:53:14.804987 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e818cf55-f817-4393-be42-b23a595b1f07-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e818cf55-f817-4393-be42-b23a595b1f07" (UID: "e818cf55-f817-4393-be42-b23a595b1f07"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:53:14 crc kubenswrapper[4811]: I0122 09:53:14.811665 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgdhp\" (UniqueName: \"kubernetes.io/projected/e818cf55-f817-4393-be42-b23a595b1f07-kube-api-access-rgdhp\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:14 crc kubenswrapper[4811]: I0122 09:53:14.811797 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e818cf55-f817-4393-be42-b23a595b1f07-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:14 crc kubenswrapper[4811]: I0122 09:53:14.811878 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e818cf55-f817-4393-be42-b23a595b1f07-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:15 crc kubenswrapper[4811]: I0122 09:53:15.230397 4811 generic.go:334] "Generic (PLEG): container finished" podID="e818cf55-f817-4393-be42-b23a595b1f07" containerID="f77616b025fa050996f8e03f65940f18b960f72f9c3b099f94d38045a4db36ef" exitCode=0 Jan 22 09:53:15 crc kubenswrapper[4811]: I0122 09:53:15.230483 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-msbq8" Jan 22 09:53:15 crc kubenswrapper[4811]: I0122 09:53:15.230491 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-msbq8" event={"ID":"e818cf55-f817-4393-be42-b23a595b1f07","Type":"ContainerDied","Data":"f77616b025fa050996f8e03f65940f18b960f72f9c3b099f94d38045a4db36ef"} Jan 22 09:53:15 crc kubenswrapper[4811]: I0122 09:53:15.230842 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-msbq8" event={"ID":"e818cf55-f817-4393-be42-b23a595b1f07","Type":"ContainerDied","Data":"bb6bb15f98bce648c97f546f752c23e6ec418e9f373c3f0785a38af8870b965b"} Jan 22 09:53:15 crc kubenswrapper[4811]: I0122 09:53:15.230884 4811 scope.go:117] "RemoveContainer" containerID="f77616b025fa050996f8e03f65940f18b960f72f9c3b099f94d38045a4db36ef" Jan 22 09:53:15 crc kubenswrapper[4811]: I0122 09:53:15.253800 4811 scope.go:117] "RemoveContainer" containerID="48fa1ffb6be0344a076d005313e9ea1d5d87bf9cec216dd824e4c0cf306fd062" Jan 22 09:53:15 crc kubenswrapper[4811]: I0122 09:53:15.256155 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-msbq8"] Jan 22 09:53:15 crc kubenswrapper[4811]: I0122 09:53:15.263927 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-msbq8"] Jan 22 09:53:15 crc kubenswrapper[4811]: I0122 09:53:15.279716 4811 scope.go:117] "RemoveContainer" containerID="b0ad641b79f22289e90aee09eefb09c1cee22a8be41d9327cc9c0f59f193a98a" Jan 22 09:53:15 crc kubenswrapper[4811]: I0122 09:53:15.309380 4811 scope.go:117] "RemoveContainer" containerID="f77616b025fa050996f8e03f65940f18b960f72f9c3b099f94d38045a4db36ef" Jan 22 09:53:15 crc kubenswrapper[4811]: E0122 09:53:15.309758 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f77616b025fa050996f8e03f65940f18b960f72f9c3b099f94d38045a4db36ef\": container with ID starting with f77616b025fa050996f8e03f65940f18b960f72f9c3b099f94d38045a4db36ef not found: ID does not exist" containerID="f77616b025fa050996f8e03f65940f18b960f72f9c3b099f94d38045a4db36ef" Jan 22 09:53:15 crc kubenswrapper[4811]: I0122 09:53:15.309791 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f77616b025fa050996f8e03f65940f18b960f72f9c3b099f94d38045a4db36ef"} err="failed to get container status \"f77616b025fa050996f8e03f65940f18b960f72f9c3b099f94d38045a4db36ef\": rpc error: code = NotFound desc = could not find container \"f77616b025fa050996f8e03f65940f18b960f72f9c3b099f94d38045a4db36ef\": container with ID starting with f77616b025fa050996f8e03f65940f18b960f72f9c3b099f94d38045a4db36ef not found: ID does not exist" Jan 22 09:53:15 crc kubenswrapper[4811]: I0122 09:53:15.309812 4811 scope.go:117] "RemoveContainer" containerID="48fa1ffb6be0344a076d005313e9ea1d5d87bf9cec216dd824e4c0cf306fd062" Jan 22 09:53:15 crc kubenswrapper[4811]: E0122 09:53:15.310164 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48fa1ffb6be0344a076d005313e9ea1d5d87bf9cec216dd824e4c0cf306fd062\": container with ID starting with 48fa1ffb6be0344a076d005313e9ea1d5d87bf9cec216dd824e4c0cf306fd062 not found: ID does not exist" containerID="48fa1ffb6be0344a076d005313e9ea1d5d87bf9cec216dd824e4c0cf306fd062" Jan 22 09:53:15 crc kubenswrapper[4811]: I0122 09:53:15.310195 4811 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48fa1ffb6be0344a076d005313e9ea1d5d87bf9cec216dd824e4c0cf306fd062"} err="failed to get container status \"48fa1ffb6be0344a076d005313e9ea1d5d87bf9cec216dd824e4c0cf306fd062\": rpc error: code = NotFound desc = could not find container \"48fa1ffb6be0344a076d005313e9ea1d5d87bf9cec216dd824e4c0cf306fd062\": container with ID starting with 48fa1ffb6be0344a076d005313e9ea1d5d87bf9cec216dd824e4c0cf306fd062 not found: ID does not exist" Jan 22 09:53:15 crc kubenswrapper[4811]: I0122 09:53:15.310210 4811 scope.go:117] "RemoveContainer" containerID="b0ad641b79f22289e90aee09eefb09c1cee22a8be41d9327cc9c0f59f193a98a" Jan 22 09:53:15 crc kubenswrapper[4811]: E0122 09:53:15.310417 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0ad641b79f22289e90aee09eefb09c1cee22a8be41d9327cc9c0f59f193a98a\": container with ID starting with b0ad641b79f22289e90aee09eefb09c1cee22a8be41d9327cc9c0f59f193a98a not found: ID does not exist" containerID="b0ad641b79f22289e90aee09eefb09c1cee22a8be41d9327cc9c0f59f193a98a" Jan 22 09:53:15 crc kubenswrapper[4811]: I0122 09:53:15.310447 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0ad641b79f22289e90aee09eefb09c1cee22a8be41d9327cc9c0f59f193a98a"} err="failed to get container status \"b0ad641b79f22289e90aee09eefb09c1cee22a8be41d9327cc9c0f59f193a98a\": rpc error: code = NotFound desc = could not find container \"b0ad641b79f22289e90aee09eefb09c1cee22a8be41d9327cc9c0f59f193a98a\": container with ID starting with b0ad641b79f22289e90aee09eefb09c1cee22a8be41d9327cc9c0f59f193a98a not found: ID does not exist" Jan 22 09:53:16 crc kubenswrapper[4811]: I0122 09:53:16.001595 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e818cf55-f817-4393-be42-b23a595b1f07" path="/var/lib/kubelet/pods/e818cf55-f817-4393-be42-b23a595b1f07/volumes" Jan 22 09:53:26 crc kubenswrapper[4811]: I0122 09:53:26.751547 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ztd6z"] Jan 22 09:53:26 crc kubenswrapper[4811]: E0122 09:53:26.752271 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e818cf55-f817-4393-be42-b23a595b1f07" containerName="extract-content" Jan 22 09:53:26 crc kubenswrapper[4811]: I0122 09:53:26.752287 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e818cf55-f817-4393-be42-b23a595b1f07" containerName="extract-content" Jan 22 09:53:26 crc kubenswrapper[4811]: E0122 09:53:26.752297 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e818cf55-f817-4393-be42-b23a595b1f07" containerName="extract-utilities" Jan 22 09:53:26 crc kubenswrapper[4811]: I0122 09:53:26.752303 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e818cf55-f817-4393-be42-b23a595b1f07" containerName="extract-utilities" Jan 22 09:53:26 crc kubenswrapper[4811]: E0122 09:53:26.752325 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e818cf55-f817-4393-be42-b23a595b1f07" containerName="registry-server" Jan 22 09:53:26 crc kubenswrapper[4811]: I0122 09:53:26.752331 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e818cf55-f817-4393-be42-b23a595b1f07" containerName="registry-server" Jan 22 09:53:26 crc kubenswrapper[4811]: I0122 09:53:26.752507 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="e818cf55-f817-4393-be42-b23a595b1f07" 
containerName="registry-server" Jan 22 09:53:26 crc kubenswrapper[4811]: I0122 09:53:26.753817 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ztd6z" Jan 22 09:53:26 crc kubenswrapper[4811]: I0122 09:53:26.762016 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ztd6z"] Jan 22 09:53:26 crc kubenswrapper[4811]: I0122 09:53:26.828161 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6j28\" (UniqueName: \"kubernetes.io/projected/257616ba-671a-4cbd-91d3-30a00b85773d-kube-api-access-d6j28\") pod \"redhat-marketplace-ztd6z\" (UID: \"257616ba-671a-4cbd-91d3-30a00b85773d\") " pod="openshift-marketplace/redhat-marketplace-ztd6z" Jan 22 09:53:26 crc kubenswrapper[4811]: I0122 09:53:26.828290 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/257616ba-671a-4cbd-91d3-30a00b85773d-catalog-content\") pod \"redhat-marketplace-ztd6z\" (UID: \"257616ba-671a-4cbd-91d3-30a00b85773d\") " pod="openshift-marketplace/redhat-marketplace-ztd6z" Jan 22 09:53:26 crc kubenswrapper[4811]: I0122 09:53:26.828514 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/257616ba-671a-4cbd-91d3-30a00b85773d-utilities\") pod \"redhat-marketplace-ztd6z\" (UID: \"257616ba-671a-4cbd-91d3-30a00b85773d\") " pod="openshift-marketplace/redhat-marketplace-ztd6z" Jan 22 09:53:26 crc kubenswrapper[4811]: I0122 09:53:26.930511 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6j28\" (UniqueName: \"kubernetes.io/projected/257616ba-671a-4cbd-91d3-30a00b85773d-kube-api-access-d6j28\") pod \"redhat-marketplace-ztd6z\" (UID: \"257616ba-671a-4cbd-91d3-30a00b85773d\") " pod="openshift-marketplace/redhat-marketplace-ztd6z" Jan 22 09:53:26 crc kubenswrapper[4811]: I0122 09:53:26.930806 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/257616ba-671a-4cbd-91d3-30a00b85773d-catalog-content\") pod \"redhat-marketplace-ztd6z\" (UID: \"257616ba-671a-4cbd-91d3-30a00b85773d\") " pod="openshift-marketplace/redhat-marketplace-ztd6z" Jan 22 09:53:26 crc kubenswrapper[4811]: I0122 09:53:26.931021 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/257616ba-671a-4cbd-91d3-30a00b85773d-utilities\") pod \"redhat-marketplace-ztd6z\" (UID: \"257616ba-671a-4cbd-91d3-30a00b85773d\") " pod="openshift-marketplace/redhat-marketplace-ztd6z" Jan 22 09:53:26 crc kubenswrapper[4811]: I0122 09:53:26.931317 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/257616ba-671a-4cbd-91d3-30a00b85773d-catalog-content\") pod \"redhat-marketplace-ztd6z\" (UID: \"257616ba-671a-4cbd-91d3-30a00b85773d\") " pod="openshift-marketplace/redhat-marketplace-ztd6z" Jan 22 09:53:26 crc kubenswrapper[4811]: I0122 09:53:26.931367 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/257616ba-671a-4cbd-91d3-30a00b85773d-utilities\") pod \"redhat-marketplace-ztd6z\" (UID: \"257616ba-671a-4cbd-91d3-30a00b85773d\") " 
pod="openshift-marketplace/redhat-marketplace-ztd6z" Jan 22 09:53:26 crc kubenswrapper[4811]: I0122 09:53:26.947339 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6j28\" (UniqueName: \"kubernetes.io/projected/257616ba-671a-4cbd-91d3-30a00b85773d-kube-api-access-d6j28\") pod \"redhat-marketplace-ztd6z\" (UID: \"257616ba-671a-4cbd-91d3-30a00b85773d\") " pod="openshift-marketplace/redhat-marketplace-ztd6z" Jan 22 09:53:27 crc kubenswrapper[4811]: I0122 09:53:27.079384 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ztd6z" Jan 22 09:53:27 crc kubenswrapper[4811]: I0122 09:53:27.507895 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ztd6z"] Jan 22 09:53:28 crc kubenswrapper[4811]: I0122 09:53:28.327130 4811 generic.go:334] "Generic (PLEG): container finished" podID="257616ba-671a-4cbd-91d3-30a00b85773d" containerID="d62d56ace5cd4078de054479204b47f0d888676ba6dbe6f42a8db144002206d7" exitCode=0 Jan 22 09:53:28 crc kubenswrapper[4811]: I0122 09:53:28.328109 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ztd6z" event={"ID":"257616ba-671a-4cbd-91d3-30a00b85773d","Type":"ContainerDied","Data":"d62d56ace5cd4078de054479204b47f0d888676ba6dbe6f42a8db144002206d7"} Jan 22 09:53:28 crc kubenswrapper[4811]: I0122 09:53:28.328205 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ztd6z" event={"ID":"257616ba-671a-4cbd-91d3-30a00b85773d","Type":"ContainerStarted","Data":"b31281052795c7e98b6776ec47b3495988c75f4b03c6a83e3ec898897811fe3a"} Jan 22 09:53:28 crc kubenswrapper[4811]: I0122 09:53:28.330388 4811 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 09:53:29 crc kubenswrapper[4811]: I0122 09:53:29.335582 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ztd6z" event={"ID":"257616ba-671a-4cbd-91d3-30a00b85773d","Type":"ContainerStarted","Data":"8470850545b05a78232d0e42a0a13d31e75e42a0d0c88d8e4b03c50acd57286c"} Jan 22 09:53:30 crc kubenswrapper[4811]: I0122 09:53:30.344192 4811 generic.go:334] "Generic (PLEG): container finished" podID="257616ba-671a-4cbd-91d3-30a00b85773d" containerID="8470850545b05a78232d0e42a0a13d31e75e42a0d0c88d8e4b03c50acd57286c" exitCode=0 Jan 22 09:53:30 crc kubenswrapper[4811]: I0122 09:53:30.344737 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ztd6z" event={"ID":"257616ba-671a-4cbd-91d3-30a00b85773d","Type":"ContainerDied","Data":"8470850545b05a78232d0e42a0a13d31e75e42a0d0c88d8e4b03c50acd57286c"} Jan 22 09:53:31 crc kubenswrapper[4811]: I0122 09:53:31.354570 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ztd6z" event={"ID":"257616ba-671a-4cbd-91d3-30a00b85773d","Type":"ContainerStarted","Data":"1de8078bae5b7690a151ab82e13b987b9691c371990f9b7d3aea255b2d872b38"} Jan 22 09:53:31 crc kubenswrapper[4811]: I0122 09:53:31.379842 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ztd6z" podStartSLOduration=2.881023773 podStartE2EDuration="5.379825691s" podCreationTimestamp="2026-01-22 09:53:26 +0000 UTC" firstStartedPulling="2026-01-22 09:53:28.329242586 +0000 UTC m=+2852.651429709" lastFinishedPulling="2026-01-22 09:53:30.828044515 +0000 UTC 
m=+2855.150231627" observedRunningTime="2026-01-22 09:53:31.37479726 +0000 UTC m=+2855.696984382" watchObservedRunningTime="2026-01-22 09:53:31.379825691 +0000 UTC m=+2855.702012814" Jan 22 09:53:35 crc kubenswrapper[4811]: I0122 09:53:35.501164 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 09:53:35 crc kubenswrapper[4811]: I0122 09:53:35.501535 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 09:53:35 crc kubenswrapper[4811]: I0122 09:53:35.501582 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" Jan 22 09:53:35 crc kubenswrapper[4811]: I0122 09:53:35.502114 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"27d99ddba75ace4f5b92aada119cc94fd0acd38a4e4c54101859524605bfeb4c"} pod="openshift-machine-config-operator/machine-config-daemon-txvcq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 09:53:35 crc kubenswrapper[4811]: I0122 09:53:35.502164 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" containerID="cri-o://27d99ddba75ace4f5b92aada119cc94fd0acd38a4e4c54101859524605bfeb4c" gracePeriod=600 Jan 22 09:53:35 crc kubenswrapper[4811]: E0122 09:53:35.626202 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:53:36 crc kubenswrapper[4811]: I0122 09:53:36.389562 4811 generic.go:334] "Generic (PLEG): container finished" podID="84068a6b-e189-419b-87f5-f31428f6eafe" containerID="27d99ddba75ace4f5b92aada119cc94fd0acd38a4e4c54101859524605bfeb4c" exitCode=0 Jan 22 09:53:36 crc kubenswrapper[4811]: I0122 09:53:36.389660 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" event={"ID":"84068a6b-e189-419b-87f5-f31428f6eafe","Type":"ContainerDied","Data":"27d99ddba75ace4f5b92aada119cc94fd0acd38a4e4c54101859524605bfeb4c"} Jan 22 09:53:36 crc kubenswrapper[4811]: I0122 09:53:36.389781 4811 scope.go:117] "RemoveContainer" containerID="583eec57819133c8760804210a11b9e588774eddca3a7b248d1500229de8ad12" Jan 22 09:53:36 crc kubenswrapper[4811]: I0122 09:53:36.390653 4811 scope.go:117] "RemoveContainer" containerID="27d99ddba75ace4f5b92aada119cc94fd0acd38a4e4c54101859524605bfeb4c" Jan 22 09:53:36 crc kubenswrapper[4811]: E0122 09:53:36.391041 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:53:37 crc kubenswrapper[4811]: I0122 09:53:37.079517 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ztd6z" Jan 22 09:53:37 crc kubenswrapper[4811]: I0122 09:53:37.079929 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ztd6z" Jan 22 09:53:37 crc kubenswrapper[4811]: I0122 09:53:37.113809 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ztd6z" Jan 22 09:53:37 crc kubenswrapper[4811]: I0122 09:53:37.429571 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ztd6z" Jan 22 09:53:37 crc kubenswrapper[4811]: I0122 09:53:37.467980 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ztd6z"] Jan 22 09:53:39 crc kubenswrapper[4811]: I0122 09:53:39.414514 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ztd6z" podUID="257616ba-671a-4cbd-91d3-30a00b85773d" containerName="registry-server" containerID="cri-o://1de8078bae5b7690a151ab82e13b987b9691c371990f9b7d3aea255b2d872b38" gracePeriod=2 Jan 22 09:53:39 crc kubenswrapper[4811]: I0122 09:53:39.963347 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ztd6z" Jan 22 09:53:40 crc kubenswrapper[4811]: I0122 09:53:40.075204 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/257616ba-671a-4cbd-91d3-30a00b85773d-catalog-content\") pod \"257616ba-671a-4cbd-91d3-30a00b85773d\" (UID: \"257616ba-671a-4cbd-91d3-30a00b85773d\") " Jan 22 09:53:40 crc kubenswrapper[4811]: I0122 09:53:40.075263 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6j28\" (UniqueName: \"kubernetes.io/projected/257616ba-671a-4cbd-91d3-30a00b85773d-kube-api-access-d6j28\") pod \"257616ba-671a-4cbd-91d3-30a00b85773d\" (UID: \"257616ba-671a-4cbd-91d3-30a00b85773d\") " Jan 22 09:53:40 crc kubenswrapper[4811]: I0122 09:53:40.075379 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/257616ba-671a-4cbd-91d3-30a00b85773d-utilities\") pod \"257616ba-671a-4cbd-91d3-30a00b85773d\" (UID: \"257616ba-671a-4cbd-91d3-30a00b85773d\") " Jan 22 09:53:40 crc kubenswrapper[4811]: I0122 09:53:40.075990 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/257616ba-671a-4cbd-91d3-30a00b85773d-utilities" (OuterVolumeSpecName: "utilities") pod "257616ba-671a-4cbd-91d3-30a00b85773d" (UID: "257616ba-671a-4cbd-91d3-30a00b85773d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:53:40 crc kubenswrapper[4811]: I0122 09:53:40.076478 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/257616ba-671a-4cbd-91d3-30a00b85773d-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:40 crc kubenswrapper[4811]: I0122 09:53:40.080890 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/257616ba-671a-4cbd-91d3-30a00b85773d-kube-api-access-d6j28" (OuterVolumeSpecName: "kube-api-access-d6j28") pod "257616ba-671a-4cbd-91d3-30a00b85773d" (UID: "257616ba-671a-4cbd-91d3-30a00b85773d"). InnerVolumeSpecName "kube-api-access-d6j28". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:53:40 crc kubenswrapper[4811]: I0122 09:53:40.091571 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/257616ba-671a-4cbd-91d3-30a00b85773d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "257616ba-671a-4cbd-91d3-30a00b85773d" (UID: "257616ba-671a-4cbd-91d3-30a00b85773d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:53:40 crc kubenswrapper[4811]: I0122 09:53:40.178828 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/257616ba-671a-4cbd-91d3-30a00b85773d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:40 crc kubenswrapper[4811]: I0122 09:53:40.178953 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6j28\" (UniqueName: \"kubernetes.io/projected/257616ba-671a-4cbd-91d3-30a00b85773d-kube-api-access-d6j28\") on node \"crc\" DevicePath \"\"" Jan 22 09:53:40 crc kubenswrapper[4811]: I0122 09:53:40.423731 4811 generic.go:334] "Generic (PLEG): container finished" podID="257616ba-671a-4cbd-91d3-30a00b85773d" containerID="1de8078bae5b7690a151ab82e13b987b9691c371990f9b7d3aea255b2d872b38" exitCode=0 Jan 22 09:53:40 crc kubenswrapper[4811]: I0122 09:53:40.423776 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ztd6z" event={"ID":"257616ba-671a-4cbd-91d3-30a00b85773d","Type":"ContainerDied","Data":"1de8078bae5b7690a151ab82e13b987b9691c371990f9b7d3aea255b2d872b38"} Jan 22 09:53:40 crc kubenswrapper[4811]: I0122 09:53:40.423804 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ztd6z" event={"ID":"257616ba-671a-4cbd-91d3-30a00b85773d","Type":"ContainerDied","Data":"b31281052795c7e98b6776ec47b3495988c75f4b03c6a83e3ec898897811fe3a"} Jan 22 09:53:40 crc kubenswrapper[4811]: I0122 09:53:40.423826 4811 scope.go:117] "RemoveContainer" containerID="1de8078bae5b7690a151ab82e13b987b9691c371990f9b7d3aea255b2d872b38" Jan 22 09:53:40 crc kubenswrapper[4811]: I0122 09:53:40.423817 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ztd6z" Jan 22 09:53:40 crc kubenswrapper[4811]: I0122 09:53:40.442456 4811 scope.go:117] "RemoveContainer" containerID="8470850545b05a78232d0e42a0a13d31e75e42a0d0c88d8e4b03c50acd57286c" Jan 22 09:53:40 crc kubenswrapper[4811]: I0122 09:53:40.454833 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ztd6z"] Jan 22 09:53:40 crc kubenswrapper[4811]: I0122 09:53:40.461968 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ztd6z"] Jan 22 09:53:40 crc kubenswrapper[4811]: I0122 09:53:40.468576 4811 scope.go:117] "RemoveContainer" containerID="d62d56ace5cd4078de054479204b47f0d888676ba6dbe6f42a8db144002206d7" Jan 22 09:53:40 crc kubenswrapper[4811]: I0122 09:53:40.494370 4811 scope.go:117] "RemoveContainer" containerID="1de8078bae5b7690a151ab82e13b987b9691c371990f9b7d3aea255b2d872b38" Jan 22 09:53:40 crc kubenswrapper[4811]: E0122 09:53:40.494699 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1de8078bae5b7690a151ab82e13b987b9691c371990f9b7d3aea255b2d872b38\": container with ID starting with 1de8078bae5b7690a151ab82e13b987b9691c371990f9b7d3aea255b2d872b38 not found: ID does not exist" containerID="1de8078bae5b7690a151ab82e13b987b9691c371990f9b7d3aea255b2d872b38" Jan 22 09:53:40 crc kubenswrapper[4811]: I0122 09:53:40.494738 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1de8078bae5b7690a151ab82e13b987b9691c371990f9b7d3aea255b2d872b38"} err="failed to get container status \"1de8078bae5b7690a151ab82e13b987b9691c371990f9b7d3aea255b2d872b38\": rpc error: code = NotFound desc = could not find container \"1de8078bae5b7690a151ab82e13b987b9691c371990f9b7d3aea255b2d872b38\": container with ID starting with 1de8078bae5b7690a151ab82e13b987b9691c371990f9b7d3aea255b2d872b38 not found: ID does not exist" Jan 22 09:53:40 crc kubenswrapper[4811]: I0122 09:53:40.494763 4811 scope.go:117] "RemoveContainer" containerID="8470850545b05a78232d0e42a0a13d31e75e42a0d0c88d8e4b03c50acd57286c" Jan 22 09:53:40 crc kubenswrapper[4811]: E0122 09:53:40.494993 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8470850545b05a78232d0e42a0a13d31e75e42a0d0c88d8e4b03c50acd57286c\": container with ID starting with 8470850545b05a78232d0e42a0a13d31e75e42a0d0c88d8e4b03c50acd57286c not found: ID does not exist" containerID="8470850545b05a78232d0e42a0a13d31e75e42a0d0c88d8e4b03c50acd57286c" Jan 22 09:53:40 crc kubenswrapper[4811]: I0122 09:53:40.495013 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8470850545b05a78232d0e42a0a13d31e75e42a0d0c88d8e4b03c50acd57286c"} err="failed to get container status \"8470850545b05a78232d0e42a0a13d31e75e42a0d0c88d8e4b03c50acd57286c\": rpc error: code = NotFound desc = could not find container \"8470850545b05a78232d0e42a0a13d31e75e42a0d0c88d8e4b03c50acd57286c\": container with ID starting with 8470850545b05a78232d0e42a0a13d31e75e42a0d0c88d8e4b03c50acd57286c not found: ID does not exist" Jan 22 09:53:40 crc kubenswrapper[4811]: I0122 09:53:40.495026 4811 scope.go:117] "RemoveContainer" containerID="d62d56ace5cd4078de054479204b47f0d888676ba6dbe6f42a8db144002206d7" Jan 22 09:53:40 crc kubenswrapper[4811]: E0122 09:53:40.495243 4811 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d62d56ace5cd4078de054479204b47f0d888676ba6dbe6f42a8db144002206d7\": container with ID starting with d62d56ace5cd4078de054479204b47f0d888676ba6dbe6f42a8db144002206d7 not found: ID does not exist" containerID="d62d56ace5cd4078de054479204b47f0d888676ba6dbe6f42a8db144002206d7" Jan 22 09:53:40 crc kubenswrapper[4811]: I0122 09:53:40.495265 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d62d56ace5cd4078de054479204b47f0d888676ba6dbe6f42a8db144002206d7"} err="failed to get container status \"d62d56ace5cd4078de054479204b47f0d888676ba6dbe6f42a8db144002206d7\": rpc error: code = NotFound desc = could not find container \"d62d56ace5cd4078de054479204b47f0d888676ba6dbe6f42a8db144002206d7\": container with ID starting with d62d56ace5cd4078de054479204b47f0d888676ba6dbe6f42a8db144002206d7 not found: ID does not exist" Jan 22 09:53:42 crc kubenswrapper[4811]: I0122 09:53:42.004460 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="257616ba-671a-4cbd-91d3-30a00b85773d" path="/var/lib/kubelet/pods/257616ba-671a-4cbd-91d3-30a00b85773d/volumes" Jan 22 09:53:47 crc kubenswrapper[4811]: I0122 09:53:47.991979 4811 scope.go:117] "RemoveContainer" containerID="27d99ddba75ace4f5b92aada119cc94fd0acd38a4e4c54101859524605bfeb4c" Jan 22 09:53:47 crc kubenswrapper[4811]: E0122 09:53:47.993354 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:54:02 crc kubenswrapper[4811]: I0122 09:54:02.992671 4811 scope.go:117] "RemoveContainer" containerID="27d99ddba75ace4f5b92aada119cc94fd0acd38a4e4c54101859524605bfeb4c" Jan 22 09:54:02 crc kubenswrapper[4811]: E0122 09:54:02.994513 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:54:14 crc kubenswrapper[4811]: I0122 09:54:14.992854 4811 scope.go:117] "RemoveContainer" containerID="27d99ddba75ace4f5b92aada119cc94fd0acd38a4e4c54101859524605bfeb4c" Jan 22 09:54:14 crc kubenswrapper[4811]: E0122 09:54:14.993889 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:54:29 crc kubenswrapper[4811]: I0122 09:54:29.992349 4811 scope.go:117] "RemoveContainer" containerID="27d99ddba75ace4f5b92aada119cc94fd0acd38a4e4c54101859524605bfeb4c" Jan 22 09:54:29 crc kubenswrapper[4811]: E0122 09:54:29.993264 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:54:40 crc kubenswrapper[4811]: I0122 09:54:40.992502 4811 scope.go:117] "RemoveContainer" containerID="27d99ddba75ace4f5b92aada119cc94fd0acd38a4e4c54101859524605bfeb4c" Jan 22 09:54:40 crc kubenswrapper[4811]: E0122 09:54:40.993127 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:54:53 crc kubenswrapper[4811]: I0122 09:54:53.993451 4811 scope.go:117] "RemoveContainer" containerID="27d99ddba75ace4f5b92aada119cc94fd0acd38a4e4c54101859524605bfeb4c" Jan 22 09:54:53 crc kubenswrapper[4811]: E0122 09:54:53.994411 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:55:08 crc kubenswrapper[4811]: I0122 09:55:08.992443 4811 scope.go:117] "RemoveContainer" containerID="27d99ddba75ace4f5b92aada119cc94fd0acd38a4e4c54101859524605bfeb4c" Jan 22 09:55:08 crc kubenswrapper[4811]: E0122 09:55:08.993721 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:55:19 crc kubenswrapper[4811]: I0122 09:55:19.992493 4811 scope.go:117] "RemoveContainer" containerID="27d99ddba75ace4f5b92aada119cc94fd0acd38a4e4c54101859524605bfeb4c" Jan 22 09:55:19 crc kubenswrapper[4811]: E0122 09:55:19.993153 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:55:31 crc kubenswrapper[4811]: I0122 09:55:31.992168 4811 scope.go:117] "RemoveContainer" containerID="27d99ddba75ace4f5b92aada119cc94fd0acd38a4e4c54101859524605bfeb4c" Jan 22 09:55:31 crc kubenswrapper[4811]: E0122 09:55:31.992816 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:55:42 crc kubenswrapper[4811]: I0122 09:55:42.992379 4811 scope.go:117] "RemoveContainer" containerID="27d99ddba75ace4f5b92aada119cc94fd0acd38a4e4c54101859524605bfeb4c" Jan 22 09:55:42 crc kubenswrapper[4811]: E0122 09:55:42.992901 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:55:57 crc kubenswrapper[4811]: I0122 09:55:57.992228 4811 scope.go:117] "RemoveContainer" containerID="27d99ddba75ace4f5b92aada119cc94fd0acd38a4e4c54101859524605bfeb4c" Jan 22 09:55:57 crc kubenswrapper[4811]: E0122 09:55:57.992940 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:56:11 crc kubenswrapper[4811]: I0122 09:56:11.992009 4811 scope.go:117] "RemoveContainer" containerID="27d99ddba75ace4f5b92aada119cc94fd0acd38a4e4c54101859524605bfeb4c" Jan 22 09:56:11 crc kubenswrapper[4811]: E0122 09:56:11.992532 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:56:26 crc kubenswrapper[4811]: I0122 09:56:26.992689 4811 scope.go:117] "RemoveContainer" containerID="27d99ddba75ace4f5b92aada119cc94fd0acd38a4e4c54101859524605bfeb4c" Jan 22 09:56:26 crc kubenswrapper[4811]: E0122 09:56:26.993360 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:56:31 crc kubenswrapper[4811]: I0122 09:56:31.201658 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z9wt9"] Jan 22 09:56:31 crc kubenswrapper[4811]: E0122 09:56:31.202385 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="257616ba-671a-4cbd-91d3-30a00b85773d" containerName="extract-content" Jan 22 09:56:31 crc kubenswrapper[4811]: I0122 09:56:31.202415 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="257616ba-671a-4cbd-91d3-30a00b85773d" containerName="extract-content" Jan 22 09:56:31 crc kubenswrapper[4811]: E0122 09:56:31.202427 4811 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="257616ba-671a-4cbd-91d3-30a00b85773d" containerName="extract-utilities" Jan 22 09:56:31 crc kubenswrapper[4811]: I0122 09:56:31.202433 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="257616ba-671a-4cbd-91d3-30a00b85773d" containerName="extract-utilities" Jan 22 09:56:31 crc kubenswrapper[4811]: E0122 09:56:31.202450 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="257616ba-671a-4cbd-91d3-30a00b85773d" containerName="registry-server" Jan 22 09:56:31 crc kubenswrapper[4811]: I0122 09:56:31.202455 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="257616ba-671a-4cbd-91d3-30a00b85773d" containerName="registry-server" Jan 22 09:56:31 crc kubenswrapper[4811]: I0122 09:56:31.202686 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="257616ba-671a-4cbd-91d3-30a00b85773d" containerName="registry-server" Jan 22 09:56:31 crc kubenswrapper[4811]: I0122 09:56:31.204125 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z9wt9" Jan 22 09:56:31 crc kubenswrapper[4811]: I0122 09:56:31.226133 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z9wt9"] Jan 22 09:56:31 crc kubenswrapper[4811]: I0122 09:56:31.320381 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s49w\" (UniqueName: \"kubernetes.io/projected/21fc2b12-c93d-438d-b3ce-8e3d57eea840-kube-api-access-5s49w\") pod \"community-operators-z9wt9\" (UID: \"21fc2b12-c93d-438d-b3ce-8e3d57eea840\") " pod="openshift-marketplace/community-operators-z9wt9" Jan 22 09:56:31 crc kubenswrapper[4811]: I0122 09:56:31.320512 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21fc2b12-c93d-438d-b3ce-8e3d57eea840-utilities\") pod \"community-operators-z9wt9\" (UID: \"21fc2b12-c93d-438d-b3ce-8e3d57eea840\") " pod="openshift-marketplace/community-operators-z9wt9" Jan 22 09:56:31 crc kubenswrapper[4811]: I0122 09:56:31.320592 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21fc2b12-c93d-438d-b3ce-8e3d57eea840-catalog-content\") pod \"community-operators-z9wt9\" (UID: \"21fc2b12-c93d-438d-b3ce-8e3d57eea840\") " pod="openshift-marketplace/community-operators-z9wt9" Jan 22 09:56:31 crc kubenswrapper[4811]: I0122 09:56:31.422948 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s49w\" (UniqueName: \"kubernetes.io/projected/21fc2b12-c93d-438d-b3ce-8e3d57eea840-kube-api-access-5s49w\") pod \"community-operators-z9wt9\" (UID: \"21fc2b12-c93d-438d-b3ce-8e3d57eea840\") " pod="openshift-marketplace/community-operators-z9wt9" Jan 22 09:56:31 crc kubenswrapper[4811]: I0122 09:56:31.423099 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21fc2b12-c93d-438d-b3ce-8e3d57eea840-utilities\") pod \"community-operators-z9wt9\" (UID: \"21fc2b12-c93d-438d-b3ce-8e3d57eea840\") " pod="openshift-marketplace/community-operators-z9wt9" Jan 22 09:56:31 crc kubenswrapper[4811]: I0122 09:56:31.423189 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/21fc2b12-c93d-438d-b3ce-8e3d57eea840-catalog-content\") pod \"community-operators-z9wt9\" (UID: \"21fc2b12-c93d-438d-b3ce-8e3d57eea840\") " pod="openshift-marketplace/community-operators-z9wt9" Jan 22 09:56:31 crc kubenswrapper[4811]: I0122 09:56:31.423567 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21fc2b12-c93d-438d-b3ce-8e3d57eea840-utilities\") pod \"community-operators-z9wt9\" (UID: \"21fc2b12-c93d-438d-b3ce-8e3d57eea840\") " pod="openshift-marketplace/community-operators-z9wt9" Jan 22 09:56:31 crc kubenswrapper[4811]: I0122 09:56:31.423661 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21fc2b12-c93d-438d-b3ce-8e3d57eea840-catalog-content\") pod \"community-operators-z9wt9\" (UID: \"21fc2b12-c93d-438d-b3ce-8e3d57eea840\") " pod="openshift-marketplace/community-operators-z9wt9" Jan 22 09:56:31 crc kubenswrapper[4811]: I0122 09:56:31.438566 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s49w\" (UniqueName: \"kubernetes.io/projected/21fc2b12-c93d-438d-b3ce-8e3d57eea840-kube-api-access-5s49w\") pod \"community-operators-z9wt9\" (UID: \"21fc2b12-c93d-438d-b3ce-8e3d57eea840\") " pod="openshift-marketplace/community-operators-z9wt9" Jan 22 09:56:31 crc kubenswrapper[4811]: I0122 09:56:31.518746 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z9wt9" Jan 22 09:56:32 crc kubenswrapper[4811]: I0122 09:56:32.047485 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z9wt9"] Jan 22 09:56:32 crc kubenswrapper[4811]: I0122 09:56:32.719057 4811 generic.go:334] "Generic (PLEG): container finished" podID="21fc2b12-c93d-438d-b3ce-8e3d57eea840" containerID="d18da41eda57f57f27eaece597645c808ba958bc11f89c119c639863acab6999" exitCode=0 Jan 22 09:56:32 crc kubenswrapper[4811]: I0122 09:56:32.719216 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z9wt9" event={"ID":"21fc2b12-c93d-438d-b3ce-8e3d57eea840","Type":"ContainerDied","Data":"d18da41eda57f57f27eaece597645c808ba958bc11f89c119c639863acab6999"} Jan 22 09:56:32 crc kubenswrapper[4811]: I0122 09:56:32.719959 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z9wt9" event={"ID":"21fc2b12-c93d-438d-b3ce-8e3d57eea840","Type":"ContainerStarted","Data":"c28cc28eb75281bbe37f861319228eff76309abfc901a85dc18140c43d00e73e"} Jan 22 09:56:33 crc kubenswrapper[4811]: I0122 09:56:33.729040 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z9wt9" event={"ID":"21fc2b12-c93d-438d-b3ce-8e3d57eea840","Type":"ContainerStarted","Data":"ddd2f283ebeb6b8105fcd50ca8615d8e9f2a2355910c3097c16d24e684176639"} Jan 22 09:56:34 crc kubenswrapper[4811]: I0122 09:56:34.737438 4811 generic.go:334] "Generic (PLEG): container finished" podID="21fc2b12-c93d-438d-b3ce-8e3d57eea840" containerID="ddd2f283ebeb6b8105fcd50ca8615d8e9f2a2355910c3097c16d24e684176639" exitCode=0 Jan 22 09:56:34 crc kubenswrapper[4811]: I0122 09:56:34.737527 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z9wt9" event={"ID":"21fc2b12-c93d-438d-b3ce-8e3d57eea840","Type":"ContainerDied","Data":"ddd2f283ebeb6b8105fcd50ca8615d8e9f2a2355910c3097c16d24e684176639"} 
Jan 22 09:56:35 crc kubenswrapper[4811]: I0122 09:56:35.747093 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z9wt9" event={"ID":"21fc2b12-c93d-438d-b3ce-8e3d57eea840","Type":"ContainerStarted","Data":"1ab91fbbd2a30fb5a8700e603257bae43b4a4f1dd25eb62554fd97a4581ea6a5"} Jan 22 09:56:35 crc kubenswrapper[4811]: I0122 09:56:35.765817 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z9wt9" podStartSLOduration=2.267410292 podStartE2EDuration="4.765803949s" podCreationTimestamp="2026-01-22 09:56:31 +0000 UTC" firstStartedPulling="2026-01-22 09:56:32.720652494 +0000 UTC m=+3037.042839618" lastFinishedPulling="2026-01-22 09:56:35.219046162 +0000 UTC m=+3039.541233275" observedRunningTime="2026-01-22 09:56:35.758595176 +0000 UTC m=+3040.080782299" watchObservedRunningTime="2026-01-22 09:56:35.765803949 +0000 UTC m=+3040.087991072" Jan 22 09:56:39 crc kubenswrapper[4811]: I0122 09:56:39.992102 4811 scope.go:117] "RemoveContainer" containerID="27d99ddba75ace4f5b92aada119cc94fd0acd38a4e4c54101859524605bfeb4c" Jan 22 09:56:39 crc kubenswrapper[4811]: E0122 09:56:39.992698 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:56:41 crc kubenswrapper[4811]: I0122 09:56:41.520484 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z9wt9" Jan 22 09:56:41 crc kubenswrapper[4811]: I0122 09:56:41.521006 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z9wt9" Jan 22 09:56:41 crc kubenswrapper[4811]: I0122 09:56:41.555841 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z9wt9" Jan 22 09:56:41 crc kubenswrapper[4811]: I0122 09:56:41.874667 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z9wt9" Jan 22 09:56:41 crc kubenswrapper[4811]: I0122 09:56:41.941868 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z9wt9"] Jan 22 09:56:43 crc kubenswrapper[4811]: I0122 09:56:43.795384 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-z9wt9" podUID="21fc2b12-c93d-438d-b3ce-8e3d57eea840" containerName="registry-server" containerID="cri-o://1ab91fbbd2a30fb5a8700e603257bae43b4a4f1dd25eb62554fd97a4581ea6a5" gracePeriod=2 Jan 22 09:56:44 crc kubenswrapper[4811]: I0122 09:56:44.293841 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z9wt9" Jan 22 09:56:44 crc kubenswrapper[4811]: I0122 09:56:44.381156 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21fc2b12-c93d-438d-b3ce-8e3d57eea840-catalog-content\") pod \"21fc2b12-c93d-438d-b3ce-8e3d57eea840\" (UID: \"21fc2b12-c93d-438d-b3ce-8e3d57eea840\") " Jan 22 09:56:44 crc kubenswrapper[4811]: I0122 09:56:44.381206 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21fc2b12-c93d-438d-b3ce-8e3d57eea840-utilities\") pod \"21fc2b12-c93d-438d-b3ce-8e3d57eea840\" (UID: \"21fc2b12-c93d-438d-b3ce-8e3d57eea840\") " Jan 22 09:56:44 crc kubenswrapper[4811]: I0122 09:56:44.381306 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5s49w\" (UniqueName: \"kubernetes.io/projected/21fc2b12-c93d-438d-b3ce-8e3d57eea840-kube-api-access-5s49w\") pod \"21fc2b12-c93d-438d-b3ce-8e3d57eea840\" (UID: \"21fc2b12-c93d-438d-b3ce-8e3d57eea840\") " Jan 22 09:56:44 crc kubenswrapper[4811]: I0122 09:56:44.382789 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21fc2b12-c93d-438d-b3ce-8e3d57eea840-utilities" (OuterVolumeSpecName: "utilities") pod "21fc2b12-c93d-438d-b3ce-8e3d57eea840" (UID: "21fc2b12-c93d-438d-b3ce-8e3d57eea840"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:56:44 crc kubenswrapper[4811]: I0122 09:56:44.385637 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21fc2b12-c93d-438d-b3ce-8e3d57eea840-kube-api-access-5s49w" (OuterVolumeSpecName: "kube-api-access-5s49w") pod "21fc2b12-c93d-438d-b3ce-8e3d57eea840" (UID: "21fc2b12-c93d-438d-b3ce-8e3d57eea840"). InnerVolumeSpecName "kube-api-access-5s49w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 09:56:44 crc kubenswrapper[4811]: I0122 09:56:44.417878 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21fc2b12-c93d-438d-b3ce-8e3d57eea840-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "21fc2b12-c93d-438d-b3ce-8e3d57eea840" (UID: "21fc2b12-c93d-438d-b3ce-8e3d57eea840"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 09:56:44 crc kubenswrapper[4811]: I0122 09:56:44.483760 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21fc2b12-c93d-438d-b3ce-8e3d57eea840-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 09:56:44 crc kubenswrapper[4811]: I0122 09:56:44.483788 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21fc2b12-c93d-438d-b3ce-8e3d57eea840-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 09:56:44 crc kubenswrapper[4811]: I0122 09:56:44.483797 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5s49w\" (UniqueName: \"kubernetes.io/projected/21fc2b12-c93d-438d-b3ce-8e3d57eea840-kube-api-access-5s49w\") on node \"crc\" DevicePath \"\"" Jan 22 09:56:44 crc kubenswrapper[4811]: I0122 09:56:44.803515 4811 generic.go:334] "Generic (PLEG): container finished" podID="21fc2b12-c93d-438d-b3ce-8e3d57eea840" containerID="1ab91fbbd2a30fb5a8700e603257bae43b4a4f1dd25eb62554fd97a4581ea6a5" exitCode=0 Jan 22 09:56:44 crc kubenswrapper[4811]: I0122 09:56:44.803566 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z9wt9" Jan 22 09:56:44 crc kubenswrapper[4811]: I0122 09:56:44.803559 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z9wt9" event={"ID":"21fc2b12-c93d-438d-b3ce-8e3d57eea840","Type":"ContainerDied","Data":"1ab91fbbd2a30fb5a8700e603257bae43b4a4f1dd25eb62554fd97a4581ea6a5"} Jan 22 09:56:44 crc kubenswrapper[4811]: I0122 09:56:44.803714 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z9wt9" event={"ID":"21fc2b12-c93d-438d-b3ce-8e3d57eea840","Type":"ContainerDied","Data":"c28cc28eb75281bbe37f861319228eff76309abfc901a85dc18140c43d00e73e"} Jan 22 09:56:44 crc kubenswrapper[4811]: I0122 09:56:44.803735 4811 scope.go:117] "RemoveContainer" containerID="1ab91fbbd2a30fb5a8700e603257bae43b4a4f1dd25eb62554fd97a4581ea6a5" Jan 22 09:56:44 crc kubenswrapper[4811]: I0122 09:56:44.819512 4811 scope.go:117] "RemoveContainer" containerID="ddd2f283ebeb6b8105fcd50ca8615d8e9f2a2355910c3097c16d24e684176639" Jan 22 09:56:44 crc kubenswrapper[4811]: I0122 09:56:44.832722 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z9wt9"] Jan 22 09:56:44 crc kubenswrapper[4811]: I0122 09:56:44.840142 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-z9wt9"] Jan 22 09:56:44 crc kubenswrapper[4811]: I0122 09:56:44.848303 4811 scope.go:117] "RemoveContainer" containerID="d18da41eda57f57f27eaece597645c808ba958bc11f89c119c639863acab6999" Jan 22 09:56:44 crc kubenswrapper[4811]: I0122 09:56:44.873248 4811 scope.go:117] "RemoveContainer" containerID="1ab91fbbd2a30fb5a8700e603257bae43b4a4f1dd25eb62554fd97a4581ea6a5" Jan 22 09:56:44 crc kubenswrapper[4811]: E0122 09:56:44.873641 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ab91fbbd2a30fb5a8700e603257bae43b4a4f1dd25eb62554fd97a4581ea6a5\": container with ID starting with 1ab91fbbd2a30fb5a8700e603257bae43b4a4f1dd25eb62554fd97a4581ea6a5 not found: ID does not exist" containerID="1ab91fbbd2a30fb5a8700e603257bae43b4a4f1dd25eb62554fd97a4581ea6a5" Jan 22 09:56:44 crc kubenswrapper[4811]: I0122 09:56:44.873682 
4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ab91fbbd2a30fb5a8700e603257bae43b4a4f1dd25eb62554fd97a4581ea6a5"} err="failed to get container status \"1ab91fbbd2a30fb5a8700e603257bae43b4a4f1dd25eb62554fd97a4581ea6a5\": rpc error: code = NotFound desc = could not find container \"1ab91fbbd2a30fb5a8700e603257bae43b4a4f1dd25eb62554fd97a4581ea6a5\": container with ID starting with 1ab91fbbd2a30fb5a8700e603257bae43b4a4f1dd25eb62554fd97a4581ea6a5 not found: ID does not exist" Jan 22 09:56:44 crc kubenswrapper[4811]: I0122 09:56:44.873708 4811 scope.go:117] "RemoveContainer" containerID="ddd2f283ebeb6b8105fcd50ca8615d8e9f2a2355910c3097c16d24e684176639" Jan 22 09:56:44 crc kubenswrapper[4811]: E0122 09:56:44.873997 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddd2f283ebeb6b8105fcd50ca8615d8e9f2a2355910c3097c16d24e684176639\": container with ID starting with ddd2f283ebeb6b8105fcd50ca8615d8e9f2a2355910c3097c16d24e684176639 not found: ID does not exist" containerID="ddd2f283ebeb6b8105fcd50ca8615d8e9f2a2355910c3097c16d24e684176639" Jan 22 09:56:44 crc kubenswrapper[4811]: I0122 09:56:44.874018 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddd2f283ebeb6b8105fcd50ca8615d8e9f2a2355910c3097c16d24e684176639"} err="failed to get container status \"ddd2f283ebeb6b8105fcd50ca8615d8e9f2a2355910c3097c16d24e684176639\": rpc error: code = NotFound desc = could not find container \"ddd2f283ebeb6b8105fcd50ca8615d8e9f2a2355910c3097c16d24e684176639\": container with ID starting with ddd2f283ebeb6b8105fcd50ca8615d8e9f2a2355910c3097c16d24e684176639 not found: ID does not exist" Jan 22 09:56:44 crc kubenswrapper[4811]: I0122 09:56:44.874033 4811 scope.go:117] "RemoveContainer" containerID="d18da41eda57f57f27eaece597645c808ba958bc11f89c119c639863acab6999" Jan 22 09:56:44 crc kubenswrapper[4811]: E0122 09:56:44.874264 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d18da41eda57f57f27eaece597645c808ba958bc11f89c119c639863acab6999\": container with ID starting with d18da41eda57f57f27eaece597645c808ba958bc11f89c119c639863acab6999 not found: ID does not exist" containerID="d18da41eda57f57f27eaece597645c808ba958bc11f89c119c639863acab6999" Jan 22 09:56:44 crc kubenswrapper[4811]: I0122 09:56:44.874283 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d18da41eda57f57f27eaece597645c808ba958bc11f89c119c639863acab6999"} err="failed to get container status \"d18da41eda57f57f27eaece597645c808ba958bc11f89c119c639863acab6999\": rpc error: code = NotFound desc = could not find container \"d18da41eda57f57f27eaece597645c808ba958bc11f89c119c639863acab6999\": container with ID starting with d18da41eda57f57f27eaece597645c808ba958bc11f89c119c639863acab6999 not found: ID does not exist" Jan 22 09:56:45 crc kubenswrapper[4811]: I0122 09:56:45.999857 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21fc2b12-c93d-438d-b3ce-8e3d57eea840" path="/var/lib/kubelet/pods/21fc2b12-c93d-438d-b3ce-8e3d57eea840/volumes" Jan 22 09:56:54 crc kubenswrapper[4811]: I0122 09:56:54.992069 4811 scope.go:117] "RemoveContainer" containerID="27d99ddba75ace4f5b92aada119cc94fd0acd38a4e4c54101859524605bfeb4c" Jan 22 09:56:54 crc kubenswrapper[4811]: E0122 09:56:54.992754 4811 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:57:09 crc kubenswrapper[4811]: I0122 09:57:09.992830 4811 scope.go:117] "RemoveContainer" containerID="27d99ddba75ace4f5b92aada119cc94fd0acd38a4e4c54101859524605bfeb4c" Jan 22 09:57:09 crc kubenswrapper[4811]: E0122 09:57:09.993367 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:57:21 crc kubenswrapper[4811]: I0122 09:57:21.992299 4811 scope.go:117] "RemoveContainer" containerID="27d99ddba75ace4f5b92aada119cc94fd0acd38a4e4c54101859524605bfeb4c" Jan 22 09:57:21 crc kubenswrapper[4811]: E0122 09:57:21.993050 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:57:36 crc kubenswrapper[4811]: I0122 09:57:36.991707 4811 scope.go:117] "RemoveContainer" containerID="27d99ddba75ace4f5b92aada119cc94fd0acd38a4e4c54101859524605bfeb4c" Jan 22 09:57:36 crc kubenswrapper[4811]: E0122 09:57:36.992320 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:57:48 crc kubenswrapper[4811]: I0122 09:57:48.992682 4811 scope.go:117] "RemoveContainer" containerID="27d99ddba75ace4f5b92aada119cc94fd0acd38a4e4c54101859524605bfeb4c" Jan 22 09:57:48 crc kubenswrapper[4811]: E0122 09:57:48.993220 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:58:01 crc kubenswrapper[4811]: I0122 09:58:01.992810 4811 scope.go:117] "RemoveContainer" containerID="27d99ddba75ace4f5b92aada119cc94fd0acd38a4e4c54101859524605bfeb4c" Jan 22 09:58:01 crc kubenswrapper[4811]: E0122 09:58:01.993576 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:58:16 crc kubenswrapper[4811]: I0122 09:58:16.992107 4811 scope.go:117] "RemoveContainer" containerID="27d99ddba75ace4f5b92aada119cc94fd0acd38a4e4c54101859524605bfeb4c" Jan 22 09:58:16 crc kubenswrapper[4811]: E0122 09:58:16.992695 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:58:25 crc kubenswrapper[4811]: I0122 09:58:25.035034 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-mp7qp"] Jan 22 09:58:25 crc kubenswrapper[4811]: I0122 09:58:25.040867 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-mp7qp"] Jan 22 09:58:25 crc kubenswrapper[4811]: I0122 09:58:25.999795 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60012e62-6a64-4ac5-8d6b-9fc52699dad4" path="/var/lib/kubelet/pods/60012e62-6a64-4ac5-8d6b-9fc52699dad4/volumes" Jan 22 09:58:26 crc kubenswrapper[4811]: I0122 09:58:26.024381 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-66c8-account-create-update-2ts7l"] Jan 22 09:58:26 crc kubenswrapper[4811]: I0122 09:58:26.032942 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-66c8-account-create-update-2ts7l"] Jan 22 09:58:28 crc kubenswrapper[4811]: I0122 09:58:27.999912 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10c22353-10d6-4de5-b438-369773462111" path="/var/lib/kubelet/pods/10c22353-10d6-4de5-b438-369773462111/volumes" Jan 22 09:58:29 crc kubenswrapper[4811]: I0122 09:58:29.992727 4811 scope.go:117] "RemoveContainer" containerID="27d99ddba75ace4f5b92aada119cc94fd0acd38a4e4c54101859524605bfeb4c" Jan 22 09:58:29 crc kubenswrapper[4811]: E0122 09:58:29.993507 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 09:58:42 crc kubenswrapper[4811]: I0122 09:58:42.992766 4811 scope.go:117] "RemoveContainer" containerID="27d99ddba75ace4f5b92aada119cc94fd0acd38a4e4c54101859524605bfeb4c" Jan 22 09:58:43 crc kubenswrapper[4811]: I0122 09:58:43.627201 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" event={"ID":"84068a6b-e189-419b-87f5-f31428f6eafe","Type":"ContainerStarted","Data":"3a06585913d6ba918f6c52903bb7850c2377d5698106e38de260a0e7343ce390"} Jan 22 09:58:52 crc kubenswrapper[4811]: I0122 09:58:52.030931 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-wlfrs"] Jan 22 09:58:52 crc kubenswrapper[4811]: I0122 09:58:52.036038 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/manila-db-sync-wlfrs"] Jan 22 09:58:54 crc kubenswrapper[4811]: I0122 09:58:54.000362 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7356c23-bed8-4798-b71e-9e29994fd1e6" path="/var/lib/kubelet/pods/c7356c23-bed8-4798-b71e-9e29994fd1e6/volumes" Jan 22 09:59:01 crc kubenswrapper[4811]: I0122 09:59:01.040882 4811 scope.go:117] "RemoveContainer" containerID="14042f42d138a0c5f60aca5a157b5572db5aa79559a0d2953032432700f4041b" Jan 22 09:59:01 crc kubenswrapper[4811]: I0122 09:59:01.061316 4811 scope.go:117] "RemoveContainer" containerID="987d46eb8e54394ac1e23c730227f52ea9b87b2b86c5502229530ee870f34b55" Jan 22 09:59:01 crc kubenswrapper[4811]: I0122 09:59:01.101872 4811 scope.go:117] "RemoveContainer" containerID="e0fed3130963339053b6efb6a796a5e782059b3227ebde0d00e159a45f82532c" Jan 22 10:00:00 crc kubenswrapper[4811]: I0122 10:00:00.141492 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484600-58b7t"] Jan 22 10:00:00 crc kubenswrapper[4811]: E0122 10:00:00.142390 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21fc2b12-c93d-438d-b3ce-8e3d57eea840" containerName="registry-server" Jan 22 10:00:00 crc kubenswrapper[4811]: I0122 10:00:00.142405 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="21fc2b12-c93d-438d-b3ce-8e3d57eea840" containerName="registry-server" Jan 22 10:00:00 crc kubenswrapper[4811]: E0122 10:00:00.142419 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21fc2b12-c93d-438d-b3ce-8e3d57eea840" containerName="extract-content" Jan 22 10:00:00 crc kubenswrapper[4811]: I0122 10:00:00.142425 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="21fc2b12-c93d-438d-b3ce-8e3d57eea840" containerName="extract-content" Jan 22 10:00:00 crc kubenswrapper[4811]: E0122 10:00:00.142444 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21fc2b12-c93d-438d-b3ce-8e3d57eea840" containerName="extract-utilities" Jan 22 10:00:00 crc kubenswrapper[4811]: I0122 10:00:00.142450 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="21fc2b12-c93d-438d-b3ce-8e3d57eea840" containerName="extract-utilities" Jan 22 10:00:00 crc kubenswrapper[4811]: I0122 10:00:00.142610 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="21fc2b12-c93d-438d-b3ce-8e3d57eea840" containerName="registry-server" Jan 22 10:00:00 crc kubenswrapper[4811]: I0122 10:00:00.143230 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-58b7t" Jan 22 10:00:00 crc kubenswrapper[4811]: I0122 10:00:00.147252 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 22 10:00:00 crc kubenswrapper[4811]: I0122 10:00:00.147567 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 22 10:00:00 crc kubenswrapper[4811]: I0122 10:00:00.150081 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484600-58b7t"] Jan 22 10:00:00 crc kubenswrapper[4811]: I0122 10:00:00.275146 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dab37180-34fe-47d1-b2fe-b1935fa9b043-config-volume\") pod \"collect-profiles-29484600-58b7t\" (UID: \"dab37180-34fe-47d1-b2fe-b1935fa9b043\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-58b7t" Jan 22 10:00:00 crc kubenswrapper[4811]: I0122 10:00:00.275477 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm85l\" (UniqueName: \"kubernetes.io/projected/dab37180-34fe-47d1-b2fe-b1935fa9b043-kube-api-access-nm85l\") pod \"collect-profiles-29484600-58b7t\" (UID: \"dab37180-34fe-47d1-b2fe-b1935fa9b043\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-58b7t" Jan 22 10:00:00 crc kubenswrapper[4811]: I0122 10:00:00.275518 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dab37180-34fe-47d1-b2fe-b1935fa9b043-secret-volume\") pod \"collect-profiles-29484600-58b7t\" (UID: \"dab37180-34fe-47d1-b2fe-b1935fa9b043\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-58b7t" Jan 22 10:00:00 crc kubenswrapper[4811]: I0122 10:00:00.377460 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dab37180-34fe-47d1-b2fe-b1935fa9b043-config-volume\") pod \"collect-profiles-29484600-58b7t\" (UID: \"dab37180-34fe-47d1-b2fe-b1935fa9b043\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-58b7t" Jan 22 10:00:00 crc kubenswrapper[4811]: I0122 10:00:00.377662 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm85l\" (UniqueName: \"kubernetes.io/projected/dab37180-34fe-47d1-b2fe-b1935fa9b043-kube-api-access-nm85l\") pod \"collect-profiles-29484600-58b7t\" (UID: \"dab37180-34fe-47d1-b2fe-b1935fa9b043\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-58b7t" Jan 22 10:00:00 crc kubenswrapper[4811]: I0122 10:00:00.377688 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dab37180-34fe-47d1-b2fe-b1935fa9b043-secret-volume\") pod \"collect-profiles-29484600-58b7t\" (UID: \"dab37180-34fe-47d1-b2fe-b1935fa9b043\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-58b7t" Jan 22 10:00:00 crc kubenswrapper[4811]: I0122 10:00:00.378251 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dab37180-34fe-47d1-b2fe-b1935fa9b043-config-volume\") pod 
\"collect-profiles-29484600-58b7t\" (UID: \"dab37180-34fe-47d1-b2fe-b1935fa9b043\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-58b7t" Jan 22 10:00:00 crc kubenswrapper[4811]: I0122 10:00:00.383540 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dab37180-34fe-47d1-b2fe-b1935fa9b043-secret-volume\") pod \"collect-profiles-29484600-58b7t\" (UID: \"dab37180-34fe-47d1-b2fe-b1935fa9b043\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-58b7t" Jan 22 10:00:00 crc kubenswrapper[4811]: I0122 10:00:00.390907 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm85l\" (UniqueName: \"kubernetes.io/projected/dab37180-34fe-47d1-b2fe-b1935fa9b043-kube-api-access-nm85l\") pod \"collect-profiles-29484600-58b7t\" (UID: \"dab37180-34fe-47d1-b2fe-b1935fa9b043\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-58b7t" Jan 22 10:00:00 crc kubenswrapper[4811]: I0122 10:00:00.461300 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-58b7t" Jan 22 10:00:00 crc kubenswrapper[4811]: I0122 10:00:00.876561 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484600-58b7t"] Jan 22 10:00:01 crc kubenswrapper[4811]: I0122 10:00:01.188203 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-58b7t" event={"ID":"dab37180-34fe-47d1-b2fe-b1935fa9b043","Type":"ContainerStarted","Data":"4b964d277e9727aff2782e3c5c58ca71137cceac54d834b04c1d8d08afa229f9"} Jan 22 10:00:01 crc kubenswrapper[4811]: I0122 10:00:01.188419 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-58b7t" event={"ID":"dab37180-34fe-47d1-b2fe-b1935fa9b043","Type":"ContainerStarted","Data":"8fc5955ba0d8cf1f544e270c7456c9afca10fe76853ddfb98d90d2fbd93de12f"} Jan 22 10:00:01 crc kubenswrapper[4811]: I0122 10:00:01.203696 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-58b7t" podStartSLOduration=1.20368041 podStartE2EDuration="1.20368041s" podCreationTimestamp="2026-01-22 10:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:00:01.201435166 +0000 UTC m=+3245.523622289" watchObservedRunningTime="2026-01-22 10:00:01.20368041 +0000 UTC m=+3245.525867533" Jan 22 10:00:02 crc kubenswrapper[4811]: I0122 10:00:02.195734 4811 generic.go:334] "Generic (PLEG): container finished" podID="dab37180-34fe-47d1-b2fe-b1935fa9b043" containerID="4b964d277e9727aff2782e3c5c58ca71137cceac54d834b04c1d8d08afa229f9" exitCode=0 Jan 22 10:00:02 crc kubenswrapper[4811]: I0122 10:00:02.195784 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-58b7t" event={"ID":"dab37180-34fe-47d1-b2fe-b1935fa9b043","Type":"ContainerDied","Data":"4b964d277e9727aff2782e3c5c58ca71137cceac54d834b04c1d8d08afa229f9"} Jan 22 10:00:03 crc kubenswrapper[4811]: I0122 10:00:03.599670 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-58b7t" Jan 22 10:00:03 crc kubenswrapper[4811]: I0122 10:00:03.639917 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dab37180-34fe-47d1-b2fe-b1935fa9b043-secret-volume\") pod \"dab37180-34fe-47d1-b2fe-b1935fa9b043\" (UID: \"dab37180-34fe-47d1-b2fe-b1935fa9b043\") " Jan 22 10:00:03 crc kubenswrapper[4811]: I0122 10:00:03.639975 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dab37180-34fe-47d1-b2fe-b1935fa9b043-config-volume\") pod \"dab37180-34fe-47d1-b2fe-b1935fa9b043\" (UID: \"dab37180-34fe-47d1-b2fe-b1935fa9b043\") " Jan 22 10:00:03 crc kubenswrapper[4811]: I0122 10:00:03.640101 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nm85l\" (UniqueName: \"kubernetes.io/projected/dab37180-34fe-47d1-b2fe-b1935fa9b043-kube-api-access-nm85l\") pod \"dab37180-34fe-47d1-b2fe-b1935fa9b043\" (UID: \"dab37180-34fe-47d1-b2fe-b1935fa9b043\") " Jan 22 10:00:03 crc kubenswrapper[4811]: I0122 10:00:03.641906 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dab37180-34fe-47d1-b2fe-b1935fa9b043-config-volume" (OuterVolumeSpecName: "config-volume") pod "dab37180-34fe-47d1-b2fe-b1935fa9b043" (UID: "dab37180-34fe-47d1-b2fe-b1935fa9b043"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:00:03 crc kubenswrapper[4811]: I0122 10:00:03.647901 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dab37180-34fe-47d1-b2fe-b1935fa9b043-kube-api-access-nm85l" (OuterVolumeSpecName: "kube-api-access-nm85l") pod "dab37180-34fe-47d1-b2fe-b1935fa9b043" (UID: "dab37180-34fe-47d1-b2fe-b1935fa9b043"). InnerVolumeSpecName "kube-api-access-nm85l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:00:03 crc kubenswrapper[4811]: I0122 10:00:03.647904 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dab37180-34fe-47d1-b2fe-b1935fa9b043-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "dab37180-34fe-47d1-b2fe-b1935fa9b043" (UID: "dab37180-34fe-47d1-b2fe-b1935fa9b043"). InnerVolumeSpecName "secret-volume". 
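
The entries above are the kubelet volume manager at work: for the collect-profiles job pod, each volume goes VerifyControllerAttachedVolume, then MountVolume, then MountVolume.SetUp succeeded on admission, and once the container exits the same volumes come back as UnmountVolume.TearDown followed by "Volume detached" (below). A minimal sketch of that desired-state/actual-state reconciliation pattern follows; the names volumeKey and reconcile are hypothetical simplifications, not the kubelet's real data structures.

    // Toy reconciler: mount volumes that are desired but absent,
    // tear down volumes that are present but no longer desired.
    package main

    import "fmt"

    type volumeKey struct {
        podUID string
        name   string
    }

    func reconcile(desired, actual map[volumeKey]bool) {
        // compare "operationExecutor.MountVolume started" in the log
        for v := range desired {
            if !actual[v] {
                fmt.Printf("MountVolume.SetUp for %q of pod %s\n", v.name, v.podUID)
                actual[v] = true
            }
        }
        // compare "operationExecutor.UnmountVolume started" after pod deletion
        for v := range actual {
            if !desired[v] {
                fmt.Printf("UnmountVolume.TearDown for %q of pod %s\n", v.name, v.podUID)
                delete(actual, v)
            }
        }
    }

    func main() {
        actual := map[volumeKey]bool{}
        pod := "dab37180-34fe-47d1-b2fe-b1935fa9b043"
        desired := map[volumeKey]bool{
            {pod, "config-volume"}:         true,
            {pod, "secret-volume"}:         true,
            {pod, "kube-api-access-nm85l"}: true,
        }
        reconcile(desired, actual)                 // pod admitted: three mounts
        reconcile(map[volumeKey]bool{}, actual)    // pod deleted: three teardowns
    }
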
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:00:03 crc kubenswrapper[4811]: I0122 10:00:03.743263 4811 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dab37180-34fe-47d1-b2fe-b1935fa9b043-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 10:00:03 crc kubenswrapper[4811]: I0122 10:00:03.743290 4811 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dab37180-34fe-47d1-b2fe-b1935fa9b043-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 10:00:03 crc kubenswrapper[4811]: I0122 10:00:03.743300 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nm85l\" (UniqueName: \"kubernetes.io/projected/dab37180-34fe-47d1-b2fe-b1935fa9b043-kube-api-access-nm85l\") on node \"crc\" DevicePath \"\"" Jan 22 10:00:04 crc kubenswrapper[4811]: I0122 10:00:04.211997 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-58b7t" event={"ID":"dab37180-34fe-47d1-b2fe-b1935fa9b043","Type":"ContainerDied","Data":"8fc5955ba0d8cf1f544e270c7456c9afca10fe76853ddfb98d90d2fbd93de12f"} Jan 22 10:00:04 crc kubenswrapper[4811]: I0122 10:00:04.212049 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fc5955ba0d8cf1f544e270c7456c9afca10fe76853ddfb98d90d2fbd93de12f" Jan 22 10:00:04 crc kubenswrapper[4811]: I0122 10:00:04.212047 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484600-58b7t" Jan 22 10:00:04 crc kubenswrapper[4811]: I0122 10:00:04.262002 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484555-ggvph"] Jan 22 10:00:04 crc kubenswrapper[4811]: I0122 10:00:04.269061 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484555-ggvph"] Jan 22 10:00:06 crc kubenswrapper[4811]: I0122 10:00:06.003437 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a17cbe8b-d81b-4480-ae34-a9467583c105" path="/var/lib/kubelet/pods/a17cbe8b-d81b-4480-ae34-a9467583c105/volumes" Jan 22 10:00:36 crc kubenswrapper[4811]: I0122 10:00:36.416753 4811 generic.go:334] "Generic (PLEG): container finished" podID="eb23b893-6bb1-4d84-bb05-09c701024b37" containerID="0c3ba8e7a7044b4324c75965e2c29fb239bd9fc702eb8bfea55d006a5513ee9c" exitCode=0 Jan 22 10:00:36 crc kubenswrapper[4811]: I0122 10:00:36.416850 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"eb23b893-6bb1-4d84-bb05-09c701024b37","Type":"ContainerDied","Data":"0c3ba8e7a7044b4324c75965e2c29fb239bd9fc702eb8bfea55d006a5513ee9c"} Jan 22 10:00:37 crc kubenswrapper[4811]: I0122 10:00:37.816940 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 22 10:00:37 crc kubenswrapper[4811]: I0122 10:00:37.841639 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/eb23b893-6bb1-4d84-bb05-09c701024b37-test-operator-ephemeral-temporary\") pod \"eb23b893-6bb1-4d84-bb05-09c701024b37\" (UID: \"eb23b893-6bb1-4d84-bb05-09c701024b37\") " Jan 22 10:00:37 crc kubenswrapper[4811]: I0122 10:00:37.841683 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"eb23b893-6bb1-4d84-bb05-09c701024b37\" (UID: \"eb23b893-6bb1-4d84-bb05-09c701024b37\") " Jan 22 10:00:37 crc kubenswrapper[4811]: I0122 10:00:37.841836 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eb23b893-6bb1-4d84-bb05-09c701024b37-config-data\") pod \"eb23b893-6bb1-4d84-bb05-09c701024b37\" (UID: \"eb23b893-6bb1-4d84-bb05-09c701024b37\") " Jan 22 10:00:37 crc kubenswrapper[4811]: I0122 10:00:37.841851 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/eb23b893-6bb1-4d84-bb05-09c701024b37-openstack-config\") pod \"eb23b893-6bb1-4d84-bb05-09c701024b37\" (UID: \"eb23b893-6bb1-4d84-bb05-09c701024b37\") " Jan 22 10:00:37 crc kubenswrapper[4811]: I0122 10:00:37.841893 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb23b893-6bb1-4d84-bb05-09c701024b37-ssh-key\") pod \"eb23b893-6bb1-4d84-bb05-09c701024b37\" (UID: \"eb23b893-6bb1-4d84-bb05-09c701024b37\") " Jan 22 10:00:37 crc kubenswrapper[4811]: I0122 10:00:37.841958 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/eb23b893-6bb1-4d84-bb05-09c701024b37-ca-certs\") pod \"eb23b893-6bb1-4d84-bb05-09c701024b37\" (UID: \"eb23b893-6bb1-4d84-bb05-09c701024b37\") " Jan 22 10:00:37 crc kubenswrapper[4811]: I0122 10:00:37.841985 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9pp7\" (UniqueName: \"kubernetes.io/projected/eb23b893-6bb1-4d84-bb05-09c701024b37-kube-api-access-w9pp7\") pod \"eb23b893-6bb1-4d84-bb05-09c701024b37\" (UID: \"eb23b893-6bb1-4d84-bb05-09c701024b37\") " Jan 22 10:00:37 crc kubenswrapper[4811]: I0122 10:00:37.842054 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/eb23b893-6bb1-4d84-bb05-09c701024b37-openstack-config-secret\") pod \"eb23b893-6bb1-4d84-bb05-09c701024b37\" (UID: \"eb23b893-6bb1-4d84-bb05-09c701024b37\") " Jan 22 10:00:37 crc kubenswrapper[4811]: I0122 10:00:37.842105 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/eb23b893-6bb1-4d84-bb05-09c701024b37-test-operator-ephemeral-workdir\") pod \"eb23b893-6bb1-4d84-bb05-09c701024b37\" (UID: \"eb23b893-6bb1-4d84-bb05-09c701024b37\") " Jan 22 10:00:37 crc kubenswrapper[4811]: I0122 10:00:37.843637 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb23b893-6bb1-4d84-bb05-09c701024b37-config-data" (OuterVolumeSpecName: "config-data") pod 
"eb23b893-6bb1-4d84-bb05-09c701024b37" (UID: "eb23b893-6bb1-4d84-bb05-09c701024b37"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:00:37 crc kubenswrapper[4811]: I0122 10:00:37.843847 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb23b893-6bb1-4d84-bb05-09c701024b37-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "eb23b893-6bb1-4d84-bb05-09c701024b37" (UID: "eb23b893-6bb1-4d84-bb05-09c701024b37"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:00:37 crc kubenswrapper[4811]: I0122 10:00:37.845350 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb23b893-6bb1-4d84-bb05-09c701024b37-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "eb23b893-6bb1-4d84-bb05-09c701024b37" (UID: "eb23b893-6bb1-4d84-bb05-09c701024b37"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:00:37 crc kubenswrapper[4811]: I0122 10:00:37.852535 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "test-operator-logs") pod "eb23b893-6bb1-4d84-bb05-09c701024b37" (UID: "eb23b893-6bb1-4d84-bb05-09c701024b37"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 22 10:00:37 crc kubenswrapper[4811]: I0122 10:00:37.852604 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb23b893-6bb1-4d84-bb05-09c701024b37-kube-api-access-w9pp7" (OuterVolumeSpecName: "kube-api-access-w9pp7") pod "eb23b893-6bb1-4d84-bb05-09c701024b37" (UID: "eb23b893-6bb1-4d84-bb05-09c701024b37"). InnerVolumeSpecName "kube-api-access-w9pp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:00:37 crc kubenswrapper[4811]: I0122 10:00:37.865819 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb23b893-6bb1-4d84-bb05-09c701024b37-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "eb23b893-6bb1-4d84-bb05-09c701024b37" (UID: "eb23b893-6bb1-4d84-bb05-09c701024b37"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:00:37 crc kubenswrapper[4811]: I0122 10:00:37.865944 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb23b893-6bb1-4d84-bb05-09c701024b37-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "eb23b893-6bb1-4d84-bb05-09c701024b37" (UID: "eb23b893-6bb1-4d84-bb05-09c701024b37"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:00:37 crc kubenswrapper[4811]: I0122 10:00:37.869679 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb23b893-6bb1-4d84-bb05-09c701024b37-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "eb23b893-6bb1-4d84-bb05-09c701024b37" (UID: "eb23b893-6bb1-4d84-bb05-09c701024b37"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:00:37 crc kubenswrapper[4811]: I0122 10:00:37.883991 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb23b893-6bb1-4d84-bb05-09c701024b37-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "eb23b893-6bb1-4d84-bb05-09c701024b37" (UID: "eb23b893-6bb1-4d84-bb05-09c701024b37"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:00:37 crc kubenswrapper[4811]: I0122 10:00:37.944368 4811 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb23b893-6bb1-4d84-bb05-09c701024b37-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 22 10:00:37 crc kubenswrapper[4811]: I0122 10:00:37.944392 4811 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/eb23b893-6bb1-4d84-bb05-09c701024b37-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 22 10:00:37 crc kubenswrapper[4811]: I0122 10:00:37.944402 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9pp7\" (UniqueName: \"kubernetes.io/projected/eb23b893-6bb1-4d84-bb05-09c701024b37-kube-api-access-w9pp7\") on node \"crc\" DevicePath \"\"" Jan 22 10:00:37 crc kubenswrapper[4811]: I0122 10:00:37.944412 4811 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/eb23b893-6bb1-4d84-bb05-09c701024b37-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 22 10:00:37 crc kubenswrapper[4811]: I0122 10:00:37.944422 4811 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/eb23b893-6bb1-4d84-bb05-09c701024b37-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 22 10:00:37 crc kubenswrapper[4811]: I0122 10:00:37.944432 4811 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/eb23b893-6bb1-4d84-bb05-09c701024b37-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 22 10:00:37 crc kubenswrapper[4811]: I0122 10:00:37.944803 4811 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 22 10:00:37 crc kubenswrapper[4811]: I0122 10:00:37.944821 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eb23b893-6bb1-4d84-bb05-09c701024b37-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:00:37 crc kubenswrapper[4811]: I0122 10:00:37.944830 4811 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/eb23b893-6bb1-4d84-bb05-09c701024b37-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:00:37 crc kubenswrapper[4811]: I0122 10:00:37.959512 4811 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 22 10:00:38 crc kubenswrapper[4811]: I0122 10:00:38.047308 4811 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 22 10:00:38 crc kubenswrapper[4811]: I0122 10:00:38.431281 4811 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/tempest-tests-tempest" event={"ID":"eb23b893-6bb1-4d84-bb05-09c701024b37","Type":"ContainerDied","Data":"4c2308e19de2de4edb99c50f9a6625850f46bda0f125ad43cd11b1c8fa07fba2"} Jan 22 10:00:38 crc kubenswrapper[4811]: I0122 10:00:38.431322 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c2308e19de2de4edb99c50f9a6625850f46bda0f125ad43cd11b1c8fa07fba2" Jan 22 10:00:38 crc kubenswrapper[4811]: I0122 10:00:38.431324 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 22 10:00:45 crc kubenswrapper[4811]: I0122 10:00:45.439698 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 22 10:00:45 crc kubenswrapper[4811]: E0122 10:00:45.440608 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dab37180-34fe-47d1-b2fe-b1935fa9b043" containerName="collect-profiles" Jan 22 10:00:45 crc kubenswrapper[4811]: I0122 10:00:45.440640 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="dab37180-34fe-47d1-b2fe-b1935fa9b043" containerName="collect-profiles" Jan 22 10:00:45 crc kubenswrapper[4811]: E0122 10:00:45.440662 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb23b893-6bb1-4d84-bb05-09c701024b37" containerName="tempest-tests-tempest-tests-runner" Jan 22 10:00:45 crc kubenswrapper[4811]: I0122 10:00:45.440669 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb23b893-6bb1-4d84-bb05-09c701024b37" containerName="tempest-tests-tempest-tests-runner" Jan 22 10:00:45 crc kubenswrapper[4811]: I0122 10:00:45.440873 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb23b893-6bb1-4d84-bb05-09c701024b37" containerName="tempest-tests-tempest-tests-runner" Jan 22 10:00:45 crc kubenswrapper[4811]: I0122 10:00:45.440889 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="dab37180-34fe-47d1-b2fe-b1935fa9b043" containerName="collect-profiles" Jan 22 10:00:45 crc kubenswrapper[4811]: I0122 10:00:45.441513 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 22 10:00:45 crc kubenswrapper[4811]: I0122 10:00:45.443213 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-h6mm9" Jan 22 10:00:45 crc kubenswrapper[4811]: I0122 10:00:45.447982 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 22 10:00:45 crc kubenswrapper[4811]: I0122 10:00:45.588068 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j7f9\" (UniqueName: \"kubernetes.io/projected/37849dcb-3091-4827-b55c-97806fb09eef-kube-api-access-4j7f9\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"37849dcb-3091-4827-b55c-97806fb09eef\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 22 10:00:45 crc kubenswrapper[4811]: I0122 10:00:45.588141 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"37849dcb-3091-4827-b55c-97806fb09eef\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 22 10:00:45 crc kubenswrapper[4811]: I0122 10:00:45.693747 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4j7f9\" (UniqueName: \"kubernetes.io/projected/37849dcb-3091-4827-b55c-97806fb09eef-kube-api-access-4j7f9\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"37849dcb-3091-4827-b55c-97806fb09eef\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 22 10:00:45 crc kubenswrapper[4811]: I0122 10:00:45.693822 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"37849dcb-3091-4827-b55c-97806fb09eef\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 22 10:00:45 crc kubenswrapper[4811]: I0122 10:00:45.694175 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"37849dcb-3091-4827-b55c-97806fb09eef\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 22 10:00:45 crc kubenswrapper[4811]: I0122 10:00:45.710239 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j7f9\" (UniqueName: \"kubernetes.io/projected/37849dcb-3091-4827-b55c-97806fb09eef-kube-api-access-4j7f9\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"37849dcb-3091-4827-b55c-97806fb09eef\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 22 10:00:45 crc kubenswrapper[4811]: I0122 10:00:45.712998 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"37849dcb-3091-4827-b55c-97806fb09eef\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 22 10:00:45 crc 
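
The ContainerStarted/ContainerDied pairs threaded through this log come from the pod lifecycle event generator (the "Generic (PLEG)" entries), which periodically relists containers from the runtime, diffs the result against the previous relist, and feeds the resulting events to the sync loop. A toy relist under those assumptions, with hypothetical state strings rather than real runtime types:

    package main

    import "fmt"

    // relist diffs the container states seen last pass against the states
    // seen now and emits pod lifecycle events, as the PLEG entries do.
    func relist(old, cur map[string]string) {
        for id, state := range cur {
            if old[id] != state {
                switch state {
                case "running":
                    fmt.Printf("event ContainerStarted %s\n", id)
                case "exited":
                    fmt.Printf("event ContainerDied %s\n", id)
                }
            }
        }
    }

    func main() {
        old := map[string]string{}
        cur := map[string]string{"4b964d27": "running"}
        relist(old, cur) // ContainerStarted, as at 10:00:01 above
        old, cur = cur, map[string]string{"4b964d27": "exited"}
        relist(old, cur) // ContainerDied; the exit code is reported separately
    }
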
kubenswrapper[4811]: I0122 10:00:45.762737 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 22 10:00:46 crc kubenswrapper[4811]: I0122 10:00:46.129280 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 22 10:00:46 crc kubenswrapper[4811]: I0122 10:00:46.132713 4811 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 10:00:46 crc kubenswrapper[4811]: I0122 10:00:46.485546 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"37849dcb-3091-4827-b55c-97806fb09eef","Type":"ContainerStarted","Data":"ccbdcc38fd130ccdaccb3c80196a39c506900d8085e03adbb4b8591a74120a2c"} Jan 22 10:00:47 crc kubenswrapper[4811]: I0122 10:00:47.492520 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"37849dcb-3091-4827-b55c-97806fb09eef","Type":"ContainerStarted","Data":"130364d7e133e6acd584cbf95d4de0325a05af67d1ad497b57797577da0bd6e8"} Jan 22 10:00:47 crc kubenswrapper[4811]: I0122 10:00:47.507725 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.6122787779999999 podStartE2EDuration="2.50771371s" podCreationTimestamp="2026-01-22 10:00:45 +0000 UTC" firstStartedPulling="2026-01-22 10:00:46.132416547 +0000 UTC m=+3290.454603670" lastFinishedPulling="2026-01-22 10:00:47.027851478 +0000 UTC m=+3291.350038602" observedRunningTime="2026-01-22 10:00:47.502417623 +0000 UTC m=+3291.824604746" watchObservedRunningTime="2026-01-22 10:00:47.50771371 +0000 UTC m=+3291.829900833" Jan 22 10:01:00 crc kubenswrapper[4811]: I0122 10:01:00.131655 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29484601-6fs6b"] Jan 22 10:01:00 crc kubenswrapper[4811]: I0122 10:01:00.133084 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29484601-6fs6b" Jan 22 10:01:00 crc kubenswrapper[4811]: I0122 10:01:00.140910 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29484601-6fs6b"] Jan 22 10:01:00 crc kubenswrapper[4811]: I0122 10:01:00.240704 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f7ffe266-6663-41bb-a7f3-3e7807cd62e4-fernet-keys\") pod \"keystone-cron-29484601-6fs6b\" (UID: \"f7ffe266-6663-41bb-a7f3-3e7807cd62e4\") " pod="openstack/keystone-cron-29484601-6fs6b" Jan 22 10:01:00 crc kubenswrapper[4811]: I0122 10:01:00.240820 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7ffe266-6663-41bb-a7f3-3e7807cd62e4-combined-ca-bundle\") pod \"keystone-cron-29484601-6fs6b\" (UID: \"f7ffe266-6663-41bb-a7f3-3e7807cd62e4\") " pod="openstack/keystone-cron-29484601-6fs6b" Jan 22 10:01:00 crc kubenswrapper[4811]: I0122 10:01:00.241048 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7ffe266-6663-41bb-a7f3-3e7807cd62e4-config-data\") pod \"keystone-cron-29484601-6fs6b\" (UID: \"f7ffe266-6663-41bb-a7f3-3e7807cd62e4\") " pod="openstack/keystone-cron-29484601-6fs6b" Jan 22 10:01:00 crc kubenswrapper[4811]: I0122 10:01:00.241122 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz7np\" (UniqueName: \"kubernetes.io/projected/f7ffe266-6663-41bb-a7f3-3e7807cd62e4-kube-api-access-vz7np\") pod \"keystone-cron-29484601-6fs6b\" (UID: \"f7ffe266-6663-41bb-a7f3-3e7807cd62e4\") " pod="openstack/keystone-cron-29484601-6fs6b" Jan 22 10:01:00 crc kubenswrapper[4811]: I0122 10:01:00.342346 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7ffe266-6663-41bb-a7f3-3e7807cd62e4-config-data\") pod \"keystone-cron-29484601-6fs6b\" (UID: \"f7ffe266-6663-41bb-a7f3-3e7807cd62e4\") " pod="openstack/keystone-cron-29484601-6fs6b" Jan 22 10:01:00 crc kubenswrapper[4811]: I0122 10:01:00.342393 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz7np\" (UniqueName: \"kubernetes.io/projected/f7ffe266-6663-41bb-a7f3-3e7807cd62e4-kube-api-access-vz7np\") pod \"keystone-cron-29484601-6fs6b\" (UID: \"f7ffe266-6663-41bb-a7f3-3e7807cd62e4\") " pod="openstack/keystone-cron-29484601-6fs6b" Jan 22 10:01:00 crc kubenswrapper[4811]: I0122 10:01:00.342480 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f7ffe266-6663-41bb-a7f3-3e7807cd62e4-fernet-keys\") pod \"keystone-cron-29484601-6fs6b\" (UID: \"f7ffe266-6663-41bb-a7f3-3e7807cd62e4\") " pod="openstack/keystone-cron-29484601-6fs6b" Jan 22 10:01:00 crc kubenswrapper[4811]: I0122 10:01:00.342584 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7ffe266-6663-41bb-a7f3-3e7807cd62e4-combined-ca-bundle\") pod \"keystone-cron-29484601-6fs6b\" (UID: \"f7ffe266-6663-41bb-a7f3-3e7807cd62e4\") " pod="openstack/keystone-cron-29484601-6fs6b" Jan 22 10:01:00 crc kubenswrapper[4811]: I0122 10:01:00.347060 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7ffe266-6663-41bb-a7f3-3e7807cd62e4-combined-ca-bundle\") pod \"keystone-cron-29484601-6fs6b\" (UID: \"f7ffe266-6663-41bb-a7f3-3e7807cd62e4\") " pod="openstack/keystone-cron-29484601-6fs6b" Jan 22 10:01:00 crc kubenswrapper[4811]: I0122 10:01:00.348257 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7ffe266-6663-41bb-a7f3-3e7807cd62e4-config-data\") pod \"keystone-cron-29484601-6fs6b\" (UID: \"f7ffe266-6663-41bb-a7f3-3e7807cd62e4\") " pod="openstack/keystone-cron-29484601-6fs6b" Jan 22 10:01:00 crc kubenswrapper[4811]: I0122 10:01:00.349731 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f7ffe266-6663-41bb-a7f3-3e7807cd62e4-fernet-keys\") pod \"keystone-cron-29484601-6fs6b\" (UID: \"f7ffe266-6663-41bb-a7f3-3e7807cd62e4\") " pod="openstack/keystone-cron-29484601-6fs6b" Jan 22 10:01:00 crc kubenswrapper[4811]: I0122 10:01:00.358471 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz7np\" (UniqueName: \"kubernetes.io/projected/f7ffe266-6663-41bb-a7f3-3e7807cd62e4-kube-api-access-vz7np\") pod \"keystone-cron-29484601-6fs6b\" (UID: \"f7ffe266-6663-41bb-a7f3-3e7807cd62e4\") " pod="openstack/keystone-cron-29484601-6fs6b" Jan 22 10:01:00 crc kubenswrapper[4811]: I0122 10:01:00.447783 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29484601-6fs6b" Jan 22 10:01:00 crc kubenswrapper[4811]: I0122 10:01:00.823194 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29484601-6fs6b"] Jan 22 10:01:01 crc kubenswrapper[4811]: I0122 10:01:01.194200 4811 scope.go:117] "RemoveContainer" containerID="cb012ca0648dd3e27c8806e2210b67d3c6a4841a15fd17948d36677964180120" Jan 22 10:01:01 crc kubenswrapper[4811]: I0122 10:01:01.584222 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29484601-6fs6b" event={"ID":"f7ffe266-6663-41bb-a7f3-3e7807cd62e4","Type":"ContainerStarted","Data":"ced7032b3324dc927dfb2c3e48c54a9fdf833c716c94dc4879aa7a978bd5172f"} Jan 22 10:01:01 crc kubenswrapper[4811]: I0122 10:01:01.584413 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29484601-6fs6b" event={"ID":"f7ffe266-6663-41bb-a7f3-3e7807cd62e4","Type":"ContainerStarted","Data":"f8e201423cb7beed1f9da271e8870168dc90ce8cfc42fba5e6fbef519762e041"} Jan 22 10:01:01 crc kubenswrapper[4811]: I0122 10:01:01.598960 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29484601-6fs6b" podStartSLOduration=1.598932285 podStartE2EDuration="1.598932285s" podCreationTimestamp="2026-01-22 10:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:01:01.596578637 +0000 UTC m=+3305.918765760" watchObservedRunningTime="2026-01-22 10:01:01.598932285 +0000 UTC m=+3305.921119408" Jan 22 10:01:03 crc kubenswrapper[4811]: I0122 10:01:03.600228 4811 generic.go:334] "Generic (PLEG): container finished" podID="f7ffe266-6663-41bb-a7f3-3e7807cd62e4" containerID="ced7032b3324dc927dfb2c3e48c54a9fdf833c716c94dc4879aa7a978bd5172f" exitCode=0 Jan 22 10:01:03 crc kubenswrapper[4811]: I0122 10:01:03.600385 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29484601-6fs6b" 
event={"ID":"f7ffe266-6663-41bb-a7f3-3e7807cd62e4","Type":"ContainerDied","Data":"ced7032b3324dc927dfb2c3e48c54a9fdf833c716c94dc4879aa7a978bd5172f"} Jan 22 10:01:04 crc kubenswrapper[4811]: I0122 10:01:04.965971 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29484601-6fs6b" Jan 22 10:01:05 crc kubenswrapper[4811]: I0122 10:01:05.024660 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f7ffe266-6663-41bb-a7f3-3e7807cd62e4-fernet-keys\") pod \"f7ffe266-6663-41bb-a7f3-3e7807cd62e4\" (UID: \"f7ffe266-6663-41bb-a7f3-3e7807cd62e4\") " Jan 22 10:01:05 crc kubenswrapper[4811]: I0122 10:01:05.024742 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7ffe266-6663-41bb-a7f3-3e7807cd62e4-combined-ca-bundle\") pod \"f7ffe266-6663-41bb-a7f3-3e7807cd62e4\" (UID: \"f7ffe266-6663-41bb-a7f3-3e7807cd62e4\") " Jan 22 10:01:05 crc kubenswrapper[4811]: I0122 10:01:05.024897 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vz7np\" (UniqueName: \"kubernetes.io/projected/f7ffe266-6663-41bb-a7f3-3e7807cd62e4-kube-api-access-vz7np\") pod \"f7ffe266-6663-41bb-a7f3-3e7807cd62e4\" (UID: \"f7ffe266-6663-41bb-a7f3-3e7807cd62e4\") " Jan 22 10:01:05 crc kubenswrapper[4811]: I0122 10:01:05.024963 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7ffe266-6663-41bb-a7f3-3e7807cd62e4-config-data\") pod \"f7ffe266-6663-41bb-a7f3-3e7807cd62e4\" (UID: \"f7ffe266-6663-41bb-a7f3-3e7807cd62e4\") " Jan 22 10:01:05 crc kubenswrapper[4811]: I0122 10:01:05.029725 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7ffe266-6663-41bb-a7f3-3e7807cd62e4-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f7ffe266-6663-41bb-a7f3-3e7807cd62e4" (UID: "f7ffe266-6663-41bb-a7f3-3e7807cd62e4"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:01:05 crc kubenswrapper[4811]: I0122 10:01:05.030231 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7ffe266-6663-41bb-a7f3-3e7807cd62e4-kube-api-access-vz7np" (OuterVolumeSpecName: "kube-api-access-vz7np") pod "f7ffe266-6663-41bb-a7f3-3e7807cd62e4" (UID: "f7ffe266-6663-41bb-a7f3-3e7807cd62e4"). InnerVolumeSpecName "kube-api-access-vz7np". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:01:05 crc kubenswrapper[4811]: I0122 10:01:05.045128 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7ffe266-6663-41bb-a7f3-3e7807cd62e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f7ffe266-6663-41bb-a7f3-3e7807cd62e4" (UID: "f7ffe266-6663-41bb-a7f3-3e7807cd62e4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:01:05 crc kubenswrapper[4811]: I0122 10:01:05.061341 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7ffe266-6663-41bb-a7f3-3e7807cd62e4-config-data" (OuterVolumeSpecName: "config-data") pod "f7ffe266-6663-41bb-a7f3-3e7807cd62e4" (UID: "f7ffe266-6663-41bb-a7f3-3e7807cd62e4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:01:05 crc kubenswrapper[4811]: I0122 10:01:05.126751 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vz7np\" (UniqueName: \"kubernetes.io/projected/f7ffe266-6663-41bb-a7f3-3e7807cd62e4-kube-api-access-vz7np\") on node \"crc\" DevicePath \"\"" Jan 22 10:01:05 crc kubenswrapper[4811]: I0122 10:01:05.126778 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7ffe266-6663-41bb-a7f3-3e7807cd62e4-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:01:05 crc kubenswrapper[4811]: I0122 10:01:05.126788 4811 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f7ffe266-6663-41bb-a7f3-3e7807cd62e4-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 22 10:01:05 crc kubenswrapper[4811]: I0122 10:01:05.126798 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7ffe266-6663-41bb-a7f3-3e7807cd62e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:01:05 crc kubenswrapper[4811]: I0122 10:01:05.502140 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:01:05 crc kubenswrapper[4811]: I0122 10:01:05.502194 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:01:05 crc kubenswrapper[4811]: I0122 10:01:05.618309 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29484601-6fs6b" event={"ID":"f7ffe266-6663-41bb-a7f3-3e7807cd62e4","Type":"ContainerDied","Data":"f8e201423cb7beed1f9da271e8870168dc90ce8cfc42fba5e6fbef519762e041"} Jan 22 10:01:05 crc kubenswrapper[4811]: I0122 10:01:05.618352 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8e201423cb7beed1f9da271e8870168dc90ce8cfc42fba5e6fbef519762e041" Jan 22 10:01:05 crc kubenswrapper[4811]: I0122 10:01:05.618429 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29484601-6fs6b" Jan 22 10:01:07 crc kubenswrapper[4811]: I0122 10:01:07.162535 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-t86ls/must-gather-mmvbb"] Jan 22 10:01:07 crc kubenswrapper[4811]: E0122 10:01:07.163100 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7ffe266-6663-41bb-a7f3-3e7807cd62e4" containerName="keystone-cron" Jan 22 10:01:07 crc kubenswrapper[4811]: I0122 10:01:07.163114 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7ffe266-6663-41bb-a7f3-3e7807cd62e4" containerName="keystone-cron" Jan 22 10:01:07 crc kubenswrapper[4811]: I0122 10:01:07.163328 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7ffe266-6663-41bb-a7f3-3e7807cd62e4" containerName="keystone-cron" Jan 22 10:01:07 crc kubenswrapper[4811]: I0122 10:01:07.164210 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t86ls/must-gather-mmvbb" Jan 22 10:01:07 crc kubenswrapper[4811]: I0122 10:01:07.169200 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-t86ls"/"kube-root-ca.crt" Jan 22 10:01:07 crc kubenswrapper[4811]: I0122 10:01:07.169389 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-t86ls"/"openshift-service-ca.crt" Jan 22 10:01:07 crc kubenswrapper[4811]: I0122 10:01:07.207796 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-t86ls/must-gather-mmvbb"] Jan 22 10:01:07 crc kubenswrapper[4811]: I0122 10:01:07.265794 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fc7c94c9-4bc5-4fff-b46a-60facf41a2df-must-gather-output\") pod \"must-gather-mmvbb\" (UID: \"fc7c94c9-4bc5-4fff-b46a-60facf41a2df\") " pod="openshift-must-gather-t86ls/must-gather-mmvbb" Jan 22 10:01:07 crc kubenswrapper[4811]: I0122 10:01:07.265920 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlptg\" (UniqueName: \"kubernetes.io/projected/fc7c94c9-4bc5-4fff-b46a-60facf41a2df-kube-api-access-rlptg\") pod \"must-gather-mmvbb\" (UID: \"fc7c94c9-4bc5-4fff-b46a-60facf41a2df\") " pod="openshift-must-gather-t86ls/must-gather-mmvbb" Jan 22 10:01:07 crc kubenswrapper[4811]: I0122 10:01:07.367807 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fc7c94c9-4bc5-4fff-b46a-60facf41a2df-must-gather-output\") pod \"must-gather-mmvbb\" (UID: \"fc7c94c9-4bc5-4fff-b46a-60facf41a2df\") " pod="openshift-must-gather-t86ls/must-gather-mmvbb" Jan 22 10:01:07 crc kubenswrapper[4811]: I0122 10:01:07.367922 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlptg\" (UniqueName: \"kubernetes.io/projected/fc7c94c9-4bc5-4fff-b46a-60facf41a2df-kube-api-access-rlptg\") pod \"must-gather-mmvbb\" (UID: \"fc7c94c9-4bc5-4fff-b46a-60facf41a2df\") " pod="openshift-must-gather-t86ls/must-gather-mmvbb" Jan 22 10:01:07 crc kubenswrapper[4811]: I0122 10:01:07.368591 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fc7c94c9-4bc5-4fff-b46a-60facf41a2df-must-gather-output\") pod \"must-gather-mmvbb\" (UID: \"fc7c94c9-4bc5-4fff-b46a-60facf41a2df\") " pod="openshift-must-gather-t86ls/must-gather-mmvbb" Jan 22 10:01:07 crc kubenswrapper[4811]: I0122 10:01:07.384452 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlptg\" (UniqueName: \"kubernetes.io/projected/fc7c94c9-4bc5-4fff-b46a-60facf41a2df-kube-api-access-rlptg\") pod \"must-gather-mmvbb\" (UID: \"fc7c94c9-4bc5-4fff-b46a-60facf41a2df\") " pod="openshift-must-gather-t86ls/must-gather-mmvbb" Jan 22 10:01:07 crc kubenswrapper[4811]: I0122 10:01:07.477555 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t86ls/must-gather-mmvbb" Jan 22 10:01:07 crc kubenswrapper[4811]: I0122 10:01:07.874925 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-t86ls/must-gather-mmvbb"] Jan 22 10:01:08 crc kubenswrapper[4811]: I0122 10:01:08.649986 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t86ls/must-gather-mmvbb" event={"ID":"fc7c94c9-4bc5-4fff-b46a-60facf41a2df","Type":"ContainerStarted","Data":"f33eebe9360aa24a644217d835f748b46a1ef5c0c1bf8faf0a4df4c73b9ed0cd"} Jan 22 10:01:14 crc kubenswrapper[4811]: I0122 10:01:14.709273 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t86ls/must-gather-mmvbb" event={"ID":"fc7c94c9-4bc5-4fff-b46a-60facf41a2df","Type":"ContainerStarted","Data":"20c0f56a69ac7107a0e0116f3709ad1db2dee911429e9c3f7b3462ebc0dba599"} Jan 22 10:01:14 crc kubenswrapper[4811]: I0122 10:01:14.710121 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t86ls/must-gather-mmvbb" event={"ID":"fc7c94c9-4bc5-4fff-b46a-60facf41a2df","Type":"ContainerStarted","Data":"8e0f244cca1ec15f381eaed5ae8c13ad119d433e9e9abcebc181c9ba0c09f3f6"} Jan 22 10:01:14 crc kubenswrapper[4811]: I0122 10:01:14.730468 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-t86ls/must-gather-mmvbb" podStartSLOduration=1.986813014 podStartE2EDuration="7.730453127s" podCreationTimestamp="2026-01-22 10:01:07 +0000 UTC" firstStartedPulling="2026-01-22 10:01:07.88215698 +0000 UTC m=+3312.204344104" lastFinishedPulling="2026-01-22 10:01:13.625797095 +0000 UTC m=+3317.947984217" observedRunningTime="2026-01-22 10:01:14.728684472 +0000 UTC m=+3319.050871595" watchObservedRunningTime="2026-01-22 10:01:14.730453127 +0000 UTC m=+3319.052640250" Jan 22 10:01:18 crc kubenswrapper[4811]: I0122 10:01:18.052067 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-t86ls/crc-debug-8qdj8"] Jan 22 10:01:18 crc kubenswrapper[4811]: I0122 10:01:18.053406 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t86ls/crc-debug-8qdj8" Jan 22 10:01:18 crc kubenswrapper[4811]: I0122 10:01:18.058190 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-t86ls"/"default-dockercfg-77w4v" Jan 22 10:01:18 crc kubenswrapper[4811]: I0122 10:01:18.208401 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8c1580dd-d98c-4e53-bba2-96343641ade7-host\") pod \"crc-debug-8qdj8\" (UID: \"8c1580dd-d98c-4e53-bba2-96343641ade7\") " pod="openshift-must-gather-t86ls/crc-debug-8qdj8" Jan 22 10:01:18 crc kubenswrapper[4811]: I0122 10:01:18.208688 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz7zt\" (UniqueName: \"kubernetes.io/projected/8c1580dd-d98c-4e53-bba2-96343641ade7-kube-api-access-lz7zt\") pod \"crc-debug-8qdj8\" (UID: \"8c1580dd-d98c-4e53-bba2-96343641ade7\") " pod="openshift-must-gather-t86ls/crc-debug-8qdj8" Jan 22 10:01:18 crc kubenswrapper[4811]: I0122 10:01:18.310532 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8c1580dd-d98c-4e53-bba2-96343641ade7-host\") pod \"crc-debug-8qdj8\" (UID: \"8c1580dd-d98c-4e53-bba2-96343641ade7\") " pod="openshift-must-gather-t86ls/crc-debug-8qdj8" Jan 22 10:01:18 crc kubenswrapper[4811]: I0122 10:01:18.310618 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lz7zt\" (UniqueName: \"kubernetes.io/projected/8c1580dd-d98c-4e53-bba2-96343641ade7-kube-api-access-lz7zt\") pod \"crc-debug-8qdj8\" (UID: \"8c1580dd-d98c-4e53-bba2-96343641ade7\") " pod="openshift-must-gather-t86ls/crc-debug-8qdj8" Jan 22 10:01:18 crc kubenswrapper[4811]: I0122 10:01:18.310691 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8c1580dd-d98c-4e53-bba2-96343641ade7-host\") pod \"crc-debug-8qdj8\" (UID: \"8c1580dd-d98c-4e53-bba2-96343641ade7\") " pod="openshift-must-gather-t86ls/crc-debug-8qdj8" Jan 22 10:01:18 crc kubenswrapper[4811]: I0122 10:01:18.331759 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz7zt\" (UniqueName: \"kubernetes.io/projected/8c1580dd-d98c-4e53-bba2-96343641ade7-kube-api-access-lz7zt\") pod \"crc-debug-8qdj8\" (UID: \"8c1580dd-d98c-4e53-bba2-96343641ade7\") " pod="openshift-must-gather-t86ls/crc-debug-8qdj8" Jan 22 10:01:18 crc kubenswrapper[4811]: I0122 10:01:18.367144 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t86ls/crc-debug-8qdj8" Jan 22 10:01:18 crc kubenswrapper[4811]: I0122 10:01:18.740919 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t86ls/crc-debug-8qdj8" event={"ID":"8c1580dd-d98c-4e53-bba2-96343641ade7","Type":"ContainerStarted","Data":"943f3ee5a7612e8e213f22b10be577c7031abd21ea671449b2775ba9cbe72405"} Jan 22 10:01:20 crc kubenswrapper[4811]: I0122 10:01:20.401686 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-76dcc8f4f4-dv85h_63cdd7a1-0295-4009-8c18-b3b3e24770b3/barbican-api-log/0.log" Jan 22 10:01:20 crc kubenswrapper[4811]: I0122 10:01:20.415792 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-76dcc8f4f4-dv85h_63cdd7a1-0295-4009-8c18-b3b3e24770b3/barbican-api/0.log" Jan 22 10:01:20 crc kubenswrapper[4811]: I0122 10:01:20.457694 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5f966bcc4-4n44q_312cf490-6d44-416e-8238-06667bf8efee/barbican-keystone-listener-log/0.log" Jan 22 10:01:20 crc kubenswrapper[4811]: I0122 10:01:20.464268 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5f966bcc4-4n44q_312cf490-6d44-416e-8238-06667bf8efee/barbican-keystone-listener/0.log" Jan 22 10:01:20 crc kubenswrapper[4811]: I0122 10:01:20.476259 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7c8cfbdccc-5kmj8_220f6baa-23c9-4cf8-b91f-5245734fc341/barbican-worker-log/0.log" Jan 22 10:01:20 crc kubenswrapper[4811]: I0122 10:01:20.479893 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7c8cfbdccc-5kmj8_220f6baa-23c9-4cf8-b91f-5245734fc341/barbican-worker/0.log" Jan 22 10:01:20 crc kubenswrapper[4811]: I0122 10:01:20.518173 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-mffrf_c799f725-4c74-42ab-9217-06e6c0310194/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:01:20 crc kubenswrapper[4811]: I0122 10:01:20.538496 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_a18f46ac-7cea-410a-ac94-959fc43823bc/ceilometer-central-agent/0.log" Jan 22 10:01:20 crc kubenswrapper[4811]: I0122 10:01:20.551818 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_a18f46ac-7cea-410a-ac94-959fc43823bc/ceilometer-notification-agent/0.log" Jan 22 10:01:20 crc kubenswrapper[4811]: I0122 10:01:20.556209 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_a18f46ac-7cea-410a-ac94-959fc43823bc/sg-core/0.log" Jan 22 10:01:20 crc kubenswrapper[4811]: I0122 10:01:20.565109 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_a18f46ac-7cea-410a-ac94-959fc43823bc/proxy-httpd/0.log" Jan 22 10:01:20 crc kubenswrapper[4811]: I0122 10:01:20.574485 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-q2556_c61e1813-0266-4558-9a3d-5895a166d67f/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:01:20 crc kubenswrapper[4811]: I0122 10:01:20.585616 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mk9fn_2994821b-e7da-4315-a718-9cc885e55fa4/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:01:20 crc 
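
The recurring Liveness probe failures for machine-config-daemon (10:01:05 and 10:01:35 above) are plain HTTP GETs against 127.0.0.1:8798/health, where a transport error such as "connection refused" counts as a failure just like a bad status code. A minimal equivalent of such an HTTP probe, as a sketch rather than the kubelet's actual prober code:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // probe issues one HTTP liveness check: any transport error or a
    // status outside the 2xx/3xx range is reported as a failure.
    func probe(url string) error {
        client := &http.Client{Timeout: time.Second}
        resp, err := client.Get(url)
        if err != nil {
            return err // e.g. dial tcp 127.0.0.1:8798: connect: connection refused
        }
        defer resp.Body.Close()
        if resp.StatusCode < 200 || resp.StatusCode >= 400 {
            return fmt.Errorf("unexpected status %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        if err := probe("http://127.0.0.1:8798/health"); err != nil {
            fmt.Println("Probe failed:", err)
        }
    }
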
kubenswrapper[4811]: I0122 10:01:20.607644 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_8191719a-7bd8-44c9-9a24-65074b9bfa10/cinder-api-log/0.log" Jan 22 10:01:20 crc kubenswrapper[4811]: I0122 10:01:20.646785 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_8191719a-7bd8-44c9-9a24-65074b9bfa10/cinder-api/0.log" Jan 22 10:01:20 crc kubenswrapper[4811]: I0122 10:01:20.806310 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_747abd8a-15a3-42fe-b8bd-a74f2e03c00c/cinder-backup/0.log" Jan 22 10:01:20 crc kubenswrapper[4811]: I0122 10:01:20.816338 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_747abd8a-15a3-42fe-b8bd-a74f2e03c00c/probe/0.log" Jan 22 10:01:20 crc kubenswrapper[4811]: I0122 10:01:20.876969 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_82d89bb4-3738-46d0-8268-d14e298c13c8/cinder-scheduler/0.log" Jan 22 10:01:20 crc kubenswrapper[4811]: I0122 10:01:20.901011 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_82d89bb4-3738-46d0-8268-d14e298c13c8/probe/0.log" Jan 22 10:01:20 crc kubenswrapper[4811]: I0122 10:01:20.968914 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_e3a4d222-4fbd-4c90-9bb0-d787f257d7c0/cinder-volume/0.log" Jan 22 10:01:20 crc kubenswrapper[4811]: I0122 10:01:20.986133 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_e3a4d222-4fbd-4c90-9bb0-d787f257d7c0/probe/0.log" Jan 22 10:01:21 crc kubenswrapper[4811]: I0122 10:01:21.012028 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-8jwn7_6251494a-e332-4222-b95c-80c7205dc4ce/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:01:21 crc kubenswrapper[4811]: I0122 10:01:21.043218 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-hnjt7_4baf2862-a8ca-4314-a70a-67e087e5c897/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:01:23 crc kubenswrapper[4811]: I0122 10:01:23.010756 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-595b86679f-c5h4r_4943dd74-260e-4c75-af13-64455ecded8f/dnsmasq-dns/0.log" Jan 22 10:01:23 crc kubenswrapper[4811]: I0122 10:01:23.017249 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-595b86679f-c5h4r_4943dd74-260e-4c75-af13-64455ecded8f/init/0.log" Jan 22 10:01:23 crc kubenswrapper[4811]: I0122 10:01:23.027508 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_4494328e-eef4-42b6-993f-654585a11db3/glance-log/0.log" Jan 22 10:01:23 crc kubenswrapper[4811]: I0122 10:01:23.038680 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_4494328e-eef4-42b6-993f-654585a11db3/glance-httpd/0.log" Jan 22 10:01:23 crc kubenswrapper[4811]: I0122 10:01:23.048531 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_4ff328b8-5b8b-4a66-85e9-b083d86f2811/glance-log/0.log" Jan 22 10:01:23 crc kubenswrapper[4811]: I0122 10:01:23.068915 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_4ff328b8-5b8b-4a66-85e9-b083d86f2811/glance-httpd/0.log" Jan 22 10:01:23 crc kubenswrapper[4811]: I0122 10:01:23.165760 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-f75b46fc8-4l2b8_9c9eef01-268a-4d3c-b3c3-f30cd80694e0/horizon-log/0.log" Jan 22 10:01:23 crc kubenswrapper[4811]: I0122 10:01:23.248723 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-f75b46fc8-4l2b8_9c9eef01-268a-4d3c-b3c3-f30cd80694e0/horizon/0.log" Jan 22 10:01:23 crc kubenswrapper[4811]: I0122 10:01:23.287744 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-5m56f_d2d7b6d9-f9f5-4548-a6c3-01248c076247/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:01:23 crc kubenswrapper[4811]: I0122 10:01:23.316048 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-kctxk_1d8c1630-ca31-4da8-a66d-54d6649558d4/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:01:23 crc kubenswrapper[4811]: I0122 10:01:23.503831 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5bf9c84c75-rbgml_42068723-76f8-4a1a-8210-f0a70f10897a/keystone-api/0.log" Jan 22 10:01:23 crc kubenswrapper[4811]: I0122 10:01:23.515513 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29484601-6fs6b_f7ffe266-6663-41bb-a7f3-3e7807cd62e4/keystone-cron/0.log" Jan 22 10:01:23 crc kubenswrapper[4811]: I0122 10:01:23.525992 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_8b61b8e9-fda4-46d3-a494-f3e804e7f4d4/kube-state-metrics/0.log" Jan 22 10:01:23 crc kubenswrapper[4811]: I0122 10:01:23.579462 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-hklz6_0f4688b1-29e2-475b-80c0-63afbc3b1afa/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:01:23 crc kubenswrapper[4811]: I0122 10:01:23.592373 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_8c25067a-ed34-4109-b8f4-d82320dedb05/manila-api-log/0.log" Jan 22 10:01:23 crc kubenswrapper[4811]: I0122 10:01:23.672008 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_8c25067a-ed34-4109-b8f4-d82320dedb05/manila-api/0.log" Jan 22 10:01:23 crc kubenswrapper[4811]: I0122 10:01:23.765970 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_4acbe231-7f63-499d-8813-f7a18c9d70fa/manila-scheduler/0.log" Jan 22 10:01:23 crc kubenswrapper[4811]: I0122 10:01:23.772038 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_4acbe231-7f63-499d-8813-f7a18c9d70fa/probe/0.log" Jan 22 10:01:23 crc kubenswrapper[4811]: I0122 10:01:23.834431 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_f3e1d3e9-c984-442b-8a77-28b88a934ebc/manila-share/0.log" Jan 22 10:01:23 crc kubenswrapper[4811]: I0122 10:01:23.842465 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_f3e1d3e9-c984-442b-8a77-28b88a934ebc/probe/0.log" Jan 22 10:01:33 crc kubenswrapper[4811]: I0122 10:01:33.886794 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t86ls/crc-debug-8qdj8" 
event={"ID":"8c1580dd-d98c-4e53-bba2-96343641ade7","Type":"ContainerStarted","Data":"e42f63fbf7642873cc98fbb33deb39f5376832add747d37647c4367d9abc4d6d"} Jan 22 10:01:33 crc kubenswrapper[4811]: I0122 10:01:33.903763 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-t86ls/crc-debug-8qdj8" podStartSLOduration=0.939730548 podStartE2EDuration="15.903750504s" podCreationTimestamp="2026-01-22 10:01:18 +0000 UTC" firstStartedPulling="2026-01-22 10:01:18.404255366 +0000 UTC m=+3322.726442489" lastFinishedPulling="2026-01-22 10:01:33.368275322 +0000 UTC m=+3337.690462445" observedRunningTime="2026-01-22 10:01:33.897430115 +0000 UTC m=+3338.219617239" watchObservedRunningTime="2026-01-22 10:01:33.903750504 +0000 UTC m=+3338.225937628" Jan 22 10:01:35 crc kubenswrapper[4811]: I0122 10:01:35.501059 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:01:35 crc kubenswrapper[4811]: I0122 10:01:35.501379 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:01:42 crc kubenswrapper[4811]: I0122 10:01:42.053985 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-rl9k7_346ed4cd-2bb8-470d-a275-6c297994fb3f/controller/0.log" Jan 22 10:01:42 crc kubenswrapper[4811]: I0122 10:01:42.059650 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-rl9k7_346ed4cd-2bb8-470d-a275-6c297994fb3f/kube-rbac-proxy/0.log" Jan 22 10:01:42 crc kubenswrapper[4811]: I0122 10:01:42.096410 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/controller/0.log" Jan 22 10:01:43 crc kubenswrapper[4811]: I0122 10:01:43.885871 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/frr/0.log" Jan 22 10:01:43 crc kubenswrapper[4811]: I0122 10:01:43.895221 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/reloader/0.log" Jan 22 10:01:43 crc kubenswrapper[4811]: I0122 10:01:43.902907 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/frr-metrics/0.log" Jan 22 10:01:43 crc kubenswrapper[4811]: I0122 10:01:43.912371 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/kube-rbac-proxy/0.log" Jan 22 10:01:43 crc kubenswrapper[4811]: I0122 10:01:43.923083 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/kube-rbac-proxy-frr/0.log" Jan 22 10:01:43 crc kubenswrapper[4811]: I0122 10:01:43.929514 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/cp-frr-files/0.log" Jan 22 10:01:43 crc kubenswrapper[4811]: I0122 10:01:43.936325 4811 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/cp-reloader/0.log" Jan 22 10:01:43 crc kubenswrapper[4811]: I0122 10:01:43.946118 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/cp-metrics/0.log" Jan 22 10:01:43 crc kubenswrapper[4811]: I0122 10:01:43.955197 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-zfggh_2f6eae9c-374b-4ac3-b5d7-04267fe9bf73/frr-k8s-webhook-server/0.log" Jan 22 10:01:43 crc kubenswrapper[4811]: I0122 10:01:43.980998 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-64bd67c58d-k58sk_13617657-7245-4223-9b20-03a56378edaf/manager/0.log" Jan 22 10:01:43 crc kubenswrapper[4811]: I0122 10:01:43.997130 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5bc67d6df-ckhh4_1008e895-ec53-4fdd-9423-bbb4d249a6b9/webhook-server/0.log" Jan 22 10:01:44 crc kubenswrapper[4811]: I0122 10:01:44.436805 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-k88w9_faa36c07-3c7a-4b4a-a04e-58b43a178890/speaker/0.log" Jan 22 10:01:44 crc kubenswrapper[4811]: I0122 10:01:44.444241 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-k88w9_faa36c07-3c7a-4b4a-a04e-58b43a178890/kube-rbac-proxy/0.log" Jan 22 10:01:46 crc kubenswrapper[4811]: I0122 10:01:46.722166 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_864b7037-2c34-48b6-b75d-38110d9816dc/memcached/0.log" Jan 22 10:01:46 crc kubenswrapper[4811]: I0122 10:01:46.811599 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-864bc8bfcf-nvbzn_3cd66dd0-aadf-46e8-b0d0-48d0563efa06/neutron-api/0.log" Jan 22 10:01:46 crc kubenswrapper[4811]: I0122 10:01:46.854977 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-864bc8bfcf-nvbzn_3cd66dd0-aadf-46e8-b0d0-48d0563efa06/neutron-httpd/0.log" Jan 22 10:01:46 crc kubenswrapper[4811]: I0122 10:01:46.877741 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d_3c16de7c-e366-4871-b006-d63a565fb17e/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:01:47 crc kubenswrapper[4811]: I0122 10:01:47.047223 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_dacc4f5b-746c-48bb-9d16-a30e402aa461/nova-api-log/0.log" Jan 22 10:01:47 crc kubenswrapper[4811]: I0122 10:01:47.315050 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_dacc4f5b-746c-48bb-9d16-a30e402aa461/nova-api-api/0.log" Jan 22 10:01:47 crc kubenswrapper[4811]: I0122 10:01:47.419527 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_bb52292c-6627-4dbb-a981-f97886db6f7a/nova-cell0-conductor-conductor/0.log" Jan 22 10:01:47 crc kubenswrapper[4811]: I0122 10:01:47.475597 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_f96a2c3b-9e7d-4e81-8744-bcaeca5db4bd/nova-cell1-conductor-conductor/0.log" Jan 22 10:01:47 crc kubenswrapper[4811]: I0122 10:01:47.533041 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-novncproxy-0_ee19b0fe-2250-40e7-9917-230c53ad0f13/nova-cell1-novncproxy-novncproxy/0.log" Jan 22 10:01:47 crc kubenswrapper[4811]: I0122 10:01:47.587350 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq_93c19706-aa1a-40b2-96cb-ea74c87866d6/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:01:47 crc kubenswrapper[4811]: I0122 10:01:47.644757 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_9ecace20-035f-4590-a0f0-32914d411253/nova-metadata-log/0.log" Jan 22 10:01:48 crc kubenswrapper[4811]: I0122 10:01:48.294749 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_9ecace20-035f-4590-a0f0-32914d411253/nova-metadata-metadata/0.log" Jan 22 10:01:48 crc kubenswrapper[4811]: I0122 10:01:48.410588 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_d8786fe9-36cf-4e1e-8d24-9eaeba9b1f9b/nova-scheduler-scheduler/0.log" Jan 22 10:01:48 crc kubenswrapper[4811]: I0122 10:01:48.428561 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_db2ecb97-db87-43bf-8ffb-7cbd7460ba19/galera/0.log" Jan 22 10:01:48 crc kubenswrapper[4811]: I0122 10:01:48.437240 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_db2ecb97-db87-43bf-8ffb-7cbd7460ba19/mysql-bootstrap/0.log" Jan 22 10:01:48 crc kubenswrapper[4811]: I0122 10:01:48.456845 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_02bd0635-dfd1-4e78-8fbf-57366ce83cdb/galera/0.log" Jan 22 10:01:48 crc kubenswrapper[4811]: I0122 10:01:48.492584 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_02bd0635-dfd1-4e78-8fbf-57366ce83cdb/mysql-bootstrap/0.log" Jan 22 10:01:48 crc kubenswrapper[4811]: I0122 10:01:48.500653 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_616ddacf-6ee0-46d9-9e03-c234d53b5dd8/openstackclient/0.log" Jan 22 10:01:48 crc kubenswrapper[4811]: I0122 10:01:48.509206 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-kz6sk_374f91f7-413d-4830-afc1-0d75c2946fc3/openstack-network-exporter/0.log" Jan 22 10:01:48 crc kubenswrapper[4811]: I0122 10:01:48.520209 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-nm2bb_29754ede-0901-4bbd-aa87-49a8e93050b9/ovn-controller/0.log" Jan 22 10:01:48 crc kubenswrapper[4811]: I0122 10:01:48.528828 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kdqzr_cbf54ce8-3114-43c1-a1ce-6a13dd41297a/ovsdb-server/0.log" Jan 22 10:01:48 crc kubenswrapper[4811]: I0122 10:01:48.541261 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kdqzr_cbf54ce8-3114-43c1-a1ce-6a13dd41297a/ovs-vswitchd/0.log" Jan 22 10:01:48 crc kubenswrapper[4811]: I0122 10:01:48.550004 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kdqzr_cbf54ce8-3114-43c1-a1ce-6a13dd41297a/ovsdb-server-init/0.log" Jan 22 10:01:48 crc kubenswrapper[4811]: I0122 10:01:48.582852 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-5q54d_8fafd202-523c-44b0-b229-527193721bb1/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:01:48 
crc kubenswrapper[4811]: I0122 10:01:48.596526 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_68ff0a82-cf02-4e4e-bf49-b46f3e0f361a/ovn-northd/0.log" Jan 22 10:01:48 crc kubenswrapper[4811]: I0122 10:01:48.609325 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_68ff0a82-cf02-4e4e-bf49-b46f3e0f361a/openstack-network-exporter/0.log" Jan 22 10:01:48 crc kubenswrapper[4811]: I0122 10:01:48.626451 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_4344c0dd-b6d6-4448-b943-0e036ee2098b/ovsdbserver-nb/0.log" Jan 22 10:01:48 crc kubenswrapper[4811]: I0122 10:01:48.656390 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_4344c0dd-b6d6-4448-b943-0e036ee2098b/openstack-network-exporter/0.log" Jan 22 10:01:48 crc kubenswrapper[4811]: I0122 10:01:48.674834 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b/ovsdbserver-sb/0.log" Jan 22 10:01:48 crc kubenswrapper[4811]: I0122 10:01:48.683782 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b/openstack-network-exporter/0.log" Jan 22 10:01:48 crc kubenswrapper[4811]: I0122 10:01:48.740370 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6976f89774-xh5fd_32532baf-c6cc-4f91-91f5-7f81462d369a/placement-log/0.log" Jan 22 10:01:48 crc kubenswrapper[4811]: I0122 10:01:48.781110 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6976f89774-xh5fd_32532baf-c6cc-4f91-91f5-7f81462d369a/placement-api/0.log" Jan 22 10:01:48 crc kubenswrapper[4811]: I0122 10:01:48.800534 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a2ce439e-8652-40cb-9d5d-90913d18bea1/rabbitmq/0.log" Jan 22 10:01:48 crc kubenswrapper[4811]: I0122 10:01:48.804588 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a2ce439e-8652-40cb-9d5d-90913d18bea1/setup-container/0.log" Jan 22 10:01:48 crc kubenswrapper[4811]: I0122 10:01:48.829445 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8/rabbitmq/0.log" Jan 22 10:01:48 crc kubenswrapper[4811]: I0122 10:01:48.832689 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8/setup-container/0.log" Jan 22 10:01:48 crc kubenswrapper[4811]: I0122 10:01:48.847726 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-rgn6q_84f96cc8-d392-47a2-baab-998459b83025/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:01:48 crc kubenswrapper[4811]: I0122 10:01:48.855008 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-5ddtk_016a5684-671f-4e6a-81dc-15c2a55a6911/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:01:48 crc kubenswrapper[4811]: I0122 10:01:48.864219 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-5n2sj_247204c2-fb25-45be-a1ec-8bc4b64e41d6/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:01:48 crc kubenswrapper[4811]: I0122 10:01:48.880810 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-pn5nw_b3144d30-a0bb-4788-bf66-089587cabbf5/ssh-known-hosts-edpm-deployment/0.log" Jan 22 10:01:48 crc kubenswrapper[4811]: I0122 10:01:48.901054 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_eb23b893-6bb1-4d84-bb05-09c701024b37/tempest-tests-tempest-tests-runner/0.log" Jan 22 10:01:48 crc kubenswrapper[4811]: I0122 10:01:48.905923 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_37849dcb-3091-4827-b55c-97806fb09eef/test-operator-logs-container/0.log" Jan 22 10:01:48 crc kubenswrapper[4811]: I0122 10:01:48.926296 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-f92wm_ca9b0d63-2524-406e-bd65-36224327f50f/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:01:56 crc kubenswrapper[4811]: I0122 10:01:56.908058 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-q2j6v"] Jan 22 10:01:56 crc kubenswrapper[4811]: I0122 10:01:56.909896 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q2j6v" Jan 22 10:01:56 crc kubenswrapper[4811]: I0122 10:01:56.922945 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q2j6v"] Jan 22 10:01:56 crc kubenswrapper[4811]: I0122 10:01:56.962978 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2476cdc2-a97d-41a6-8a68-d8d2537180d4-catalog-content\") pod \"certified-operators-q2j6v\" (UID: \"2476cdc2-a97d-41a6-8a68-d8d2537180d4\") " pod="openshift-marketplace/certified-operators-q2j6v" Jan 22 10:01:56 crc kubenswrapper[4811]: I0122 10:01:56.963178 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2476cdc2-a97d-41a6-8a68-d8d2537180d4-utilities\") pod \"certified-operators-q2j6v\" (UID: \"2476cdc2-a97d-41a6-8a68-d8d2537180d4\") " pod="openshift-marketplace/certified-operators-q2j6v" Jan 22 10:01:56 crc kubenswrapper[4811]: I0122 10:01:56.963207 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq9gq\" (UniqueName: \"kubernetes.io/projected/2476cdc2-a97d-41a6-8a68-d8d2537180d4-kube-api-access-bq9gq\") pod \"certified-operators-q2j6v\" (UID: \"2476cdc2-a97d-41a6-8a68-d8d2537180d4\") " pod="openshift-marketplace/certified-operators-q2j6v" Jan 22 10:01:57 crc kubenswrapper[4811]: I0122 10:01:57.064603 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2476cdc2-a97d-41a6-8a68-d8d2537180d4-utilities\") pod \"certified-operators-q2j6v\" (UID: \"2476cdc2-a97d-41a6-8a68-d8d2537180d4\") " pod="openshift-marketplace/certified-operators-q2j6v" Jan 22 10:01:57 crc kubenswrapper[4811]: I0122 10:01:57.064666 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq9gq\" (UniqueName: \"kubernetes.io/projected/2476cdc2-a97d-41a6-8a68-d8d2537180d4-kube-api-access-bq9gq\") pod \"certified-operators-q2j6v\" (UID: \"2476cdc2-a97d-41a6-8a68-d8d2537180d4\") " pod="openshift-marketplace/certified-operators-q2j6v" Jan 22 10:01:57 crc 
kubenswrapper[4811]: I0122 10:01:57.064809 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2476cdc2-a97d-41a6-8a68-d8d2537180d4-catalog-content\") pod \"certified-operators-q2j6v\" (UID: \"2476cdc2-a97d-41a6-8a68-d8d2537180d4\") " pod="openshift-marketplace/certified-operators-q2j6v" Jan 22 10:01:57 crc kubenswrapper[4811]: I0122 10:01:57.065394 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2476cdc2-a97d-41a6-8a68-d8d2537180d4-catalog-content\") pod \"certified-operators-q2j6v\" (UID: \"2476cdc2-a97d-41a6-8a68-d8d2537180d4\") " pod="openshift-marketplace/certified-operators-q2j6v" Jan 22 10:01:57 crc kubenswrapper[4811]: I0122 10:01:57.065618 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2476cdc2-a97d-41a6-8a68-d8d2537180d4-utilities\") pod \"certified-operators-q2j6v\" (UID: \"2476cdc2-a97d-41a6-8a68-d8d2537180d4\") " pod="openshift-marketplace/certified-operators-q2j6v" Jan 22 10:01:57 crc kubenswrapper[4811]: I0122 10:01:57.082533 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq9gq\" (UniqueName: \"kubernetes.io/projected/2476cdc2-a97d-41a6-8a68-d8d2537180d4-kube-api-access-bq9gq\") pod \"certified-operators-q2j6v\" (UID: \"2476cdc2-a97d-41a6-8a68-d8d2537180d4\") " pod="openshift-marketplace/certified-operators-q2j6v" Jan 22 10:01:57 crc kubenswrapper[4811]: I0122 10:01:57.224446 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q2j6v" Jan 22 10:01:57 crc kubenswrapper[4811]: I0122 10:01:57.882910 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q2j6v"] Jan 22 10:01:58 crc kubenswrapper[4811]: I0122 10:01:58.067231 4811 generic.go:334] "Generic (PLEG): container finished" podID="2476cdc2-a97d-41a6-8a68-d8d2537180d4" containerID="dd84646ba0bb8908ced23737b806d320f9b0095179a66fe689face2d50fccbdc" exitCode=0 Jan 22 10:01:58 crc kubenswrapper[4811]: I0122 10:01:58.067417 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q2j6v" event={"ID":"2476cdc2-a97d-41a6-8a68-d8d2537180d4","Type":"ContainerDied","Data":"dd84646ba0bb8908ced23737b806d320f9b0095179a66fe689face2d50fccbdc"} Jan 22 10:01:58 crc kubenswrapper[4811]: I0122 10:01:58.067458 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q2j6v" event={"ID":"2476cdc2-a97d-41a6-8a68-d8d2537180d4","Type":"ContainerStarted","Data":"1b66c58b06ae298bc7b53a047b277ab9785e83df477a4a21daea932363df8905"} Jan 22 10:02:04 crc kubenswrapper[4811]: I0122 10:02:04.116027 4811 generic.go:334] "Generic (PLEG): container finished" podID="2476cdc2-a97d-41a6-8a68-d8d2537180d4" containerID="deff9a6ddf022bf3d30bd17fc37957c2a600f3e37422f27acb09c2fa3b0b5214" exitCode=0 Jan 22 10:02:04 crc kubenswrapper[4811]: I0122 10:02:04.116089 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q2j6v" event={"ID":"2476cdc2-a97d-41a6-8a68-d8d2537180d4","Type":"ContainerDied","Data":"deff9a6ddf022bf3d30bd17fc37957c2a600f3e37422f27acb09c2fa3b0b5214"} Jan 22 10:02:05 crc kubenswrapper[4811]: I0122 10:02:05.124669 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-q2j6v" event={"ID":"2476cdc2-a97d-41a6-8a68-d8d2537180d4","Type":"ContainerStarted","Data":"96f8512a64698a0d877c56a0da02e5523d6e29f7fb3d3357ca8c7143ad92bb29"} Jan 22 10:02:05 crc kubenswrapper[4811]: I0122 10:02:05.140636 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-q2j6v" podStartSLOduration=2.599610558 podStartE2EDuration="9.140608519s" podCreationTimestamp="2026-01-22 10:01:56 +0000 UTC" firstStartedPulling="2026-01-22 10:01:58.068880083 +0000 UTC m=+3362.391067206" lastFinishedPulling="2026-01-22 10:02:04.609878044 +0000 UTC m=+3368.932065167" observedRunningTime="2026-01-22 10:02:05.139324387 +0000 UTC m=+3369.461511510" watchObservedRunningTime="2026-01-22 10:02:05.140608519 +0000 UTC m=+3369.462795642" Jan 22 10:02:05 crc kubenswrapper[4811]: I0122 10:02:05.501411 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:02:05 crc kubenswrapper[4811]: I0122 10:02:05.501469 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:02:05 crc kubenswrapper[4811]: I0122 10:02:05.501515 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" Jan 22 10:02:05 crc kubenswrapper[4811]: I0122 10:02:05.502110 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3a06585913d6ba918f6c52903bb7850c2377d5698106e38de260a0e7343ce390"} pod="openshift-machine-config-operator/machine-config-daemon-txvcq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 10:02:05 crc kubenswrapper[4811]: I0122 10:02:05.502164 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" containerID="cri-o://3a06585913d6ba918f6c52903bb7850c2377d5698106e38de260a0e7343ce390" gracePeriod=600 Jan 22 10:02:06 crc kubenswrapper[4811]: I0122 10:02:06.133430 4811 generic.go:334] "Generic (PLEG): container finished" podID="84068a6b-e189-419b-87f5-f31428f6eafe" containerID="3a06585913d6ba918f6c52903bb7850c2377d5698106e38de260a0e7343ce390" exitCode=0 Jan 22 10:02:06 crc kubenswrapper[4811]: I0122 10:02:06.134156 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" event={"ID":"84068a6b-e189-419b-87f5-f31428f6eafe","Type":"ContainerDied","Data":"3a06585913d6ba918f6c52903bb7850c2377d5698106e38de260a0e7343ce390"} Jan 22 10:02:06 crc kubenswrapper[4811]: I0122 10:02:06.134219 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" event={"ID":"84068a6b-e189-419b-87f5-f31428f6eafe","Type":"ContainerStarted","Data":"48adbe58c0edc6e2dbd02dd77a6da6c68c74447ea1d3ab9ca56908346b26b622"} Jan 
22 10:02:06 crc kubenswrapper[4811]: I0122 10:02:06.134239 4811 scope.go:117] "RemoveContainer" containerID="27d99ddba75ace4f5b92aada119cc94fd0acd38a4e4c54101859524605bfeb4c" Jan 22 10:02:07 crc kubenswrapper[4811]: I0122 10:02:07.143225 4811 generic.go:334] "Generic (PLEG): container finished" podID="8c1580dd-d98c-4e53-bba2-96343641ade7" containerID="e42f63fbf7642873cc98fbb33deb39f5376832add747d37647c4367d9abc4d6d" exitCode=0 Jan 22 10:02:07 crc kubenswrapper[4811]: I0122 10:02:07.143267 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t86ls/crc-debug-8qdj8" event={"ID":"8c1580dd-d98c-4e53-bba2-96343641ade7","Type":"ContainerDied","Data":"e42f63fbf7642873cc98fbb33deb39f5376832add747d37647c4367d9abc4d6d"} Jan 22 10:02:07 crc kubenswrapper[4811]: I0122 10:02:07.225326 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-q2j6v" Jan 22 10:02:07 crc kubenswrapper[4811]: I0122 10:02:07.225867 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-q2j6v" Jan 22 10:02:07 crc kubenswrapper[4811]: I0122 10:02:07.263426 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-q2j6v" Jan 22 10:02:08 crc kubenswrapper[4811]: I0122 10:02:08.221852 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-t86ls/crc-debug-8qdj8" Jan 22 10:02:08 crc kubenswrapper[4811]: I0122 10:02:08.247715 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-t86ls/crc-debug-8qdj8"] Jan 22 10:02:08 crc kubenswrapper[4811]: I0122 10:02:08.253610 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-t86ls/crc-debug-8qdj8"] Jan 22 10:02:08 crc kubenswrapper[4811]: I0122 10:02:08.266188 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8c1580dd-d98c-4e53-bba2-96343641ade7-host\") pod \"8c1580dd-d98c-4e53-bba2-96343641ade7\" (UID: \"8c1580dd-d98c-4e53-bba2-96343641ade7\") " Jan 22 10:02:08 crc kubenswrapper[4811]: I0122 10:02:08.266241 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz7zt\" (UniqueName: \"kubernetes.io/projected/8c1580dd-d98c-4e53-bba2-96343641ade7-kube-api-access-lz7zt\") pod \"8c1580dd-d98c-4e53-bba2-96343641ade7\" (UID: \"8c1580dd-d98c-4e53-bba2-96343641ade7\") " Jan 22 10:02:08 crc kubenswrapper[4811]: I0122 10:02:08.266308 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c1580dd-d98c-4e53-bba2-96343641ade7-host" (OuterVolumeSpecName: "host") pod "8c1580dd-d98c-4e53-bba2-96343641ade7" (UID: "8c1580dd-d98c-4e53-bba2-96343641ade7"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:02:08 crc kubenswrapper[4811]: I0122 10:02:08.266975 4811 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8c1580dd-d98c-4e53-bba2-96343641ade7-host\") on node \"crc\" DevicePath \"\"" Jan 22 10:02:08 crc kubenswrapper[4811]: I0122 10:02:08.274129 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c1580dd-d98c-4e53-bba2-96343641ade7-kube-api-access-lz7zt" (OuterVolumeSpecName: "kube-api-access-lz7zt") pod "8c1580dd-d98c-4e53-bba2-96343641ade7" (UID: "8c1580dd-d98c-4e53-bba2-96343641ade7"). InnerVolumeSpecName "kube-api-access-lz7zt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:02:08 crc kubenswrapper[4811]: I0122 10:02:08.369002 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz7zt\" (UniqueName: \"kubernetes.io/projected/8c1580dd-d98c-4e53-bba2-96343641ade7-kube-api-access-lz7zt\") on node \"crc\" DevicePath \"\"" Jan 22 10:02:09 crc kubenswrapper[4811]: I0122 10:02:09.158843 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="943f3ee5a7612e8e213f22b10be577c7031abd21ea671449b2775ba9cbe72405" Jan 22 10:02:09 crc kubenswrapper[4811]: I0122 10:02:09.158878 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-t86ls/crc-debug-8qdj8" Jan 22 10:02:09 crc kubenswrapper[4811]: I0122 10:02:09.200773 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-q2j6v" Jan 22 10:02:09 crc kubenswrapper[4811]: I0122 10:02:09.295167 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q2j6v"] Jan 22 10:02:09 crc kubenswrapper[4811]: I0122 10:02:09.341642 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wct2f"] Jan 22 10:02:09 crc kubenswrapper[4811]: I0122 10:02:09.343178 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wct2f" podUID="6d36eba9-846c-4d4e-8623-513ada4d04d7" containerName="registry-server" containerID="cri-o://d579490859c07558547011ef211a3a7d9672bb81380d8c9f5694fc149c75ea3e" gracePeriod=2 Jan 22 10:02:09 crc kubenswrapper[4811]: I0122 10:02:09.490857 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-t86ls/crc-debug-cgzkh"] Jan 22 10:02:09 crc kubenswrapper[4811]: E0122 10:02:09.491282 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c1580dd-d98c-4e53-bba2-96343641ade7" containerName="container-00" Jan 22 10:02:09 crc kubenswrapper[4811]: I0122 10:02:09.491305 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c1580dd-d98c-4e53-bba2-96343641ade7" containerName="container-00" Jan 22 10:02:09 crc kubenswrapper[4811]: I0122 10:02:09.491486 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c1580dd-d98c-4e53-bba2-96343641ade7" containerName="container-00" Jan 22 10:02:09 crc kubenswrapper[4811]: I0122 10:02:09.492073 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t86ls/crc-debug-cgzkh" Jan 22 10:02:09 crc kubenswrapper[4811]: I0122 10:02:09.493811 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-t86ls"/"default-dockercfg-77w4v" Jan 22 10:02:09 crc kubenswrapper[4811]: I0122 10:02:09.497420 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4cccf4b6-78f6-4f21-837a-9951270c3f0f-host\") pod \"crc-debug-cgzkh\" (UID: \"4cccf4b6-78f6-4f21-837a-9951270c3f0f\") " pod="openshift-must-gather-t86ls/crc-debug-cgzkh" Jan 22 10:02:09 crc kubenswrapper[4811]: I0122 10:02:09.497460 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfkh9\" (UniqueName: \"kubernetes.io/projected/4cccf4b6-78f6-4f21-837a-9951270c3f0f-kube-api-access-pfkh9\") pod \"crc-debug-cgzkh\" (UID: \"4cccf4b6-78f6-4f21-837a-9951270c3f0f\") " pod="openshift-must-gather-t86ls/crc-debug-cgzkh" Jan 22 10:02:09 crc kubenswrapper[4811]: I0122 10:02:09.599184 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4cccf4b6-78f6-4f21-837a-9951270c3f0f-host\") pod \"crc-debug-cgzkh\" (UID: \"4cccf4b6-78f6-4f21-837a-9951270c3f0f\") " pod="openshift-must-gather-t86ls/crc-debug-cgzkh" Jan 22 10:02:09 crc kubenswrapper[4811]: I0122 10:02:09.599243 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfkh9\" (UniqueName: \"kubernetes.io/projected/4cccf4b6-78f6-4f21-837a-9951270c3f0f-kube-api-access-pfkh9\") pod \"crc-debug-cgzkh\" (UID: \"4cccf4b6-78f6-4f21-837a-9951270c3f0f\") " pod="openshift-must-gather-t86ls/crc-debug-cgzkh" Jan 22 10:02:09 crc kubenswrapper[4811]: I0122 10:02:09.599686 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4cccf4b6-78f6-4f21-837a-9951270c3f0f-host\") pod \"crc-debug-cgzkh\" (UID: \"4cccf4b6-78f6-4f21-837a-9951270c3f0f\") " pod="openshift-must-gather-t86ls/crc-debug-cgzkh" Jan 22 10:02:09 crc kubenswrapper[4811]: I0122 10:02:09.619831 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfkh9\" (UniqueName: \"kubernetes.io/projected/4cccf4b6-78f6-4f21-837a-9951270c3f0f-kube-api-access-pfkh9\") pod \"crc-debug-cgzkh\" (UID: \"4cccf4b6-78f6-4f21-837a-9951270c3f0f\") " pod="openshift-must-gather-t86ls/crc-debug-cgzkh" Jan 22 10:02:09 crc kubenswrapper[4811]: I0122 10:02:09.808660 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-t86ls/crc-debug-cgzkh" Jan 22 10:02:09 crc kubenswrapper[4811]: I0122 10:02:09.972878 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wct2f" Jan 22 10:02:10 crc kubenswrapper[4811]: I0122 10:02:10.000161 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c1580dd-d98c-4e53-bba2-96343641ade7" path="/var/lib/kubelet/pods/8c1580dd-d98c-4e53-bba2-96343641ade7/volumes" Jan 22 10:02:10 crc kubenswrapper[4811]: I0122 10:02:10.116173 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bg4qd\" (UniqueName: \"kubernetes.io/projected/6d36eba9-846c-4d4e-8623-513ada4d04d7-kube-api-access-bg4qd\") pod \"6d36eba9-846c-4d4e-8623-513ada4d04d7\" (UID: \"6d36eba9-846c-4d4e-8623-513ada4d04d7\") " Jan 22 10:02:10 crc kubenswrapper[4811]: I0122 10:02:10.116217 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d36eba9-846c-4d4e-8623-513ada4d04d7-utilities\") pod \"6d36eba9-846c-4d4e-8623-513ada4d04d7\" (UID: \"6d36eba9-846c-4d4e-8623-513ada4d04d7\") " Jan 22 10:02:10 crc kubenswrapper[4811]: I0122 10:02:10.116459 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d36eba9-846c-4d4e-8623-513ada4d04d7-catalog-content\") pod \"6d36eba9-846c-4d4e-8623-513ada4d04d7\" (UID: \"6d36eba9-846c-4d4e-8623-513ada4d04d7\") " Jan 22 10:02:10 crc kubenswrapper[4811]: I0122 10:02:10.117078 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d36eba9-846c-4d4e-8623-513ada4d04d7-utilities" (OuterVolumeSpecName: "utilities") pod "6d36eba9-846c-4d4e-8623-513ada4d04d7" (UID: "6d36eba9-846c-4d4e-8623-513ada4d04d7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:02:10 crc kubenswrapper[4811]: I0122 10:02:10.123751 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d36eba9-846c-4d4e-8623-513ada4d04d7-kube-api-access-bg4qd" (OuterVolumeSpecName: "kube-api-access-bg4qd") pod "6d36eba9-846c-4d4e-8623-513ada4d04d7" (UID: "6d36eba9-846c-4d4e-8623-513ada4d04d7"). InnerVolumeSpecName "kube-api-access-bg4qd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:02:10 crc kubenswrapper[4811]: I0122 10:02:10.157931 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d36eba9-846c-4d4e-8623-513ada4d04d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6d36eba9-846c-4d4e-8623-513ada4d04d7" (UID: "6d36eba9-846c-4d4e-8623-513ada4d04d7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:02:10 crc kubenswrapper[4811]: I0122 10:02:10.169361 4811 generic.go:334] "Generic (PLEG): container finished" podID="6d36eba9-846c-4d4e-8623-513ada4d04d7" containerID="d579490859c07558547011ef211a3a7d9672bb81380d8c9f5694fc149c75ea3e" exitCode=0 Jan 22 10:02:10 crc kubenswrapper[4811]: I0122 10:02:10.169423 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wct2f" event={"ID":"6d36eba9-846c-4d4e-8623-513ada4d04d7","Type":"ContainerDied","Data":"d579490859c07558547011ef211a3a7d9672bb81380d8c9f5694fc149c75ea3e"} Jan 22 10:02:10 crc kubenswrapper[4811]: I0122 10:02:10.169451 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wct2f" event={"ID":"6d36eba9-846c-4d4e-8623-513ada4d04d7","Type":"ContainerDied","Data":"4893534545c2c32079de3354d902fee3a3222fe348699234b938078100c061fd"} Jan 22 10:02:10 crc kubenswrapper[4811]: I0122 10:02:10.169469 4811 scope.go:117] "RemoveContainer" containerID="d579490859c07558547011ef211a3a7d9672bb81380d8c9f5694fc149c75ea3e" Jan 22 10:02:10 crc kubenswrapper[4811]: I0122 10:02:10.169594 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wct2f" Jan 22 10:02:10 crc kubenswrapper[4811]: I0122 10:02:10.172697 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t86ls/crc-debug-cgzkh" event={"ID":"4cccf4b6-78f6-4f21-837a-9951270c3f0f","Type":"ContainerStarted","Data":"fb8be3b4d7db35e2d699cc1424cc6d23067c59a8004277a47ebbaf95a4f5fe40"} Jan 22 10:02:10 crc kubenswrapper[4811]: I0122 10:02:10.192976 4811 scope.go:117] "RemoveContainer" containerID="245d10dcad4a4e0a49a72497217e7fad02f73e67be87ede65f40513cf88c92a1" Jan 22 10:02:10 crc kubenswrapper[4811]: I0122 10:02:10.202666 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wct2f"] Jan 22 10:02:10 crc kubenswrapper[4811]: I0122 10:02:10.211513 4811 scope.go:117] "RemoveContainer" containerID="58293b9f5d817867de03dcee540bfb8240a0357fb782ef3e86ff9fc089ea39ad" Jan 22 10:02:10 crc kubenswrapper[4811]: I0122 10:02:10.218703 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d36eba9-846c-4d4e-8623-513ada4d04d7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:02:10 crc kubenswrapper[4811]: I0122 10:02:10.218726 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bg4qd\" (UniqueName: \"kubernetes.io/projected/6d36eba9-846c-4d4e-8623-513ada4d04d7-kube-api-access-bg4qd\") on node \"crc\" DevicePath \"\"" Jan 22 10:02:10 crc kubenswrapper[4811]: I0122 10:02:10.218737 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d36eba9-846c-4d4e-8623-513ada4d04d7-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:02:10 crc kubenswrapper[4811]: I0122 10:02:10.221413 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wct2f"] Jan 22 10:02:10 crc kubenswrapper[4811]: I0122 10:02:10.237294 4811 scope.go:117] "RemoveContainer" containerID="d579490859c07558547011ef211a3a7d9672bb81380d8c9f5694fc149c75ea3e" Jan 22 10:02:10 crc kubenswrapper[4811]: E0122 10:02:10.237635 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d579490859c07558547011ef211a3a7d9672bb81380d8c9f5694fc149c75ea3e\": container with ID starting with d579490859c07558547011ef211a3a7d9672bb81380d8c9f5694fc149c75ea3e not found: ID does not exist" containerID="d579490859c07558547011ef211a3a7d9672bb81380d8c9f5694fc149c75ea3e" Jan 22 10:02:10 crc kubenswrapper[4811]: I0122 10:02:10.237682 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d579490859c07558547011ef211a3a7d9672bb81380d8c9f5694fc149c75ea3e"} err="failed to get container status \"d579490859c07558547011ef211a3a7d9672bb81380d8c9f5694fc149c75ea3e\": rpc error: code = NotFound desc = could not find container \"d579490859c07558547011ef211a3a7d9672bb81380d8c9f5694fc149c75ea3e\": container with ID starting with d579490859c07558547011ef211a3a7d9672bb81380d8c9f5694fc149c75ea3e not found: ID does not exist" Jan 22 10:02:10 crc kubenswrapper[4811]: I0122 10:02:10.237706 4811 scope.go:117] "RemoveContainer" containerID="245d10dcad4a4e0a49a72497217e7fad02f73e67be87ede65f40513cf88c92a1" Jan 22 10:02:10 crc kubenswrapper[4811]: E0122 10:02:10.237977 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"245d10dcad4a4e0a49a72497217e7fad02f73e67be87ede65f40513cf88c92a1\": container with ID starting with 245d10dcad4a4e0a49a72497217e7fad02f73e67be87ede65f40513cf88c92a1 not found: ID does not exist" containerID="245d10dcad4a4e0a49a72497217e7fad02f73e67be87ede65f40513cf88c92a1" Jan 22 10:02:10 crc kubenswrapper[4811]: I0122 10:02:10.238009 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"245d10dcad4a4e0a49a72497217e7fad02f73e67be87ede65f40513cf88c92a1"} err="failed to get container status \"245d10dcad4a4e0a49a72497217e7fad02f73e67be87ede65f40513cf88c92a1\": rpc error: code = NotFound desc = could not find container \"245d10dcad4a4e0a49a72497217e7fad02f73e67be87ede65f40513cf88c92a1\": container with ID starting with 245d10dcad4a4e0a49a72497217e7fad02f73e67be87ede65f40513cf88c92a1 not found: ID does not exist" Jan 22 10:02:10 crc kubenswrapper[4811]: I0122 10:02:10.238028 4811 scope.go:117] "RemoveContainer" containerID="58293b9f5d817867de03dcee540bfb8240a0357fb782ef3e86ff9fc089ea39ad" Jan 22 10:02:10 crc kubenswrapper[4811]: E0122 10:02:10.238243 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58293b9f5d817867de03dcee540bfb8240a0357fb782ef3e86ff9fc089ea39ad\": container with ID starting with 58293b9f5d817867de03dcee540bfb8240a0357fb782ef3e86ff9fc089ea39ad not found: ID does not exist" containerID="58293b9f5d817867de03dcee540bfb8240a0357fb782ef3e86ff9fc089ea39ad" Jan 22 10:02:10 crc kubenswrapper[4811]: I0122 10:02:10.238265 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58293b9f5d817867de03dcee540bfb8240a0357fb782ef3e86ff9fc089ea39ad"} err="failed to get container status \"58293b9f5d817867de03dcee540bfb8240a0357fb782ef3e86ff9fc089ea39ad\": rpc error: code = NotFound desc = could not find container \"58293b9f5d817867de03dcee540bfb8240a0357fb782ef3e86ff9fc089ea39ad\": container with ID starting with 58293b9f5d817867de03dcee540bfb8240a0357fb782ef3e86ff9fc089ea39ad not found: ID does not exist" Jan 22 10:02:11 crc kubenswrapper[4811]: I0122 10:02:11.181718 4811 generic.go:334] "Generic (PLEG): container finished" podID="4cccf4b6-78f6-4f21-837a-9951270c3f0f" 
containerID="fe2d3e134a8813121106d11ca9dd94b2b269072fa530ac900c19badef879e375" exitCode=0 Jan 22 10:02:11 crc kubenswrapper[4811]: I0122 10:02:11.181828 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t86ls/crc-debug-cgzkh" event={"ID":"4cccf4b6-78f6-4f21-837a-9951270c3f0f","Type":"ContainerDied","Data":"fe2d3e134a8813121106d11ca9dd94b2b269072fa530ac900c19badef879e375"} Jan 22 10:02:11 crc kubenswrapper[4811]: I0122 10:02:11.558508 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-t86ls/crc-debug-cgzkh"] Jan 22 10:02:11 crc kubenswrapper[4811]: I0122 10:02:11.566532 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-t86ls/crc-debug-cgzkh"] Jan 22 10:02:12 crc kubenswrapper[4811]: I0122 10:02:12.000477 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d36eba9-846c-4d4e-8623-513ada4d04d7" path="/var/lib/kubelet/pods/6d36eba9-846c-4d4e-8623-513ada4d04d7/volumes" Jan 22 10:02:12 crc kubenswrapper[4811]: I0122 10:02:12.280825 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-t86ls/crc-debug-cgzkh" Jan 22 10:02:12 crc kubenswrapper[4811]: I0122 10:02:12.361488 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfkh9\" (UniqueName: \"kubernetes.io/projected/4cccf4b6-78f6-4f21-837a-9951270c3f0f-kube-api-access-pfkh9\") pod \"4cccf4b6-78f6-4f21-837a-9951270c3f0f\" (UID: \"4cccf4b6-78f6-4f21-837a-9951270c3f0f\") " Jan 22 10:02:12 crc kubenswrapper[4811]: I0122 10:02:12.361578 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4cccf4b6-78f6-4f21-837a-9951270c3f0f-host\") pod \"4cccf4b6-78f6-4f21-837a-9951270c3f0f\" (UID: \"4cccf4b6-78f6-4f21-837a-9951270c3f0f\") " Jan 22 10:02:12 crc kubenswrapper[4811]: I0122 10:02:12.361705 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cccf4b6-78f6-4f21-837a-9951270c3f0f-host" (OuterVolumeSpecName: "host") pod "4cccf4b6-78f6-4f21-837a-9951270c3f0f" (UID: "4cccf4b6-78f6-4f21-837a-9951270c3f0f"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:02:12 crc kubenswrapper[4811]: I0122 10:02:12.362373 4811 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4cccf4b6-78f6-4f21-837a-9951270c3f0f-host\") on node \"crc\" DevicePath \"\"" Jan 22 10:02:12 crc kubenswrapper[4811]: I0122 10:02:12.369025 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cccf4b6-78f6-4f21-837a-9951270c3f0f-kube-api-access-pfkh9" (OuterVolumeSpecName: "kube-api-access-pfkh9") pod "4cccf4b6-78f6-4f21-837a-9951270c3f0f" (UID: "4cccf4b6-78f6-4f21-837a-9951270c3f0f"). InnerVolumeSpecName "kube-api-access-pfkh9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:02:12 crc kubenswrapper[4811]: I0122 10:02:12.465457 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfkh9\" (UniqueName: \"kubernetes.io/projected/4cccf4b6-78f6-4f21-837a-9951270c3f0f-kube-api-access-pfkh9\") on node \"crc\" DevicePath \"\"" Jan 22 10:02:12 crc kubenswrapper[4811]: I0122 10:02:12.776725 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-t86ls/crc-debug-58ddt"] Jan 22 10:02:12 crc kubenswrapper[4811]: E0122 10:02:12.777117 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d36eba9-846c-4d4e-8623-513ada4d04d7" containerName="extract-utilities" Jan 22 10:02:12 crc kubenswrapper[4811]: I0122 10:02:12.777137 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d36eba9-846c-4d4e-8623-513ada4d04d7" containerName="extract-utilities" Jan 22 10:02:12 crc kubenswrapper[4811]: E0122 10:02:12.777163 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d36eba9-846c-4d4e-8623-513ada4d04d7" containerName="registry-server" Jan 22 10:02:12 crc kubenswrapper[4811]: I0122 10:02:12.777170 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d36eba9-846c-4d4e-8623-513ada4d04d7" containerName="registry-server" Jan 22 10:02:12 crc kubenswrapper[4811]: E0122 10:02:12.777185 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cccf4b6-78f6-4f21-837a-9951270c3f0f" containerName="container-00" Jan 22 10:02:12 crc kubenswrapper[4811]: I0122 10:02:12.777190 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cccf4b6-78f6-4f21-837a-9951270c3f0f" containerName="container-00" Jan 22 10:02:12 crc kubenswrapper[4811]: E0122 10:02:12.777201 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d36eba9-846c-4d4e-8623-513ada4d04d7" containerName="extract-content" Jan 22 10:02:12 crc kubenswrapper[4811]: I0122 10:02:12.777206 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d36eba9-846c-4d4e-8623-513ada4d04d7" containerName="extract-content" Jan 22 10:02:12 crc kubenswrapper[4811]: I0122 10:02:12.777390 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cccf4b6-78f6-4f21-837a-9951270c3f0f" containerName="container-00" Jan 22 10:02:12 crc kubenswrapper[4811]: I0122 10:02:12.777400 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d36eba9-846c-4d4e-8623-513ada4d04d7" containerName="registry-server" Jan 22 10:02:12 crc kubenswrapper[4811]: I0122 10:02:12.778047 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t86ls/crc-debug-58ddt" Jan 22 10:02:12 crc kubenswrapper[4811]: I0122 10:02:12.874362 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55qqt\" (UniqueName: \"kubernetes.io/projected/f88300a5-12a4-4e80-9495-245c2d39dc4e-kube-api-access-55qqt\") pod \"crc-debug-58ddt\" (UID: \"f88300a5-12a4-4e80-9495-245c2d39dc4e\") " pod="openshift-must-gather-t86ls/crc-debug-58ddt" Jan 22 10:02:12 crc kubenswrapper[4811]: I0122 10:02:12.874610 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f88300a5-12a4-4e80-9495-245c2d39dc4e-host\") pod \"crc-debug-58ddt\" (UID: \"f88300a5-12a4-4e80-9495-245c2d39dc4e\") " pod="openshift-must-gather-t86ls/crc-debug-58ddt" Jan 22 10:02:12 crc kubenswrapper[4811]: I0122 10:02:12.976113 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55qqt\" (UniqueName: \"kubernetes.io/projected/f88300a5-12a4-4e80-9495-245c2d39dc4e-kube-api-access-55qqt\") pod \"crc-debug-58ddt\" (UID: \"f88300a5-12a4-4e80-9495-245c2d39dc4e\") " pod="openshift-must-gather-t86ls/crc-debug-58ddt" Jan 22 10:02:12 crc kubenswrapper[4811]: I0122 10:02:12.976200 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f88300a5-12a4-4e80-9495-245c2d39dc4e-host\") pod \"crc-debug-58ddt\" (UID: \"f88300a5-12a4-4e80-9495-245c2d39dc4e\") " pod="openshift-must-gather-t86ls/crc-debug-58ddt" Jan 22 10:02:12 crc kubenswrapper[4811]: I0122 10:02:12.976390 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f88300a5-12a4-4e80-9495-245c2d39dc4e-host\") pod \"crc-debug-58ddt\" (UID: \"f88300a5-12a4-4e80-9495-245c2d39dc4e\") " pod="openshift-must-gather-t86ls/crc-debug-58ddt" Jan 22 10:02:12 crc kubenswrapper[4811]: I0122 10:02:12.991175 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55qqt\" (UniqueName: \"kubernetes.io/projected/f88300a5-12a4-4e80-9495-245c2d39dc4e-kube-api-access-55qqt\") pod \"crc-debug-58ddt\" (UID: \"f88300a5-12a4-4e80-9495-245c2d39dc4e\") " pod="openshift-must-gather-t86ls/crc-debug-58ddt" Jan 22 10:02:13 crc kubenswrapper[4811]: I0122 10:02:13.093460 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t86ls/crc-debug-58ddt" Jan 22 10:02:13 crc kubenswrapper[4811]: W0122 10:02:13.138900 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf88300a5_12a4_4e80_9495_245c2d39dc4e.slice/crio-fd7baf02cbf810d0c50dbe9212a008251605a8bc7a40eae69c32723822d3c847 WatchSource:0}: Error finding container fd7baf02cbf810d0c50dbe9212a008251605a8bc7a40eae69c32723822d3c847: Status 404 returned error can't find the container with id fd7baf02cbf810d0c50dbe9212a008251605a8bc7a40eae69c32723822d3c847 Jan 22 10:02:13 crc kubenswrapper[4811]: I0122 10:02:13.206678 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t86ls/crc-debug-58ddt" event={"ID":"f88300a5-12a4-4e80-9495-245c2d39dc4e","Type":"ContainerStarted","Data":"fd7baf02cbf810d0c50dbe9212a008251605a8bc7a40eae69c32723822d3c847"} Jan 22 10:02:13 crc kubenswrapper[4811]: I0122 10:02:13.207910 4811 scope.go:117] "RemoveContainer" containerID="fe2d3e134a8813121106d11ca9dd94b2b269072fa530ac900c19badef879e375" Jan 22 10:02:13 crc kubenswrapper[4811]: I0122 10:02:13.207970 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-t86ls/crc-debug-cgzkh" Jan 22 10:02:14 crc kubenswrapper[4811]: I0122 10:02:14.000733 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cccf4b6-78f6-4f21-837a-9951270c3f0f" path="/var/lib/kubelet/pods/4cccf4b6-78f6-4f21-837a-9951270c3f0f/volumes" Jan 22 10:02:14 crc kubenswrapper[4811]: I0122 10:02:14.216359 4811 generic.go:334] "Generic (PLEG): container finished" podID="f88300a5-12a4-4e80-9495-245c2d39dc4e" containerID="52c71f4589fdd503e9e6bd34134f0282fae9190ddf9843f87ec795e4a2c08a07" exitCode=0 Jan 22 10:02:14 crc kubenswrapper[4811]: I0122 10:02:14.216488 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t86ls/crc-debug-58ddt" event={"ID":"f88300a5-12a4-4e80-9495-245c2d39dc4e","Type":"ContainerDied","Data":"52c71f4589fdd503e9e6bd34134f0282fae9190ddf9843f87ec795e4a2c08a07"} Jan 22 10:02:14 crc kubenswrapper[4811]: I0122 10:02:14.245700 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-t86ls/crc-debug-58ddt"] Jan 22 10:02:14 crc kubenswrapper[4811]: I0122 10:02:14.254925 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-t86ls/crc-debug-58ddt"] Jan 22 10:02:15 crc kubenswrapper[4811]: I0122 10:02:15.300388 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t86ls/crc-debug-58ddt" Jan 22 10:02:15 crc kubenswrapper[4811]: I0122 10:02:15.424531 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55qqt\" (UniqueName: \"kubernetes.io/projected/f88300a5-12a4-4e80-9495-245c2d39dc4e-kube-api-access-55qqt\") pod \"f88300a5-12a4-4e80-9495-245c2d39dc4e\" (UID: \"f88300a5-12a4-4e80-9495-245c2d39dc4e\") " Jan 22 10:02:15 crc kubenswrapper[4811]: I0122 10:02:15.424642 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f88300a5-12a4-4e80-9495-245c2d39dc4e-host\") pod \"f88300a5-12a4-4e80-9495-245c2d39dc4e\" (UID: \"f88300a5-12a4-4e80-9495-245c2d39dc4e\") " Jan 22 10:02:15 crc kubenswrapper[4811]: I0122 10:02:15.424955 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f88300a5-12a4-4e80-9495-245c2d39dc4e-host" (OuterVolumeSpecName: "host") pod "f88300a5-12a4-4e80-9495-245c2d39dc4e" (UID: "f88300a5-12a4-4e80-9495-245c2d39dc4e"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:02:15 crc kubenswrapper[4811]: I0122 10:02:15.431204 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88300a5-12a4-4e80-9495-245c2d39dc4e-kube-api-access-55qqt" (OuterVolumeSpecName: "kube-api-access-55qqt") pod "f88300a5-12a4-4e80-9495-245c2d39dc4e" (UID: "f88300a5-12a4-4e80-9495-245c2d39dc4e"). InnerVolumeSpecName "kube-api-access-55qqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:02:15 crc kubenswrapper[4811]: I0122 10:02:15.526482 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55qqt\" (UniqueName: \"kubernetes.io/projected/f88300a5-12a4-4e80-9495-245c2d39dc4e-kube-api-access-55qqt\") on node \"crc\" DevicePath \"\"" Jan 22 10:02:15 crc kubenswrapper[4811]: I0122 10:02:15.526513 4811 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f88300a5-12a4-4e80-9495-245c2d39dc4e-host\") on node \"crc\" DevicePath \"\"" Jan 22 10:02:16 crc kubenswrapper[4811]: I0122 10:02:16.024007 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88300a5-12a4-4e80-9495-245c2d39dc4e" path="/var/lib/kubelet/pods/f88300a5-12a4-4e80-9495-245c2d39dc4e/volumes" Jan 22 10:02:16 crc kubenswrapper[4811]: I0122 10:02:16.232380 4811 scope.go:117] "RemoveContainer" containerID="52c71f4589fdd503e9e6bd34134f0282fae9190ddf9843f87ec795e4a2c08a07" Jan 22 10:02:16 crc kubenswrapper[4811]: I0122 10:02:16.232502 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t86ls/crc-debug-58ddt" Jan 22 10:02:17 crc kubenswrapper[4811]: I0122 10:02:17.173612 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3078a1b977323d6d8f95a4018ef06f377c198eb1741282ac05e933b60384v48_d014441f-4913-4583-ba8e-b1c20aaeed47/extract/0.log" Jan 22 10:02:17 crc kubenswrapper[4811]: I0122 10:02:17.183617 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3078a1b977323d6d8f95a4018ef06f377c198eb1741282ac05e933b60384v48_d014441f-4913-4583-ba8e-b1c20aaeed47/util/0.log" Jan 22 10:02:17 crc kubenswrapper[4811]: I0122 10:02:17.188648 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3078a1b977323d6d8f95a4018ef06f377c198eb1741282ac05e933b60384v48_d014441f-4913-4583-ba8e-b1c20aaeed47/pull/0.log" Jan 22 10:02:17 crc kubenswrapper[4811]: I0122 10:02:17.283600 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-59dd8b7cbf-pklcs_09ad3a19-244b-4685-8c96-0bee227b6547/manager/0.log" Jan 22 10:02:17 crc kubenswrapper[4811]: I0122 10:02:17.331424 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-rgwhg_e6fe0bc0-30b4-4a2f-b36d-93d5b288ecf8/manager/0.log" Jan 22 10:02:17 crc kubenswrapper[4811]: I0122 10:02:17.344532 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-2ltqr_62aa676a-95ae-40a8-9db5-b5fd24a293c2/manager/0.log" Jan 22 10:02:17 crc kubenswrapper[4811]: I0122 10:02:17.427005 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-26vqb_b0f07719-5203-4d79-82b4-995b8af81a00/manager/0.log" Jan 22 10:02:17 crc kubenswrapper[4811]: I0122 10:02:17.449013 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-vbbnq_e019bc4b-f0e7-4a4f-a42c-1486010a63fd/manager/0.log" Jan 22 10:02:17 crc kubenswrapper[4811]: I0122 10:02:17.471818 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-7p5h9_62a9fc61-630e-4f4d-9788-f21e25ab4dda/manager/0.log" Jan 22 10:02:17 crc kubenswrapper[4811]: I0122 10:02:17.703447 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-54ccf4f85d-r6z6t_81d4cd92-880c-4806-ab95-fcb009827075/manager/0.log" Jan 22 10:02:17 crc kubenswrapper[4811]: I0122 10:02:17.719273 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-69d6c9f5b8-fx6zn_ce893825-4e8e-4c9b-b37e-a974d7cfda21/manager/0.log" Jan 22 10:02:17 crc kubenswrapper[4811]: I0122 10:02:17.770190 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-99m2t_9dfc4274-a6f5-4dcb-8bcf-b2e0ff337574/manager/0.log" Jan 22 10:02:17 crc kubenswrapper[4811]: I0122 10:02:17.819345 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-4wtlm_688057d8-0445-42c1-b073-83deb026ab4c/manager/0.log" Jan 22 10:02:17 crc kubenswrapper[4811]: I0122 10:02:17.845693 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-kc9m5_02697a04-4401-498c-9b69-ff0b57ce8f4b/manager/0.log" Jan 22 10:02:17 crc kubenswrapper[4811]: I0122 10:02:17.887954 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5d8f59fb49-tll52_c284b86e-5a0e-4bd4-aa67-4e2c208ea4fa/manager/0.log" Jan 22 10:02:17 crc kubenswrapper[4811]: I0122 10:02:17.945983 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-h7wzt_b157cb38-af8a-41bf-a29a-2da5b59aa500/manager/0.log" Jan 22 10:02:17 crc kubenswrapper[4811]: I0122 10:02:17.954965 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-t9djx_a247bb8f-a274-481d-916b-8ad80521af31/manager/0.log" Jan 22 10:02:17 crc kubenswrapper[4811]: I0122 10:02:17.965687 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7fbbcdb4d6qmzp5_7c209919-fd54-40e8-a741-7006cf8dd361/manager/0.log" Jan 22 10:02:18 crc kubenswrapper[4811]: I0122 10:02:18.072362 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5cd76577f9-kn8dt_a01a5eb9-0bef-4a6b-af9e-d71281e2ae34/operator/0.log" Jan 22 10:02:19 crc kubenswrapper[4811]: I0122 10:02:19.304465 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-647bb87bbd-v227g_c0b74933-8fe4-4fb1-82af-eda7df5c3c06/manager/0.log" Jan 22 10:02:19 crc kubenswrapper[4811]: I0122 10:02:19.359016 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-c2sr9_5518af80-1f74-4caf-8bc0-80680646bfca/registry-server/0.log" Jan 22 10:02:19 crc kubenswrapper[4811]: I0122 10:02:19.405678 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-xp5jv_e343c2da-412a-4226-b711-81f83fdbb04b/manager/0.log" Jan 22 10:02:19 crc kubenswrapper[4811]: I0122 10:02:19.427910 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-rvsm7_b579b636-697b-4a23-9de7-1f9a8537eb94/manager/0.log" Jan 22 10:02:19 crc kubenswrapper[4811]: I0122 10:02:19.446497 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-2l9vr_c624375e-a5cf-49b8-a54a-5770a6c7e738/operator/0.log" Jan 22 10:02:19 crc kubenswrapper[4811]: I0122 10:02:19.454839 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-4xkc8_b3409f06-ef57-4717-b3e2-9b4f788fd7f0/manager/0.log" Jan 22 10:02:19 crc kubenswrapper[4811]: I0122 10:02:19.511832 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-627nz_42323d0d-05b6-4a0d-a809-405dec7c2893/manager/0.log" Jan 22 10:02:19 crc kubenswrapper[4811]: I0122 10:02:19.524814 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-mg5cr_d990df50-3df1-46b6-b6df-5b84bf8eeb20/manager/0.log" Jan 22 10:02:19 crc kubenswrapper[4811]: I0122 10:02:19.533375 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5ffb9c6597-kxs2j_15eb97d5-2508-4c32-8b7e-65f1015767cf/manager/0.log" Jan 22 10:02:24 crc kubenswrapper[4811]: I0122 10:02:24.104162 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-9pxnj_8b7094aa-cc4a-49eb-be77-715a4efbc1d0/control-plane-machine-set-operator/0.log" Jan 22 10:02:24 crc kubenswrapper[4811]: I0122 10:02:24.115417 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-rx42r_8a9d91fa-d887-4128-af43-cfe3cad79784/kube-rbac-proxy/0.log" Jan 22 10:02:24 crc kubenswrapper[4811]: I0122 10:02:24.125243 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-rx42r_8a9d91fa-d887-4128-af43-cfe3cad79784/machine-api-operator/0.log" Jan 22 10:02:43 crc kubenswrapper[4811]: I0122 10:02:43.421907 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-xvdbh_97d60b95-f52c-4946-919a-e8fd73251ed5/cert-manager-controller/0.log" Jan 22 10:02:43 crc kubenswrapper[4811]: I0122 10:02:43.433320 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-jbj4c_4dbc71dd-a371-4735-bc7e-6c29eb855fbd/cert-manager-cainjector/0.log" Jan 22 10:02:43 crc kubenswrapper[4811]: I0122 10:02:43.442929 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-sgnhm_34e7fc62-f1c1-41cb-b44c-2ef705fa2a15/cert-manager-webhook/0.log" Jan 22 10:02:47 crc kubenswrapper[4811]: I0122 10:02:47.649876 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-22qhf_d63fcc2e-ef3c-4a10-9444-43070aa0dc77/nmstate-console-plugin/0.log" Jan 22 10:02:47 crc kubenswrapper[4811]: I0122 10:02:47.672693 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-tvjnz_74fc22de-195f-452c-b18c-f12c53f2465f/nmstate-handler/0.log" Jan 22 10:02:47 crc kubenswrapper[4811]: I0122 10:02:47.688316 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-nmrg4_66e8ec28-33fd-440b-9064-dd5c40cf4b61/nmstate-metrics/0.log" Jan 22 10:02:47 crc kubenswrapper[4811]: I0122 10:02:47.695349 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-nmrg4_66e8ec28-33fd-440b-9064-dd5c40cf4b61/kube-rbac-proxy/0.log" Jan 22 10:02:47 crc kubenswrapper[4811]: I0122 10:02:47.709541 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-v76qn_952cfa08-4a5f-43b8-aa83-58839cc92523/nmstate-operator/0.log" Jan 22 10:02:47 crc kubenswrapper[4811]: I0122 10:02:47.718609 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-tnk97_9e9c9633-f916-440c-b02c-5bb58eb51e76/nmstate-webhook/0.log" Jan 22 10:02:57 crc kubenswrapper[4811]: I0122 10:02:57.416914 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-rl9k7_346ed4cd-2bb8-470d-a275-6c297994fb3f/controller/0.log" Jan 22 10:02:57 crc kubenswrapper[4811]: I0122 10:02:57.425520 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-rl9k7_346ed4cd-2bb8-470d-a275-6c297994fb3f/kube-rbac-proxy/0.log" Jan 22 
10:02:57 crc kubenswrapper[4811]: I0122 10:02:57.452157 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/controller/0.log" Jan 22 10:02:58 crc kubenswrapper[4811]: I0122 10:02:58.605050 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/frr/0.log" Jan 22 10:02:58 crc kubenswrapper[4811]: I0122 10:02:58.615737 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/reloader/0.log" Jan 22 10:02:58 crc kubenswrapper[4811]: I0122 10:02:58.622852 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/frr-metrics/0.log" Jan 22 10:02:58 crc kubenswrapper[4811]: I0122 10:02:58.633518 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/kube-rbac-proxy/0.log" Jan 22 10:02:58 crc kubenswrapper[4811]: I0122 10:02:58.642016 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/kube-rbac-proxy-frr/0.log" Jan 22 10:02:58 crc kubenswrapper[4811]: I0122 10:02:58.649258 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/cp-frr-files/0.log" Jan 22 10:02:58 crc kubenswrapper[4811]: I0122 10:02:58.657779 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/cp-reloader/0.log" Jan 22 10:02:58 crc kubenswrapper[4811]: I0122 10:02:58.661752 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/cp-metrics/0.log" Jan 22 10:02:58 crc kubenswrapper[4811]: I0122 10:02:58.669840 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-zfggh_2f6eae9c-374b-4ac3-b5d7-04267fe9bf73/frr-k8s-webhook-server/0.log" Jan 22 10:02:58 crc kubenswrapper[4811]: I0122 10:02:58.686543 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-64bd67c58d-k58sk_13617657-7245-4223-9b20-03a56378edaf/manager/0.log" Jan 22 10:02:58 crc kubenswrapper[4811]: I0122 10:02:58.697081 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5bc67d6df-ckhh4_1008e895-ec53-4fdd-9423-bbb4d249a6b9/webhook-server/0.log" Jan 22 10:02:58 crc kubenswrapper[4811]: I0122 10:02:58.941520 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-k88w9_faa36c07-3c7a-4b4a-a04e-58b43a178890/speaker/0.log" Jan 22 10:02:58 crc kubenswrapper[4811]: I0122 10:02:58.950094 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-k88w9_faa36c07-3c7a-4b4a-a04e-58b43a178890/kube-rbac-proxy/0.log" Jan 22 10:03:02 crc kubenswrapper[4811]: I0122 10:03:02.772806 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfbp48_4069e1a9-a40a-4b76-bee8-4b35c06e818e/extract/0.log" Jan 22 10:03:02 crc kubenswrapper[4811]: I0122 10:03:02.782710 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfbp48_4069e1a9-a40a-4b76-bee8-4b35c06e818e/util/0.log" Jan 22 10:03:02 crc kubenswrapper[4811]: I0122 10:03:02.789697 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfbp48_4069e1a9-a40a-4b76-bee8-4b35c06e818e/pull/0.log" Jan 22 10:03:02 crc kubenswrapper[4811]: I0122 10:03:02.811545 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc_df76885d-11e8-4fce-a69a-dee26f62c562/extract/0.log" Jan 22 10:03:02 crc kubenswrapper[4811]: I0122 10:03:02.819232 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc_df76885d-11e8-4fce-a69a-dee26f62c562/util/0.log" Jan 22 10:03:02 crc kubenswrapper[4811]: I0122 10:03:02.827431 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc_df76885d-11e8-4fce-a69a-dee26f62c562/pull/0.log" Jan 22 10:03:02 crc kubenswrapper[4811]: I0122 10:03:02.978385 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q2j6v_2476cdc2-a97d-41a6-8a68-d8d2537180d4/registry-server/0.log" Jan 22 10:03:02 crc kubenswrapper[4811]: I0122 10:03:02.983529 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q2j6v_2476cdc2-a97d-41a6-8a68-d8d2537180d4/extract-utilities/0.log" Jan 22 10:03:02 crc kubenswrapper[4811]: I0122 10:03:02.989576 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q2j6v_2476cdc2-a97d-41a6-8a68-d8d2537180d4/extract-content/0.log" Jan 22 10:03:03 crc kubenswrapper[4811]: I0122 10:03:03.419578 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xn9pt_73a05511-7e27-41cd-9da6-e9277550936d/registry-server/0.log" Jan 22 10:03:03 crc kubenswrapper[4811]: I0122 10:03:03.424727 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xn9pt_73a05511-7e27-41cd-9da6-e9277550936d/extract-utilities/0.log" Jan 22 10:03:03 crc kubenswrapper[4811]: I0122 10:03:03.439773 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xn9pt_73a05511-7e27-41cd-9da6-e9277550936d/extract-content/0.log" Jan 22 10:03:03 crc kubenswrapper[4811]: I0122 10:03:03.453670 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-cnjv9_f62c6396-82e9-4314-912a-42f5265b03bb/marketplace-operator/0.log" Jan 22 10:03:03 crc kubenswrapper[4811]: I0122 10:03:03.562052 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pbqkb_aedb9efe-c04f-46f9-9b3c-c231b81440e7/registry-server/0.log" Jan 22 10:03:03 crc kubenswrapper[4811]: I0122 10:03:03.570660 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pbqkb_aedb9efe-c04f-46f9-9b3c-c231b81440e7/extract-utilities/0.log" Jan 22 10:03:03 crc kubenswrapper[4811]: I0122 10:03:03.583928 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-pbqkb_aedb9efe-c04f-46f9-9b3c-c231b81440e7/extract-content/0.log" Jan 22 10:03:03 crc kubenswrapper[4811]: I0122 10:03:03.962336 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-65jbw_d1f41cc2-bb4a-415e-80a6-8ae31b4c354f/registry-server/0.log" Jan 22 10:03:03 crc kubenswrapper[4811]: I0122 10:03:03.968953 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-65jbw_d1f41cc2-bb4a-415e-80a6-8ae31b4c354f/extract-utilities/0.log" Jan 22 10:03:03 crc kubenswrapper[4811]: I0122 10:03:03.975709 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-65jbw_d1f41cc2-bb4a-415e-80a6-8ae31b4c354f/extract-content/0.log" Jan 22 10:03:48 crc kubenswrapper[4811]: I0122 10:03:48.309192 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4vkck"] Jan 22 10:03:48 crc kubenswrapper[4811]: E0122 10:03:48.309825 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f88300a5-12a4-4e80-9495-245c2d39dc4e" containerName="container-00" Jan 22 10:03:48 crc kubenswrapper[4811]: I0122 10:03:48.309839 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f88300a5-12a4-4e80-9495-245c2d39dc4e" containerName="container-00" Jan 22 10:03:48 crc kubenswrapper[4811]: I0122 10:03:48.310043 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="f88300a5-12a4-4e80-9495-245c2d39dc4e" containerName="container-00" Jan 22 10:03:48 crc kubenswrapper[4811]: I0122 10:03:48.311138 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4vkck" Jan 22 10:03:48 crc kubenswrapper[4811]: I0122 10:03:48.411126 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4vkck"] Jan 22 10:03:48 crc kubenswrapper[4811]: I0122 10:03:48.436997 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbr7l\" (UniqueName: \"kubernetes.io/projected/860dea3c-1cf3-45aa-977d-6d06db7ba74e-kube-api-access-gbr7l\") pod \"redhat-marketplace-4vkck\" (UID: \"860dea3c-1cf3-45aa-977d-6d06db7ba74e\") " pod="openshift-marketplace/redhat-marketplace-4vkck" Jan 22 10:03:48 crc kubenswrapper[4811]: I0122 10:03:48.437108 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/860dea3c-1cf3-45aa-977d-6d06db7ba74e-catalog-content\") pod \"redhat-marketplace-4vkck\" (UID: \"860dea3c-1cf3-45aa-977d-6d06db7ba74e\") " pod="openshift-marketplace/redhat-marketplace-4vkck" Jan 22 10:03:48 crc kubenswrapper[4811]: I0122 10:03:48.437204 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/860dea3c-1cf3-45aa-977d-6d06db7ba74e-utilities\") pod \"redhat-marketplace-4vkck\" (UID: \"860dea3c-1cf3-45aa-977d-6d06db7ba74e\") " pod="openshift-marketplace/redhat-marketplace-4vkck" Jan 22 10:03:48 crc kubenswrapper[4811]: I0122 10:03:48.539051 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/860dea3c-1cf3-45aa-977d-6d06db7ba74e-utilities\") pod \"redhat-marketplace-4vkck\" (UID: \"860dea3c-1cf3-45aa-977d-6d06db7ba74e\") " 
pod="openshift-marketplace/redhat-marketplace-4vkck" Jan 22 10:03:48 crc kubenswrapper[4811]: I0122 10:03:48.539180 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbr7l\" (UniqueName: \"kubernetes.io/projected/860dea3c-1cf3-45aa-977d-6d06db7ba74e-kube-api-access-gbr7l\") pod \"redhat-marketplace-4vkck\" (UID: \"860dea3c-1cf3-45aa-977d-6d06db7ba74e\") " pod="openshift-marketplace/redhat-marketplace-4vkck" Jan 22 10:03:48 crc kubenswrapper[4811]: I0122 10:03:48.539282 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/860dea3c-1cf3-45aa-977d-6d06db7ba74e-catalog-content\") pod \"redhat-marketplace-4vkck\" (UID: \"860dea3c-1cf3-45aa-977d-6d06db7ba74e\") " pod="openshift-marketplace/redhat-marketplace-4vkck" Jan 22 10:03:48 crc kubenswrapper[4811]: I0122 10:03:48.539506 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/860dea3c-1cf3-45aa-977d-6d06db7ba74e-utilities\") pod \"redhat-marketplace-4vkck\" (UID: \"860dea3c-1cf3-45aa-977d-6d06db7ba74e\") " pod="openshift-marketplace/redhat-marketplace-4vkck" Jan 22 10:03:48 crc kubenswrapper[4811]: I0122 10:03:48.539593 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/860dea3c-1cf3-45aa-977d-6d06db7ba74e-catalog-content\") pod \"redhat-marketplace-4vkck\" (UID: \"860dea3c-1cf3-45aa-977d-6d06db7ba74e\") " pod="openshift-marketplace/redhat-marketplace-4vkck" Jan 22 10:03:48 crc kubenswrapper[4811]: I0122 10:03:48.561883 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbr7l\" (UniqueName: \"kubernetes.io/projected/860dea3c-1cf3-45aa-977d-6d06db7ba74e-kube-api-access-gbr7l\") pod \"redhat-marketplace-4vkck\" (UID: \"860dea3c-1cf3-45aa-977d-6d06db7ba74e\") " pod="openshift-marketplace/redhat-marketplace-4vkck" Jan 22 10:03:48 crc kubenswrapper[4811]: I0122 10:03:48.625799 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4vkck" Jan 22 10:03:49 crc kubenswrapper[4811]: I0122 10:03:49.325173 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4vkck"] Jan 22 10:03:49 crc kubenswrapper[4811]: W0122 10:03:49.335960 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod860dea3c_1cf3_45aa_977d_6d06db7ba74e.slice/crio-ba3f83f01715dde92fc3a2c9cfab96dc87aec3eeb5f1669028cba4c591a6e4a4 WatchSource:0}: Error finding container ba3f83f01715dde92fc3a2c9cfab96dc87aec3eeb5f1669028cba4c591a6e4a4: Status 404 returned error can't find the container with id ba3f83f01715dde92fc3a2c9cfab96dc87aec3eeb5f1669028cba4c591a6e4a4 Jan 22 10:03:49 crc kubenswrapper[4811]: I0122 10:03:49.926014 4811 generic.go:334] "Generic (PLEG): container finished" podID="860dea3c-1cf3-45aa-977d-6d06db7ba74e" containerID="c3aa7ae40950b3b70afe9a214631aed40dbf5e939c023bf2d359a355780e7835" exitCode=0 Jan 22 10:03:49 crc kubenswrapper[4811]: I0122 10:03:49.926105 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vkck" event={"ID":"860dea3c-1cf3-45aa-977d-6d06db7ba74e","Type":"ContainerDied","Data":"c3aa7ae40950b3b70afe9a214631aed40dbf5e939c023bf2d359a355780e7835"} Jan 22 10:03:49 crc kubenswrapper[4811]: I0122 10:03:49.926242 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vkck" event={"ID":"860dea3c-1cf3-45aa-977d-6d06db7ba74e","Type":"ContainerStarted","Data":"ba3f83f01715dde92fc3a2c9cfab96dc87aec3eeb5f1669028cba4c591a6e4a4"} Jan 22 10:03:50 crc kubenswrapper[4811]: I0122 10:03:50.933882 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vkck" event={"ID":"860dea3c-1cf3-45aa-977d-6d06db7ba74e","Type":"ContainerStarted","Data":"2976159f2ac7668aea1e6a64a77305a55cff957e7415de001f6452533d4bffb1"} Jan 22 10:03:51 crc kubenswrapper[4811]: I0122 10:03:51.964432 4811 generic.go:334] "Generic (PLEG): container finished" podID="860dea3c-1cf3-45aa-977d-6d06db7ba74e" containerID="2976159f2ac7668aea1e6a64a77305a55cff957e7415de001f6452533d4bffb1" exitCode=0 Jan 22 10:03:51 crc kubenswrapper[4811]: I0122 10:03:51.964775 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vkck" event={"ID":"860dea3c-1cf3-45aa-977d-6d06db7ba74e","Type":"ContainerDied","Data":"2976159f2ac7668aea1e6a64a77305a55cff957e7415de001f6452533d4bffb1"} Jan 22 10:03:52 crc kubenswrapper[4811]: I0122 10:03:52.973969 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vkck" event={"ID":"860dea3c-1cf3-45aa-977d-6d06db7ba74e","Type":"ContainerStarted","Data":"70a117265385f79f6f66cf41fca20a55297b512c4da7a310d45d4870176ea3e5"} Jan 22 10:03:58 crc kubenswrapper[4811]: I0122 10:03:58.626603 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4vkck" Jan 22 10:03:58 crc kubenswrapper[4811]: I0122 10:03:58.626960 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4vkck" Jan 22 10:03:58 crc kubenswrapper[4811]: I0122 10:03:58.688221 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4vkck" Jan 22 10:03:58 crc kubenswrapper[4811]: I0122 10:03:58.703857 4811 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4vkck" podStartSLOduration=8.190393185 podStartE2EDuration="10.703843086s" podCreationTimestamp="2026-01-22 10:03:48 +0000 UTC" firstStartedPulling="2026-01-22 10:03:49.927405442 +0000 UTC m=+3474.249592554" lastFinishedPulling="2026-01-22 10:03:52.440855331 +0000 UTC m=+3476.763042455" observedRunningTime="2026-01-22 10:03:52.993323598 +0000 UTC m=+3477.315510721" watchObservedRunningTime="2026-01-22 10:03:58.703843086 +0000 UTC m=+3483.026030209" Jan 22 10:03:59 crc kubenswrapper[4811]: I0122 10:03:59.056143 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4vkck" Jan 22 10:03:59 crc kubenswrapper[4811]: I0122 10:03:59.101707 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4vkck"] Jan 22 10:04:01 crc kubenswrapper[4811]: I0122 10:04:01.032928 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4vkck" podUID="860dea3c-1cf3-45aa-977d-6d06db7ba74e" containerName="registry-server" containerID="cri-o://70a117265385f79f6f66cf41fca20a55297b512c4da7a310d45d4870176ea3e5" gracePeriod=2 Jan 22 10:04:01 crc kubenswrapper[4811]: I0122 10:04:01.588858 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4vkck" Jan 22 10:04:01 crc kubenswrapper[4811]: I0122 10:04:01.677915 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/860dea3c-1cf3-45aa-977d-6d06db7ba74e-utilities\") pod \"860dea3c-1cf3-45aa-977d-6d06db7ba74e\" (UID: \"860dea3c-1cf3-45aa-977d-6d06db7ba74e\") " Jan 22 10:04:01 crc kubenswrapper[4811]: I0122 10:04:01.678047 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/860dea3c-1cf3-45aa-977d-6d06db7ba74e-catalog-content\") pod \"860dea3c-1cf3-45aa-977d-6d06db7ba74e\" (UID: \"860dea3c-1cf3-45aa-977d-6d06db7ba74e\") " Jan 22 10:04:01 crc kubenswrapper[4811]: I0122 10:04:01.678174 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbr7l\" (UniqueName: \"kubernetes.io/projected/860dea3c-1cf3-45aa-977d-6d06db7ba74e-kube-api-access-gbr7l\") pod \"860dea3c-1cf3-45aa-977d-6d06db7ba74e\" (UID: \"860dea3c-1cf3-45aa-977d-6d06db7ba74e\") " Jan 22 10:04:01 crc kubenswrapper[4811]: I0122 10:04:01.678668 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/860dea3c-1cf3-45aa-977d-6d06db7ba74e-utilities" (OuterVolumeSpecName: "utilities") pod "860dea3c-1cf3-45aa-977d-6d06db7ba74e" (UID: "860dea3c-1cf3-45aa-977d-6d06db7ba74e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:04:01 crc kubenswrapper[4811]: I0122 10:04:01.690739 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/860dea3c-1cf3-45aa-977d-6d06db7ba74e-kube-api-access-gbr7l" (OuterVolumeSpecName: "kube-api-access-gbr7l") pod "860dea3c-1cf3-45aa-977d-6d06db7ba74e" (UID: "860dea3c-1cf3-45aa-977d-6d06db7ba74e"). InnerVolumeSpecName "kube-api-access-gbr7l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:04:01 crc kubenswrapper[4811]: I0122 10:04:01.701522 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/860dea3c-1cf3-45aa-977d-6d06db7ba74e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "860dea3c-1cf3-45aa-977d-6d06db7ba74e" (UID: "860dea3c-1cf3-45aa-977d-6d06db7ba74e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:04:01 crc kubenswrapper[4811]: I0122 10:04:01.780908 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/860dea3c-1cf3-45aa-977d-6d06db7ba74e-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:04:01 crc kubenswrapper[4811]: I0122 10:04:01.780947 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/860dea3c-1cf3-45aa-977d-6d06db7ba74e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:04:01 crc kubenswrapper[4811]: I0122 10:04:01.780959 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbr7l\" (UniqueName: \"kubernetes.io/projected/860dea3c-1cf3-45aa-977d-6d06db7ba74e-kube-api-access-gbr7l\") on node \"crc\" DevicePath \"\"" Jan 22 10:04:02 crc kubenswrapper[4811]: I0122 10:04:02.041417 4811 generic.go:334] "Generic (PLEG): container finished" podID="860dea3c-1cf3-45aa-977d-6d06db7ba74e" containerID="70a117265385f79f6f66cf41fca20a55297b512c4da7a310d45d4870176ea3e5" exitCode=0 Jan 22 10:04:02 crc kubenswrapper[4811]: I0122 10:04:02.041457 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vkck" event={"ID":"860dea3c-1cf3-45aa-977d-6d06db7ba74e","Type":"ContainerDied","Data":"70a117265385f79f6f66cf41fca20a55297b512c4da7a310d45d4870176ea3e5"} Jan 22 10:04:02 crc kubenswrapper[4811]: I0122 10:04:02.041488 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vkck" event={"ID":"860dea3c-1cf3-45aa-977d-6d06db7ba74e","Type":"ContainerDied","Data":"ba3f83f01715dde92fc3a2c9cfab96dc87aec3eeb5f1669028cba4c591a6e4a4"} Jan 22 10:04:02 crc kubenswrapper[4811]: I0122 10:04:02.041505 4811 scope.go:117] "RemoveContainer" containerID="70a117265385f79f6f66cf41fca20a55297b512c4da7a310d45d4870176ea3e5" Jan 22 10:04:02 crc kubenswrapper[4811]: I0122 10:04:02.042167 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4vkck" Jan 22 10:04:02 crc kubenswrapper[4811]: I0122 10:04:02.056360 4811 scope.go:117] "RemoveContainer" containerID="2976159f2ac7668aea1e6a64a77305a55cff957e7415de001f6452533d4bffb1" Jan 22 10:04:02 crc kubenswrapper[4811]: I0122 10:04:02.073384 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4vkck"] Jan 22 10:04:02 crc kubenswrapper[4811]: I0122 10:04:02.079385 4811 scope.go:117] "RemoveContainer" containerID="c3aa7ae40950b3b70afe9a214631aed40dbf5e939c023bf2d359a355780e7835" Jan 22 10:04:02 crc kubenswrapper[4811]: I0122 10:04:02.081359 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4vkck"] Jan 22 10:04:02 crc kubenswrapper[4811]: I0122 10:04:02.117654 4811 scope.go:117] "RemoveContainer" containerID="70a117265385f79f6f66cf41fca20a55297b512c4da7a310d45d4870176ea3e5" Jan 22 10:04:02 crc kubenswrapper[4811]: E0122 10:04:02.118032 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70a117265385f79f6f66cf41fca20a55297b512c4da7a310d45d4870176ea3e5\": container with ID starting with 70a117265385f79f6f66cf41fca20a55297b512c4da7a310d45d4870176ea3e5 not found: ID does not exist" containerID="70a117265385f79f6f66cf41fca20a55297b512c4da7a310d45d4870176ea3e5" Jan 22 10:04:02 crc kubenswrapper[4811]: I0122 10:04:02.118077 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70a117265385f79f6f66cf41fca20a55297b512c4da7a310d45d4870176ea3e5"} err="failed to get container status \"70a117265385f79f6f66cf41fca20a55297b512c4da7a310d45d4870176ea3e5\": rpc error: code = NotFound desc = could not find container \"70a117265385f79f6f66cf41fca20a55297b512c4da7a310d45d4870176ea3e5\": container with ID starting with 70a117265385f79f6f66cf41fca20a55297b512c4da7a310d45d4870176ea3e5 not found: ID does not exist" Jan 22 10:04:02 crc kubenswrapper[4811]: I0122 10:04:02.118107 4811 scope.go:117] "RemoveContainer" containerID="2976159f2ac7668aea1e6a64a77305a55cff957e7415de001f6452533d4bffb1" Jan 22 10:04:02 crc kubenswrapper[4811]: E0122 10:04:02.118368 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2976159f2ac7668aea1e6a64a77305a55cff957e7415de001f6452533d4bffb1\": container with ID starting with 2976159f2ac7668aea1e6a64a77305a55cff957e7415de001f6452533d4bffb1 not found: ID does not exist" containerID="2976159f2ac7668aea1e6a64a77305a55cff957e7415de001f6452533d4bffb1" Jan 22 10:04:02 crc kubenswrapper[4811]: I0122 10:04:02.118389 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2976159f2ac7668aea1e6a64a77305a55cff957e7415de001f6452533d4bffb1"} err="failed to get container status \"2976159f2ac7668aea1e6a64a77305a55cff957e7415de001f6452533d4bffb1\": rpc error: code = NotFound desc = could not find container \"2976159f2ac7668aea1e6a64a77305a55cff957e7415de001f6452533d4bffb1\": container with ID starting with 2976159f2ac7668aea1e6a64a77305a55cff957e7415de001f6452533d4bffb1 not found: ID does not exist" Jan 22 10:04:02 crc kubenswrapper[4811]: I0122 10:04:02.118428 4811 scope.go:117] "RemoveContainer" containerID="c3aa7ae40950b3b70afe9a214631aed40dbf5e939c023bf2d359a355780e7835" Jan 22 10:04:02 crc kubenswrapper[4811]: E0122 10:04:02.118617 4811 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"c3aa7ae40950b3b70afe9a214631aed40dbf5e939c023bf2d359a355780e7835\": container with ID starting with c3aa7ae40950b3b70afe9a214631aed40dbf5e939c023bf2d359a355780e7835 not found: ID does not exist" containerID="c3aa7ae40950b3b70afe9a214631aed40dbf5e939c023bf2d359a355780e7835" Jan 22 10:04:02 crc kubenswrapper[4811]: I0122 10:04:02.118654 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3aa7ae40950b3b70afe9a214631aed40dbf5e939c023bf2d359a355780e7835"} err="failed to get container status \"c3aa7ae40950b3b70afe9a214631aed40dbf5e939c023bf2d359a355780e7835\": rpc error: code = NotFound desc = could not find container \"c3aa7ae40950b3b70afe9a214631aed40dbf5e939c023bf2d359a355780e7835\": container with ID starting with c3aa7ae40950b3b70afe9a214631aed40dbf5e939c023bf2d359a355780e7835 not found: ID does not exist" Jan 22 10:04:04 crc kubenswrapper[4811]: I0122 10:04:04.000532 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="860dea3c-1cf3-45aa-977d-6d06db7ba74e" path="/var/lib/kubelet/pods/860dea3c-1cf3-45aa-977d-6d06db7ba74e/volumes" Jan 22 10:04:05 crc kubenswrapper[4811]: I0122 10:04:05.501402 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:04:05 crc kubenswrapper[4811]: I0122 10:04:05.501450 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:04:09 crc kubenswrapper[4811]: I0122 10:04:09.763679 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ppfm9"] Jan 22 10:04:09 crc kubenswrapper[4811]: E0122 10:04:09.764292 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="860dea3c-1cf3-45aa-977d-6d06db7ba74e" containerName="extract-content" Jan 22 10:04:09 crc kubenswrapper[4811]: I0122 10:04:09.764304 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="860dea3c-1cf3-45aa-977d-6d06db7ba74e" containerName="extract-content" Jan 22 10:04:09 crc kubenswrapper[4811]: E0122 10:04:09.764321 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="860dea3c-1cf3-45aa-977d-6d06db7ba74e" containerName="extract-utilities" Jan 22 10:04:09 crc kubenswrapper[4811]: I0122 10:04:09.764327 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="860dea3c-1cf3-45aa-977d-6d06db7ba74e" containerName="extract-utilities" Jan 22 10:04:09 crc kubenswrapper[4811]: E0122 10:04:09.764335 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="860dea3c-1cf3-45aa-977d-6d06db7ba74e" containerName="registry-server" Jan 22 10:04:09 crc kubenswrapper[4811]: I0122 10:04:09.764342 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="860dea3c-1cf3-45aa-977d-6d06db7ba74e" containerName="registry-server" Jan 22 10:04:09 crc kubenswrapper[4811]: I0122 10:04:09.764504 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="860dea3c-1cf3-45aa-977d-6d06db7ba74e" containerName="registry-server" Jan 22 10:04:09 crc kubenswrapper[4811]: I0122 
10:04:09.765570 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ppfm9" Jan 22 10:04:09 crc kubenswrapper[4811]: I0122 10:04:09.777696 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ppfm9"] Jan 22 10:04:09 crc kubenswrapper[4811]: I0122 10:04:09.818186 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77c08286-fb8e-49c1-a29d-91754ab41a94-catalog-content\") pod \"redhat-operators-ppfm9\" (UID: \"77c08286-fb8e-49c1-a29d-91754ab41a94\") " pod="openshift-marketplace/redhat-operators-ppfm9" Jan 22 10:04:09 crc kubenswrapper[4811]: I0122 10:04:09.818316 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmcmq\" (UniqueName: \"kubernetes.io/projected/77c08286-fb8e-49c1-a29d-91754ab41a94-kube-api-access-wmcmq\") pod \"redhat-operators-ppfm9\" (UID: \"77c08286-fb8e-49c1-a29d-91754ab41a94\") " pod="openshift-marketplace/redhat-operators-ppfm9" Jan 22 10:04:09 crc kubenswrapper[4811]: I0122 10:04:09.818357 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77c08286-fb8e-49c1-a29d-91754ab41a94-utilities\") pod \"redhat-operators-ppfm9\" (UID: \"77c08286-fb8e-49c1-a29d-91754ab41a94\") " pod="openshift-marketplace/redhat-operators-ppfm9" Jan 22 10:04:09 crc kubenswrapper[4811]: I0122 10:04:09.919595 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77c08286-fb8e-49c1-a29d-91754ab41a94-catalog-content\") pod \"redhat-operators-ppfm9\" (UID: \"77c08286-fb8e-49c1-a29d-91754ab41a94\") " pod="openshift-marketplace/redhat-operators-ppfm9" Jan 22 10:04:09 crc kubenswrapper[4811]: I0122 10:04:09.919699 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmcmq\" (UniqueName: \"kubernetes.io/projected/77c08286-fb8e-49c1-a29d-91754ab41a94-kube-api-access-wmcmq\") pod \"redhat-operators-ppfm9\" (UID: \"77c08286-fb8e-49c1-a29d-91754ab41a94\") " pod="openshift-marketplace/redhat-operators-ppfm9" Jan 22 10:04:09 crc kubenswrapper[4811]: I0122 10:04:09.919729 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77c08286-fb8e-49c1-a29d-91754ab41a94-utilities\") pod \"redhat-operators-ppfm9\" (UID: \"77c08286-fb8e-49c1-a29d-91754ab41a94\") " pod="openshift-marketplace/redhat-operators-ppfm9" Jan 22 10:04:09 crc kubenswrapper[4811]: I0122 10:04:09.919993 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77c08286-fb8e-49c1-a29d-91754ab41a94-catalog-content\") pod \"redhat-operators-ppfm9\" (UID: \"77c08286-fb8e-49c1-a29d-91754ab41a94\") " pod="openshift-marketplace/redhat-operators-ppfm9" Jan 22 10:04:09 crc kubenswrapper[4811]: I0122 10:04:09.920107 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77c08286-fb8e-49c1-a29d-91754ab41a94-utilities\") pod \"redhat-operators-ppfm9\" (UID: \"77c08286-fb8e-49c1-a29d-91754ab41a94\") " pod="openshift-marketplace/redhat-operators-ppfm9" Jan 22 10:04:09 crc kubenswrapper[4811]: I0122 10:04:09.934814 4811 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmcmq\" (UniqueName: \"kubernetes.io/projected/77c08286-fb8e-49c1-a29d-91754ab41a94-kube-api-access-wmcmq\") pod \"redhat-operators-ppfm9\" (UID: \"77c08286-fb8e-49c1-a29d-91754ab41a94\") " pod="openshift-marketplace/redhat-operators-ppfm9" Jan 22 10:04:10 crc kubenswrapper[4811]: I0122 10:04:10.080578 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ppfm9" Jan 22 10:04:10 crc kubenswrapper[4811]: I0122 10:04:10.484471 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ppfm9"] Jan 22 10:04:11 crc kubenswrapper[4811]: I0122 10:04:11.101260 4811 generic.go:334] "Generic (PLEG): container finished" podID="77c08286-fb8e-49c1-a29d-91754ab41a94" containerID="3a37c4a56598a7208a50504f798077e909e0c53ca9d1182d46bf5a8c9446fb8d" exitCode=0 Jan 22 10:04:11 crc kubenswrapper[4811]: I0122 10:04:11.101478 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ppfm9" event={"ID":"77c08286-fb8e-49c1-a29d-91754ab41a94","Type":"ContainerDied","Data":"3a37c4a56598a7208a50504f798077e909e0c53ca9d1182d46bf5a8c9446fb8d"} Jan 22 10:04:11 crc kubenswrapper[4811]: I0122 10:04:11.101520 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ppfm9" event={"ID":"77c08286-fb8e-49c1-a29d-91754ab41a94","Type":"ContainerStarted","Data":"54113504168f25c04120f5a4303917149b1157d71aa1d322fb16c2ee03fd830a"} Jan 22 10:04:13 crc kubenswrapper[4811]: I0122 10:04:13.121578 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ppfm9" event={"ID":"77c08286-fb8e-49c1-a29d-91754ab41a94","Type":"ContainerStarted","Data":"116265326e69712e2285835e80e785213db90f095e34d6bb8eca6e8d38828e44"} Jan 22 10:04:14 crc kubenswrapper[4811]: I0122 10:04:14.337143 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-rl9k7_346ed4cd-2bb8-470d-a275-6c297994fb3f/controller/0.log" Jan 22 10:04:14 crc kubenswrapper[4811]: I0122 10:04:14.344652 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-rl9k7_346ed4cd-2bb8-470d-a275-6c297994fb3f/kube-rbac-proxy/0.log" Jan 22 10:04:14 crc kubenswrapper[4811]: I0122 10:04:14.360358 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/controller/0.log" Jan 22 10:04:14 crc kubenswrapper[4811]: I0122 10:04:14.586746 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-xvdbh_97d60b95-f52c-4946-919a-e8fd73251ed5/cert-manager-controller/0.log" Jan 22 10:04:14 crc kubenswrapper[4811]: I0122 10:04:14.607233 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-jbj4c_4dbc71dd-a371-4735-bc7e-6c29eb855fbd/cert-manager-cainjector/0.log" Jan 22 10:04:14 crc kubenswrapper[4811]: I0122 10:04:14.615090 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-sgnhm_34e7fc62-f1c1-41cb-b44c-2ef705fa2a15/cert-manager-webhook/0.log" Jan 22 10:04:15 crc kubenswrapper[4811]: I0122 10:04:15.137670 4811 generic.go:334] "Generic (PLEG): container finished" podID="77c08286-fb8e-49c1-a29d-91754ab41a94" containerID="116265326e69712e2285835e80e785213db90f095e34d6bb8eca6e8d38828e44" exitCode=0 
Jan 22 10:04:15 crc kubenswrapper[4811]: I0122 10:04:15.137835 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ppfm9" event={"ID":"77c08286-fb8e-49c1-a29d-91754ab41a94","Type":"ContainerDied","Data":"116265326e69712e2285835e80e785213db90f095e34d6bb8eca6e8d38828e44"}
Jan 22 10:04:15 crc kubenswrapper[4811]: I0122 10:04:15.582638 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/frr/0.log"
Jan 22 10:04:15 crc kubenswrapper[4811]: I0122 10:04:15.590909 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/reloader/0.log"
Jan 22 10:04:15 crc kubenswrapper[4811]: I0122 10:04:15.598008 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/frr-metrics/0.log"
Jan 22 10:04:15 crc kubenswrapper[4811]: I0122 10:04:15.611924 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/kube-rbac-proxy/0.log"
Jan 22 10:04:15 crc kubenswrapper[4811]: I0122 10:04:15.642308 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/kube-rbac-proxy-frr/0.log"
Jan 22 10:04:15 crc kubenswrapper[4811]: I0122 10:04:15.646446 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/cp-frr-files/0.log"
Jan 22 10:04:15 crc kubenswrapper[4811]: I0122 10:04:15.652205 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/cp-reloader/0.log"
Jan 22 10:04:15 crc kubenswrapper[4811]: I0122 10:04:15.663470 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/cp-metrics/0.log"
Jan 22 10:04:15 crc kubenswrapper[4811]: I0122 10:04:15.801756 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-zfggh_2f6eae9c-374b-4ac3-b5d7-04267fe9bf73/frr-k8s-webhook-server/0.log"
Jan 22 10:04:15 crc kubenswrapper[4811]: I0122 10:04:15.871162 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-64bd67c58d-k58sk_13617657-7245-4223-9b20-03a56378edaf/manager/0.log"
Jan 22 10:04:15 crc kubenswrapper[4811]: I0122 10:04:15.891895 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5bc67d6df-ckhh4_1008e895-ec53-4fdd-9423-bbb4d249a6b9/webhook-server/0.log"
Jan 22 10:04:16 crc kubenswrapper[4811]: I0122 10:04:16.170764 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-k88w9_faa36c07-3c7a-4b4a-a04e-58b43a178890/speaker/0.log"
Jan 22 10:04:16 crc kubenswrapper[4811]: I0122 10:04:16.188570 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-k88w9_faa36c07-3c7a-4b4a-a04e-58b43a178890/kube-rbac-proxy/0.log"
Jan 22 10:04:17 crc kubenswrapper[4811]: I0122 10:04:17.160831 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ppfm9" event={"ID":"77c08286-fb8e-49c1-a29d-91754ab41a94","Type":"ContainerStarted","Data":"88eb9499ce971579b77dd4ba454b06e17b2fad41d2d44208c049250998d40c41"}
Jan 22 10:04:17 crc kubenswrapper[4811]: I0122 10:04:17.187461 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ppfm9" podStartSLOduration=3.3255365550000002 podStartE2EDuration="8.187446112s" podCreationTimestamp="2026-01-22 10:04:09 +0000 UTC" firstStartedPulling="2026-01-22 10:04:11.102724045 +0000 UTC m=+3495.424911169" lastFinishedPulling="2026-01-22 10:04:15.964633603 +0000 UTC m=+3500.286820726" observedRunningTime="2026-01-22 10:04:17.178971851 +0000 UTC m=+3501.501158974" watchObservedRunningTime="2026-01-22 10:04:17.187446112 +0000 UTC m=+3501.509633235"
Jan 22 10:04:17 crc kubenswrapper[4811]: I0122 10:04:17.213595 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3078a1b977323d6d8f95a4018ef06f377c198eb1741282ac05e933b60384v48_d014441f-4913-4583-ba8e-b1c20aaeed47/extract/0.log"
Jan 22 10:04:17 crc kubenswrapper[4811]: I0122 10:04:17.228084 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3078a1b977323d6d8f95a4018ef06f377c198eb1741282ac05e933b60384v48_d014441f-4913-4583-ba8e-b1c20aaeed47/util/0.log"
Jan 22 10:04:17 crc kubenswrapper[4811]: I0122 10:04:17.252188 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3078a1b977323d6d8f95a4018ef06f377c198eb1741282ac05e933b60384v48_d014441f-4913-4583-ba8e-b1c20aaeed47/pull/0.log"
Jan 22 10:04:17 crc kubenswrapper[4811]: I0122 10:04:17.322886 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-59dd8b7cbf-pklcs_09ad3a19-244b-4685-8c96-0bee227b6547/manager/0.log"
Jan 22 10:04:17 crc kubenswrapper[4811]: I0122 10:04:17.372207 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-rgwhg_e6fe0bc0-30b4-4a2f-b36d-93d5b288ecf8/manager/0.log"
Jan 22 10:04:17 crc kubenswrapper[4811]: I0122 10:04:17.389494 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-2ltqr_62aa676a-95ae-40a8-9db5-b5fd24a293c2/manager/0.log"
Jan 22 10:04:17 crc kubenswrapper[4811]: I0122 10:04:17.484777 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-26vqb_b0f07719-5203-4d79-82b4-995b8af81a00/manager/0.log"
Jan 22 10:04:17 crc kubenswrapper[4811]: I0122 10:04:17.512608 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-vbbnq_e019bc4b-f0e7-4a4f-a42c-1486010a63fd/manager/0.log"
Jan 22 10:04:17 crc kubenswrapper[4811]: I0122 10:04:17.531918 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-7p5h9_62a9fc61-630e-4f4d-9788-f21e25ab4dda/manager/0.log"
Jan 22 10:04:17 crc kubenswrapper[4811]: I0122 10:04:17.706284 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-xvdbh_97d60b95-f52c-4946-919a-e8fd73251ed5/cert-manager-controller/0.log"
Jan 22 10:04:17 crc kubenswrapper[4811]: I0122 10:04:17.730156 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-jbj4c_4dbc71dd-a371-4735-bc7e-6c29eb855fbd/cert-manager-cainjector/0.log"
Jan 22 10:04:17 crc kubenswrapper[4811]: I0122 10:04:17.740795 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-sgnhm_34e7fc62-f1c1-41cb-b44c-2ef705fa2a15/cert-manager-webhook/0.log"
Jan 22 10:04:17 crc kubenswrapper[4811]: I0122 10:04:17.819546 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-54ccf4f85d-r6z6t_81d4cd92-880c-4806-ab95-fcb009827075/manager/0.log"
Jan 22 10:04:17 crc kubenswrapper[4811]: I0122 10:04:17.830989 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-69d6c9f5b8-fx6zn_ce893825-4e8e-4c9b-b37e-a974d7cfda21/manager/0.log"
Jan 22 10:04:17 crc kubenswrapper[4811]: I0122 10:04:17.897202 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-99m2t_9dfc4274-a6f5-4dcb-8bcf-b2e0ff337574/manager/0.log"
Jan 22 10:04:17 crc kubenswrapper[4811]: I0122 10:04:17.946214 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-4wtlm_688057d8-0445-42c1-b073-83deb026ab4c/manager/0.log"
Jan 22 10:04:17 crc kubenswrapper[4811]: I0122 10:04:17.976031 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-kc9m5_02697a04-4401-498c-9b69-ff0b57ce8f4b/manager/0.log"
Jan 22 10:04:18 crc kubenswrapper[4811]: I0122 10:04:18.016950 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5d8f59fb49-tll52_c284b86e-5a0e-4bd4-aa67-4e2c208ea4fa/manager/0.log"
Jan 22 10:04:18 crc kubenswrapper[4811]: I0122 10:04:18.086344 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-h7wzt_b157cb38-af8a-41bf-a29a-2da5b59aa500/manager/0.log"
Jan 22 10:04:18 crc kubenswrapper[4811]: I0122 10:04:18.094930 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-t9djx_a247bb8f-a274-481d-916b-8ad80521af31/manager/0.log"
Jan 22 10:04:18 crc kubenswrapper[4811]: I0122 10:04:18.110492 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7fbbcdb4d6qmzp5_7c209919-fd54-40e8-a741-7006cf8dd361/manager/0.log"
Jan 22 10:04:18 crc kubenswrapper[4811]: I0122 10:04:18.207842 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5cd76577f9-kn8dt_a01a5eb9-0bef-4a6b-af9e-d71281e2ae34/operator/0.log"
Jan 22 10:04:18 crc kubenswrapper[4811]: I0122 10:04:18.717798 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-9pxnj_8b7094aa-cc4a-49eb-be77-715a4efbc1d0/control-plane-machine-set-operator/0.log"
Jan 22 10:04:18 crc kubenswrapper[4811]: I0122 10:04:18.740758 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-rx42r_8a9d91fa-d887-4128-af43-cfe3cad79784/kube-rbac-proxy/0.log"
Jan 22 10:04:18 crc kubenswrapper[4811]: I0122 10:04:18.752028 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-rx42r_8a9d91fa-d887-4128-af43-cfe3cad79784/machine-api-operator/0.log"
Jan 22 10:04:19 crc kubenswrapper[4811]: I0122 10:04:19.246838 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-647bb87bbd-v227g_c0b74933-8fe4-4fb1-82af-eda7df5c3c06/manager/0.log"
Jan 22 10:04:19 crc kubenswrapper[4811]: I0122 10:04:19.301429 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-c2sr9_5518af80-1f74-4caf-8bc0-80680646bfca/registry-server/0.log"
Jan 22 10:04:19 crc kubenswrapper[4811]: I0122 10:04:19.346844 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-xp5jv_e343c2da-412a-4226-b711-81f83fdbb04b/manager/0.log"
Jan 22 10:04:19 crc kubenswrapper[4811]: I0122 10:04:19.400548 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-rvsm7_b579b636-697b-4a23-9de7-1f9a8537eb94/manager/0.log"
Jan 22 10:04:19 crc kubenswrapper[4811]: I0122 10:04:19.417818 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-2l9vr_c624375e-a5cf-49b8-a54a-5770a6c7e738/operator/0.log"
Jan 22 10:04:19 crc kubenswrapper[4811]: I0122 10:04:19.426119 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-4xkc8_b3409f06-ef57-4717-b3e2-9b4f788fd7f0/manager/0.log"
Jan 22 10:04:19 crc kubenswrapper[4811]: I0122 10:04:19.476619 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-627nz_42323d0d-05b6-4a0d-a809-405dec7c2893/manager/0.log"
Jan 22 10:04:19 crc kubenswrapper[4811]: I0122 10:04:19.486365 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-mg5cr_d990df50-3df1-46b6-b6df-5b84bf8eeb20/manager/0.log"
Jan 22 10:04:19 crc kubenswrapper[4811]: I0122 10:04:19.493922 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5ffb9c6597-kxs2j_15eb97d5-2508-4c32-8b7e-65f1015767cf/manager/0.log"
Jan 22 10:04:20 crc kubenswrapper[4811]: I0122 10:04:20.081259 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ppfm9"
Jan 22 10:04:20 crc kubenswrapper[4811]: I0122 10:04:20.081316 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ppfm9"
Jan 22 10:04:20 crc kubenswrapper[4811]: I0122 10:04:20.169557 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3078a1b977323d6d8f95a4018ef06f377c198eb1741282ac05e933b60384v48_d014441f-4913-4583-ba8e-b1c20aaeed47/extract/0.log"
Jan 22 10:04:20 crc kubenswrapper[4811]: I0122 10:04:20.177233 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3078a1b977323d6d8f95a4018ef06f377c198eb1741282ac05e933b60384v48_d014441f-4913-4583-ba8e-b1c20aaeed47/util/0.log"
Jan 22 10:04:20 crc kubenswrapper[4811]: I0122 10:04:20.190328 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3078a1b977323d6d8f95a4018ef06f377c198eb1741282ac05e933b60384v48_d014441f-4913-4583-ba8e-b1c20aaeed47/pull/0.log"
Jan 22 10:04:20 crc kubenswrapper[4811]: I0122 10:04:20.260144 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-59dd8b7cbf-pklcs_09ad3a19-244b-4685-8c96-0bee227b6547/manager/0.log"
Jan 22 10:04:20 crc kubenswrapper[4811]: I0122 10:04:20.315084 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-rgwhg_e6fe0bc0-30b4-4a2f-b36d-93d5b288ecf8/manager/0.log"
Jan 22 10:04:20 crc kubenswrapper[4811]: I0122 10:04:20.327694 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-2ltqr_62aa676a-95ae-40a8-9db5-b5fd24a293c2/manager/0.log"
Jan 22 10:04:20 crc kubenswrapper[4811]: I0122 10:04:20.418913 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-26vqb_b0f07719-5203-4d79-82b4-995b8af81a00/manager/0.log"
Jan 22 10:04:20 crc kubenswrapper[4811]: I0122 10:04:20.428557 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-vbbnq_e019bc4b-f0e7-4a4f-a42c-1486010a63fd/manager/0.log"
Jan 22 10:04:20 crc kubenswrapper[4811]: I0122 10:04:20.447019 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-7p5h9_62a9fc61-630e-4f4d-9788-f21e25ab4dda/manager/0.log"
Jan 22 10:04:20 crc kubenswrapper[4811]: I0122 10:04:20.680687 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-54ccf4f85d-r6z6t_81d4cd92-880c-4806-ab95-fcb009827075/manager/0.log"
Jan 22 10:04:20 crc kubenswrapper[4811]: I0122 10:04:20.691430 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-69d6c9f5b8-fx6zn_ce893825-4e8e-4c9b-b37e-a974d7cfda21/manager/0.log"
Jan 22 10:04:20 crc kubenswrapper[4811]: I0122 10:04:20.745167 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-99m2t_9dfc4274-a6f5-4dcb-8bcf-b2e0ff337574/manager/0.log"
Jan 22 10:04:20 crc kubenswrapper[4811]: I0122 10:04:20.790921 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-4wtlm_688057d8-0445-42c1-b073-83deb026ab4c/manager/0.log"
Jan 22 10:04:20 crc kubenswrapper[4811]: I0122 10:04:20.823276 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-kc9m5_02697a04-4401-498c-9b69-ff0b57ce8f4b/manager/0.log"
Jan 22 10:04:20 crc kubenswrapper[4811]: I0122 10:04:20.868899 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5d8f59fb49-tll52_c284b86e-5a0e-4bd4-aa67-4e2c208ea4fa/manager/0.log"
Jan 22 10:04:20 crc kubenswrapper[4811]: I0122 10:04:20.923276 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-h7wzt_b157cb38-af8a-41bf-a29a-2da5b59aa500/manager/0.log"
Jan 22 10:04:20 crc kubenswrapper[4811]: I0122 10:04:20.932977 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-t9djx_a247bb8f-a274-481d-916b-8ad80521af31/manager/0.log"
Jan 22 10:04:20 crc kubenswrapper[4811]: I0122 10:04:20.949713 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7fbbcdb4d6qmzp5_7c209919-fd54-40e8-a741-7006cf8dd361/manager/0.log"
Jan 22 10:04:21 crc kubenswrapper[4811]: I0122 10:04:21.058649 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5cd76577f9-kn8dt_a01a5eb9-0bef-4a6b-af9e-d71281e2ae34/operator/0.log"
Jan 22 10:04:21 crc kubenswrapper[4811]: I0122 10:04:21.121826 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ppfm9" podUID="77c08286-fb8e-49c1-a29d-91754ab41a94" containerName="registry-server" probeResult="failure" output=<
Jan 22 10:04:21 crc kubenswrapper[4811]: timeout: failed to connect service ":50051" within 1s
Jan 22 10:04:21 crc kubenswrapper[4811]: >
Jan 22 10:04:21 crc kubenswrapper[4811]: I0122 10:04:21.315956 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-22qhf_d63fcc2e-ef3c-4a10-9444-43070aa0dc77/nmstate-console-plugin/0.log"
Jan 22 10:04:21 crc kubenswrapper[4811]: I0122 10:04:21.333611 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-tvjnz_74fc22de-195f-452c-b18c-f12c53f2465f/nmstate-handler/0.log"
Jan 22 10:04:21 crc kubenswrapper[4811]: I0122 10:04:21.349861 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-nmrg4_66e8ec28-33fd-440b-9064-dd5c40cf4b61/nmstate-metrics/0.log"
Jan 22 10:04:21 crc kubenswrapper[4811]: I0122 10:04:21.367307 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-nmrg4_66e8ec28-33fd-440b-9064-dd5c40cf4b61/kube-rbac-proxy/0.log"
Jan 22 10:04:21 crc kubenswrapper[4811]: I0122 10:04:21.381727 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-v76qn_952cfa08-4a5f-43b8-aa83-58839cc92523/nmstate-operator/0.log"
Jan 22 10:04:21 crc kubenswrapper[4811]: I0122 10:04:21.395837 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-tnk97_9e9c9633-f916-440c-b02c-5bb58eb51e76/nmstate-webhook/0.log"
Jan 22 10:04:22 crc kubenswrapper[4811]: I0122 10:04:22.200345 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-647bb87bbd-v227g_c0b74933-8fe4-4fb1-82af-eda7df5c3c06/manager/0.log"
Jan 22 10:04:22 crc kubenswrapper[4811]: I0122 10:04:22.245038 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-c2sr9_5518af80-1f74-4caf-8bc0-80680646bfca/registry-server/0.log"
Jan 22 10:04:22 crc kubenswrapper[4811]: I0122 10:04:22.289786 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-xp5jv_e343c2da-412a-4226-b711-81f83fdbb04b/manager/0.log"
Jan 22 10:04:22 crc kubenswrapper[4811]: I0122 10:04:22.316708 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-rvsm7_b579b636-697b-4a23-9de7-1f9a8537eb94/manager/0.log"
Jan 22 10:04:22 crc kubenswrapper[4811]: I0122 10:04:22.332586 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-2l9vr_c624375e-a5cf-49b8-a54a-5770a6c7e738/operator/0.log"
Jan 22 10:04:22 crc kubenswrapper[4811]: I0122 10:04:22.341406 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-4xkc8_b3409f06-ef57-4717-b3e2-9b4f788fd7f0/manager/0.log"
Jan 22 10:04:22 crc kubenswrapper[4811]: I0122 10:04:22.396151 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-627nz_42323d0d-05b6-4a0d-a809-405dec7c2893/manager/0.log"
Jan 22 10:04:22 crc kubenswrapper[4811]: I0122 10:04:22.409449 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-mg5cr_d990df50-3df1-46b6-b6df-5b84bf8eeb20/manager/0.log"
Jan 22 10:04:22 crc kubenswrapper[4811]: I0122 10:04:22.420327 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5ffb9c6597-kxs2j_15eb97d5-2508-4c32-8b7e-65f1015767cf/manager/0.log"
Jan 22 10:04:23 crc kubenswrapper[4811]: I0122 10:04:23.899278 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-9g4j8_3d23c9c9-89ca-4db5-99dc-1e5b9f80be38/kube-multus-additional-cni-plugins/0.log"
Jan 22 10:04:23 crc kubenswrapper[4811]: I0122 10:04:23.906036 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-9g4j8_3d23c9c9-89ca-4db5-99dc-1e5b9f80be38/egress-router-binary-copy/0.log"
Jan 22 10:04:23 crc kubenswrapper[4811]: I0122 10:04:23.910653 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-9g4j8_3d23c9c9-89ca-4db5-99dc-1e5b9f80be38/cni-plugins/0.log"
Jan 22 10:04:23 crc kubenswrapper[4811]: I0122 10:04:23.916116 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-9g4j8_3d23c9c9-89ca-4db5-99dc-1e5b9f80be38/bond-cni-plugin/0.log"
Jan 22 10:04:23 crc kubenswrapper[4811]: I0122 10:04:23.921446 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-9g4j8_3d23c9c9-89ca-4db5-99dc-1e5b9f80be38/routeoverride-cni/0.log"
Jan 22 10:04:23 crc kubenswrapper[4811]: I0122 10:04:23.926904 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-9g4j8_3d23c9c9-89ca-4db5-99dc-1e5b9f80be38/whereabouts-cni-bincopy/0.log"
Jan 22 10:04:23 crc kubenswrapper[4811]: I0122 10:04:23.932942 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-9g4j8_3d23c9c9-89ca-4db5-99dc-1e5b9f80be38/whereabouts-cni/0.log"
Jan 22 10:04:23 crc kubenswrapper[4811]: I0122 10:04:23.958603 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-78zjg_cb22b2ae-6c13-482b-b827-5200e2be87ca/multus-admission-controller/0.log"
Jan 22 10:04:23 crc kubenswrapper[4811]: I0122 10:04:23.963548 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-78zjg_cb22b2ae-6c13-482b-b827-5200e2be87ca/kube-rbac-proxy/0.log"
Jan 22 10:04:23 crc kubenswrapper[4811]: I0122 10:04:23.996187 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kfqgt_f2555861-d1bb-4f21-be4a-165ed9212932/kube-multus/2.log"
Jan 22 10:04:24 crc kubenswrapper[4811]: I0122 10:04:24.070392 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kfqgt_f2555861-d1bb-4f21-be4a-165ed9212932/kube-multus/3.log"
Jan
22 10:04:24 crc kubenswrapper[4811]: I0122 10:04:24.094650 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-bhj4l_de4b38a0-0c7a-4693-9f92-40fefd6bc9b4/network-metrics-daemon/0.log" Jan 22 10:04:24 crc kubenswrapper[4811]: I0122 10:04:24.098722 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-bhj4l_de4b38a0-0c7a-4693-9f92-40fefd6bc9b4/kube-rbac-proxy/0.log" Jan 22 10:04:30 crc kubenswrapper[4811]: I0122 10:04:30.120954 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ppfm9" Jan 22 10:04:30 crc kubenswrapper[4811]: I0122 10:04:30.160834 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ppfm9" Jan 22 10:04:30 crc kubenswrapper[4811]: I0122 10:04:30.359060 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ppfm9"] Jan 22 10:04:31 crc kubenswrapper[4811]: I0122 10:04:31.261900 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ppfm9" podUID="77c08286-fb8e-49c1-a29d-91754ab41a94" containerName="registry-server" containerID="cri-o://88eb9499ce971579b77dd4ba454b06e17b2fad41d2d44208c049250998d40c41" gracePeriod=2 Jan 22 10:04:31 crc kubenswrapper[4811]: I0122 10:04:31.830153 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ppfm9" Jan 22 10:04:31 crc kubenswrapper[4811]: I0122 10:04:31.886522 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmcmq\" (UniqueName: \"kubernetes.io/projected/77c08286-fb8e-49c1-a29d-91754ab41a94-kube-api-access-wmcmq\") pod \"77c08286-fb8e-49c1-a29d-91754ab41a94\" (UID: \"77c08286-fb8e-49c1-a29d-91754ab41a94\") " Jan 22 10:04:31 crc kubenswrapper[4811]: I0122 10:04:31.886669 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77c08286-fb8e-49c1-a29d-91754ab41a94-utilities\") pod \"77c08286-fb8e-49c1-a29d-91754ab41a94\" (UID: \"77c08286-fb8e-49c1-a29d-91754ab41a94\") " Jan 22 10:04:31 crc kubenswrapper[4811]: I0122 10:04:31.886912 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77c08286-fb8e-49c1-a29d-91754ab41a94-catalog-content\") pod \"77c08286-fb8e-49c1-a29d-91754ab41a94\" (UID: \"77c08286-fb8e-49c1-a29d-91754ab41a94\") " Jan 22 10:04:31 crc kubenswrapper[4811]: I0122 10:04:31.887475 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77c08286-fb8e-49c1-a29d-91754ab41a94-utilities" (OuterVolumeSpecName: "utilities") pod "77c08286-fb8e-49c1-a29d-91754ab41a94" (UID: "77c08286-fb8e-49c1-a29d-91754ab41a94"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:04:31 crc kubenswrapper[4811]: I0122 10:04:31.892503 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77c08286-fb8e-49c1-a29d-91754ab41a94-kube-api-access-wmcmq" (OuterVolumeSpecName: "kube-api-access-wmcmq") pod "77c08286-fb8e-49c1-a29d-91754ab41a94" (UID: "77c08286-fb8e-49c1-a29d-91754ab41a94"). InnerVolumeSpecName "kube-api-access-wmcmq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:04:31 crc kubenswrapper[4811]: I0122 10:04:31.988690 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77c08286-fb8e-49c1-a29d-91754ab41a94-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "77c08286-fb8e-49c1-a29d-91754ab41a94" (UID: "77c08286-fb8e-49c1-a29d-91754ab41a94"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:04:31 crc kubenswrapper[4811]: I0122 10:04:31.990041 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77c08286-fb8e-49c1-a29d-91754ab41a94-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:04:31 crc kubenswrapper[4811]: I0122 10:04:31.990068 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmcmq\" (UniqueName: \"kubernetes.io/projected/77c08286-fb8e-49c1-a29d-91754ab41a94-kube-api-access-wmcmq\") on node \"crc\" DevicePath \"\"" Jan 22 10:04:31 crc kubenswrapper[4811]: I0122 10:04:31.990079 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77c08286-fb8e-49c1-a29d-91754ab41a94-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:04:32 crc kubenswrapper[4811]: I0122 10:04:32.270111 4811 generic.go:334] "Generic (PLEG): container finished" podID="77c08286-fb8e-49c1-a29d-91754ab41a94" containerID="88eb9499ce971579b77dd4ba454b06e17b2fad41d2d44208c049250998d40c41" exitCode=0 Jan 22 10:04:32 crc kubenswrapper[4811]: I0122 10:04:32.270147 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ppfm9" Jan 22 10:04:32 crc kubenswrapper[4811]: I0122 10:04:32.270148 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ppfm9" event={"ID":"77c08286-fb8e-49c1-a29d-91754ab41a94","Type":"ContainerDied","Data":"88eb9499ce971579b77dd4ba454b06e17b2fad41d2d44208c049250998d40c41"} Jan 22 10:04:32 crc kubenswrapper[4811]: I0122 10:04:32.270266 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ppfm9" event={"ID":"77c08286-fb8e-49c1-a29d-91754ab41a94","Type":"ContainerDied","Data":"54113504168f25c04120f5a4303917149b1157d71aa1d322fb16c2ee03fd830a"} Jan 22 10:04:32 crc kubenswrapper[4811]: I0122 10:04:32.270284 4811 scope.go:117] "RemoveContainer" containerID="88eb9499ce971579b77dd4ba454b06e17b2fad41d2d44208c049250998d40c41" Jan 22 10:04:32 crc kubenswrapper[4811]: I0122 10:04:32.293370 4811 scope.go:117] "RemoveContainer" containerID="116265326e69712e2285835e80e785213db90f095e34d6bb8eca6e8d38828e44" Jan 22 10:04:32 crc kubenswrapper[4811]: I0122 10:04:32.294602 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ppfm9"] Jan 22 10:04:32 crc kubenswrapper[4811]: I0122 10:04:32.301047 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ppfm9"] Jan 22 10:04:32 crc kubenswrapper[4811]: I0122 10:04:32.312668 4811 scope.go:117] "RemoveContainer" containerID="3a37c4a56598a7208a50504f798077e909e0c53ca9d1182d46bf5a8c9446fb8d" Jan 22 10:04:32 crc kubenswrapper[4811]: I0122 10:04:32.344080 4811 scope.go:117] "RemoveContainer" containerID="88eb9499ce971579b77dd4ba454b06e17b2fad41d2d44208c049250998d40c41" Jan 22 10:04:32 crc kubenswrapper[4811]: E0122 10:04:32.344450 4811 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88eb9499ce971579b77dd4ba454b06e17b2fad41d2d44208c049250998d40c41\": container with ID starting with 88eb9499ce971579b77dd4ba454b06e17b2fad41d2d44208c049250998d40c41 not found: ID does not exist" containerID="88eb9499ce971579b77dd4ba454b06e17b2fad41d2d44208c049250998d40c41" Jan 22 10:04:32 crc kubenswrapper[4811]: I0122 10:04:32.344486 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88eb9499ce971579b77dd4ba454b06e17b2fad41d2d44208c049250998d40c41"} err="failed to get container status \"88eb9499ce971579b77dd4ba454b06e17b2fad41d2d44208c049250998d40c41\": rpc error: code = NotFound desc = could not find container \"88eb9499ce971579b77dd4ba454b06e17b2fad41d2d44208c049250998d40c41\": container with ID starting with 88eb9499ce971579b77dd4ba454b06e17b2fad41d2d44208c049250998d40c41 not found: ID does not exist" Jan 22 10:04:32 crc kubenswrapper[4811]: I0122 10:04:32.344507 4811 scope.go:117] "RemoveContainer" containerID="116265326e69712e2285835e80e785213db90f095e34d6bb8eca6e8d38828e44" Jan 22 10:04:32 crc kubenswrapper[4811]: E0122 10:04:32.344783 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"116265326e69712e2285835e80e785213db90f095e34d6bb8eca6e8d38828e44\": container with ID starting with 116265326e69712e2285835e80e785213db90f095e34d6bb8eca6e8d38828e44 not found: ID does not exist" containerID="116265326e69712e2285835e80e785213db90f095e34d6bb8eca6e8d38828e44" Jan 22 10:04:32 crc kubenswrapper[4811]: I0122 10:04:32.344804 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"116265326e69712e2285835e80e785213db90f095e34d6bb8eca6e8d38828e44"} err="failed to get container status \"116265326e69712e2285835e80e785213db90f095e34d6bb8eca6e8d38828e44\": rpc error: code = NotFound desc = could not find container \"116265326e69712e2285835e80e785213db90f095e34d6bb8eca6e8d38828e44\": container with ID starting with 116265326e69712e2285835e80e785213db90f095e34d6bb8eca6e8d38828e44 not found: ID does not exist" Jan 22 10:04:32 crc kubenswrapper[4811]: I0122 10:04:32.344817 4811 scope.go:117] "RemoveContainer" containerID="3a37c4a56598a7208a50504f798077e909e0c53ca9d1182d46bf5a8c9446fb8d" Jan 22 10:04:32 crc kubenswrapper[4811]: E0122 10:04:32.345806 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a37c4a56598a7208a50504f798077e909e0c53ca9d1182d46bf5a8c9446fb8d\": container with ID starting with 3a37c4a56598a7208a50504f798077e909e0c53ca9d1182d46bf5a8c9446fb8d not found: ID does not exist" containerID="3a37c4a56598a7208a50504f798077e909e0c53ca9d1182d46bf5a8c9446fb8d" Jan 22 10:04:32 crc kubenswrapper[4811]: I0122 10:04:32.345829 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a37c4a56598a7208a50504f798077e909e0c53ca9d1182d46bf5a8c9446fb8d"} err="failed to get container status \"3a37c4a56598a7208a50504f798077e909e0c53ca9d1182d46bf5a8c9446fb8d\": rpc error: code = NotFound desc = could not find container \"3a37c4a56598a7208a50504f798077e909e0c53ca9d1182d46bf5a8c9446fb8d\": container with ID starting with 3a37c4a56598a7208a50504f798077e909e0c53ca9d1182d46bf5a8c9446fb8d not found: ID does not exist" Jan 22 10:04:34 crc kubenswrapper[4811]: I0122 10:04:34.000987 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="77c08286-fb8e-49c1-a29d-91754ab41a94" path="/var/lib/kubelet/pods/77c08286-fb8e-49c1-a29d-91754ab41a94/volumes" Jan 22 10:04:35 crc kubenswrapper[4811]: I0122 10:04:35.501612 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:04:35 crc kubenswrapper[4811]: I0122 10:04:35.501692 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:05:05 crc kubenswrapper[4811]: I0122 10:05:05.501147 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:05:05 crc kubenswrapper[4811]: I0122 10:05:05.502136 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:05:05 crc kubenswrapper[4811]: I0122 10:05:05.502222 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" Jan 22 10:05:05 crc kubenswrapper[4811]: I0122 10:05:05.502686 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"48adbe58c0edc6e2dbd02dd77a6da6c68c74447ea1d3ab9ca56908346b26b622"} pod="openshift-machine-config-operator/machine-config-daemon-txvcq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 10:05:05 crc kubenswrapper[4811]: I0122 10:05:05.502799 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" containerID="cri-o://48adbe58c0edc6e2dbd02dd77a6da6c68c74447ea1d3ab9ca56908346b26b622" gracePeriod=600 Jan 22 10:05:05 crc kubenswrapper[4811]: E0122 10:05:05.624528 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 10:05:06 crc kubenswrapper[4811]: I0122 10:05:06.503923 4811 generic.go:334] "Generic (PLEG): container finished" podID="84068a6b-e189-419b-87f5-f31428f6eafe" containerID="48adbe58c0edc6e2dbd02dd77a6da6c68c74447ea1d3ab9ca56908346b26b622" exitCode=0 Jan 22 10:05:06 crc kubenswrapper[4811]: I0122 10:05:06.503959 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-txvcq" event={"ID":"84068a6b-e189-419b-87f5-f31428f6eafe","Type":"ContainerDied","Data":"48adbe58c0edc6e2dbd02dd77a6da6c68c74447ea1d3ab9ca56908346b26b622"} Jan 22 10:05:06 crc kubenswrapper[4811]: I0122 10:05:06.503987 4811 scope.go:117] "RemoveContainer" containerID="3a06585913d6ba918f6c52903bb7850c2377d5698106e38de260a0e7343ce390" Jan 22 10:05:06 crc kubenswrapper[4811]: I0122 10:05:06.504319 4811 scope.go:117] "RemoveContainer" containerID="48adbe58c0edc6e2dbd02dd77a6da6c68c74447ea1d3ab9ca56908346b26b622" Jan 22 10:05:06 crc kubenswrapper[4811]: E0122 10:05:06.504596 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 10:05:19 crc kubenswrapper[4811]: I0122 10:05:19.992327 4811 scope.go:117] "RemoveContainer" containerID="48adbe58c0edc6e2dbd02dd77a6da6c68c74447ea1d3ab9ca56908346b26b622" Jan 22 10:05:19 crc kubenswrapper[4811]: E0122 10:05:19.992844 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 10:05:34 crc kubenswrapper[4811]: I0122 10:05:34.991928 4811 scope.go:117] "RemoveContainer" containerID="48adbe58c0edc6e2dbd02dd77a6da6c68c74447ea1d3ab9ca56908346b26b622" Jan 22 10:05:34 crc kubenswrapper[4811]: E0122 10:05:34.992441 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 10:05:49 crc kubenswrapper[4811]: I0122 10:05:49.992512 4811 scope.go:117] "RemoveContainer" containerID="48adbe58c0edc6e2dbd02dd77a6da6c68c74447ea1d3ab9ca56908346b26b622" Jan 22 10:05:49 crc kubenswrapper[4811]: E0122 10:05:49.994167 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 10:06:03 crc kubenswrapper[4811]: I0122 10:06:03.996539 4811 scope.go:117] "RemoveContainer" containerID="48adbe58c0edc6e2dbd02dd77a6da6c68c74447ea1d3ab9ca56908346b26b622" Jan 22 10:06:03 crc kubenswrapper[4811]: E0122 10:06:03.997442 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 10:06:16 crc kubenswrapper[4811]: I0122 10:06:16.992814 4811 scope.go:117] "RemoveContainer" containerID="48adbe58c0edc6e2dbd02dd77a6da6c68c74447ea1d3ab9ca56908346b26b622" Jan 22 10:06:16 crc kubenswrapper[4811]: E0122 10:06:16.994018 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 10:06:31 crc kubenswrapper[4811]: I0122 10:06:31.993554 4811 scope.go:117] "RemoveContainer" containerID="48adbe58c0edc6e2dbd02dd77a6da6c68c74447ea1d3ab9ca56908346b26b622" Jan 22 10:06:31 crc kubenswrapper[4811]: E0122 10:06:31.994754 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 10:06:45 crc kubenswrapper[4811]: I0122 10:06:45.992758 4811 scope.go:117] "RemoveContainer" containerID="48adbe58c0edc6e2dbd02dd77a6da6c68c74447ea1d3ab9ca56908346b26b622" Jan 22 10:06:45 crc kubenswrapper[4811]: E0122 10:06:45.993714 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 10:06:56 crc kubenswrapper[4811]: I0122 10:06:56.992479 4811 scope.go:117] "RemoveContainer" containerID="48adbe58c0edc6e2dbd02dd77a6da6c68c74447ea1d3ab9ca56908346b26b622" Jan 22 10:06:56 crc kubenswrapper[4811]: E0122 10:06:56.993228 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 10:07:10 crc kubenswrapper[4811]: I0122 10:07:10.991858 4811 scope.go:117] "RemoveContainer" containerID="48adbe58c0edc6e2dbd02dd77a6da6c68c74447ea1d3ab9ca56908346b26b622" Jan 22 10:07:10 crc kubenswrapper[4811]: E0122 10:07:10.992755 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" 
podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 10:07:23 crc kubenswrapper[4811]: I0122 10:07:23.995052 4811 scope.go:117] "RemoveContainer" containerID="48adbe58c0edc6e2dbd02dd77a6da6c68c74447ea1d3ab9ca56908346b26b622" Jan 22 10:07:23 crc kubenswrapper[4811]: E0122 10:07:23.995825 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 10:07:33 crc kubenswrapper[4811]: I0122 10:07:33.592898 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pzwr8"] Jan 22 10:07:33 crc kubenswrapper[4811]: E0122 10:07:33.594703 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77c08286-fb8e-49c1-a29d-91754ab41a94" containerName="registry-server" Jan 22 10:07:33 crc kubenswrapper[4811]: I0122 10:07:33.594723 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="77c08286-fb8e-49c1-a29d-91754ab41a94" containerName="registry-server" Jan 22 10:07:33 crc kubenswrapper[4811]: E0122 10:07:33.594759 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77c08286-fb8e-49c1-a29d-91754ab41a94" containerName="extract-utilities" Jan 22 10:07:33 crc kubenswrapper[4811]: I0122 10:07:33.594766 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="77c08286-fb8e-49c1-a29d-91754ab41a94" containerName="extract-utilities" Jan 22 10:07:33 crc kubenswrapper[4811]: E0122 10:07:33.594777 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77c08286-fb8e-49c1-a29d-91754ab41a94" containerName="extract-content" Jan 22 10:07:33 crc kubenswrapper[4811]: I0122 10:07:33.594784 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="77c08286-fb8e-49c1-a29d-91754ab41a94" containerName="extract-content" Jan 22 10:07:33 crc kubenswrapper[4811]: I0122 10:07:33.606851 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="77c08286-fb8e-49c1-a29d-91754ab41a94" containerName="registry-server" Jan 22 10:07:33 crc kubenswrapper[4811]: I0122 10:07:33.608007 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pzwr8"] Jan 22 10:07:33 crc kubenswrapper[4811]: I0122 10:07:33.608085 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pzwr8" Jan 22 10:07:33 crc kubenswrapper[4811]: I0122 10:07:33.656475 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl2n2\" (UniqueName: \"kubernetes.io/projected/15ceb38e-da30-4720-b42b-321d9701df7f-kube-api-access-vl2n2\") pod \"community-operators-pzwr8\" (UID: \"15ceb38e-da30-4720-b42b-321d9701df7f\") " pod="openshift-marketplace/community-operators-pzwr8" Jan 22 10:07:33 crc kubenswrapper[4811]: I0122 10:07:33.656554 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15ceb38e-da30-4720-b42b-321d9701df7f-utilities\") pod \"community-operators-pzwr8\" (UID: \"15ceb38e-da30-4720-b42b-321d9701df7f\") " pod="openshift-marketplace/community-operators-pzwr8" Jan 22 10:07:33 crc kubenswrapper[4811]: I0122 10:07:33.657043 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15ceb38e-da30-4720-b42b-321d9701df7f-catalog-content\") pod \"community-operators-pzwr8\" (UID: \"15ceb38e-da30-4720-b42b-321d9701df7f\") " pod="openshift-marketplace/community-operators-pzwr8" Jan 22 10:07:33 crc kubenswrapper[4811]: I0122 10:07:33.759592 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vl2n2\" (UniqueName: \"kubernetes.io/projected/15ceb38e-da30-4720-b42b-321d9701df7f-kube-api-access-vl2n2\") pod \"community-operators-pzwr8\" (UID: \"15ceb38e-da30-4720-b42b-321d9701df7f\") " pod="openshift-marketplace/community-operators-pzwr8" Jan 22 10:07:33 crc kubenswrapper[4811]: I0122 10:07:33.760317 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15ceb38e-da30-4720-b42b-321d9701df7f-utilities\") pod \"community-operators-pzwr8\" (UID: \"15ceb38e-da30-4720-b42b-321d9701df7f\") " pod="openshift-marketplace/community-operators-pzwr8" Jan 22 10:07:33 crc kubenswrapper[4811]: I0122 10:07:33.760523 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15ceb38e-da30-4720-b42b-321d9701df7f-catalog-content\") pod \"community-operators-pzwr8\" (UID: \"15ceb38e-da30-4720-b42b-321d9701df7f\") " pod="openshift-marketplace/community-operators-pzwr8" Jan 22 10:07:33 crc kubenswrapper[4811]: I0122 10:07:33.760868 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15ceb38e-da30-4720-b42b-321d9701df7f-catalog-content\") pod \"community-operators-pzwr8\" (UID: \"15ceb38e-da30-4720-b42b-321d9701df7f\") " pod="openshift-marketplace/community-operators-pzwr8" Jan 22 10:07:33 crc kubenswrapper[4811]: I0122 10:07:33.761032 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15ceb38e-da30-4720-b42b-321d9701df7f-utilities\") pod \"community-operators-pzwr8\" (UID: \"15ceb38e-da30-4720-b42b-321d9701df7f\") " pod="openshift-marketplace/community-operators-pzwr8" Jan 22 10:07:33 crc kubenswrapper[4811]: I0122 10:07:33.777048 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl2n2\" (UniqueName: \"kubernetes.io/projected/15ceb38e-da30-4720-b42b-321d9701df7f-kube-api-access-vl2n2\") pod 
\"community-operators-pzwr8\" (UID: \"15ceb38e-da30-4720-b42b-321d9701df7f\") " pod="openshift-marketplace/community-operators-pzwr8" Jan 22 10:07:33 crc kubenswrapper[4811]: I0122 10:07:33.935889 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pzwr8" Jan 22 10:07:34 crc kubenswrapper[4811]: I0122 10:07:34.436655 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pzwr8"] Jan 22 10:07:34 crc kubenswrapper[4811]: I0122 10:07:34.731868 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pzwr8" event={"ID":"15ceb38e-da30-4720-b42b-321d9701df7f","Type":"ContainerStarted","Data":"4a5e08e5e2943603b4e90168dd8f52e14415dc6ebfcf4e6a028870ec51fe9abc"} Jan 22 10:07:35 crc kubenswrapper[4811]: I0122 10:07:35.741049 4811 generic.go:334] "Generic (PLEG): container finished" podID="15ceb38e-da30-4720-b42b-321d9701df7f" containerID="1ee711d5f8e2b9fd03f753eedf12778bc92a92033e082220f63030093216e788" exitCode=0 Jan 22 10:07:35 crc kubenswrapper[4811]: I0122 10:07:35.741135 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pzwr8" event={"ID":"15ceb38e-da30-4720-b42b-321d9701df7f","Type":"ContainerDied","Data":"1ee711d5f8e2b9fd03f753eedf12778bc92a92033e082220f63030093216e788"} Jan 22 10:07:35 crc kubenswrapper[4811]: I0122 10:07:35.743574 4811 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 10:07:35 crc kubenswrapper[4811]: I0122 10:07:35.998893 4811 scope.go:117] "RemoveContainer" containerID="48adbe58c0edc6e2dbd02dd77a6da6c68c74447ea1d3ab9ca56908346b26b622" Jan 22 10:07:35 crc kubenswrapper[4811]: E0122 10:07:35.999293 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 10:07:36 crc kubenswrapper[4811]: I0122 10:07:36.753863 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pzwr8" event={"ID":"15ceb38e-da30-4720-b42b-321d9701df7f","Type":"ContainerStarted","Data":"450e39b29d4ad791c19360f4b5816fa7a1d46279d53f03b1f78eefbc9a5aedde"} Jan 22 10:07:37 crc kubenswrapper[4811]: I0122 10:07:37.763835 4811 generic.go:334] "Generic (PLEG): container finished" podID="15ceb38e-da30-4720-b42b-321d9701df7f" containerID="450e39b29d4ad791c19360f4b5816fa7a1d46279d53f03b1f78eefbc9a5aedde" exitCode=0 Jan 22 10:07:37 crc kubenswrapper[4811]: I0122 10:07:37.763916 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pzwr8" event={"ID":"15ceb38e-da30-4720-b42b-321d9701df7f","Type":"ContainerDied","Data":"450e39b29d4ad791c19360f4b5816fa7a1d46279d53f03b1f78eefbc9a5aedde"} Jan 22 10:07:38 crc kubenswrapper[4811]: I0122 10:07:38.775042 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pzwr8" event={"ID":"15ceb38e-da30-4720-b42b-321d9701df7f","Type":"ContainerStarted","Data":"ba497f630b01379c161552edbfddd53afe596f33fe2da1d93d3a966442532036"} Jan 22 10:07:38 crc kubenswrapper[4811]: I0122 10:07:38.794289 4811 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pzwr8" podStartSLOduration=3.271504341 podStartE2EDuration="5.794271532s" podCreationTimestamp="2026-01-22 10:07:33 +0000 UTC" firstStartedPulling="2026-01-22 10:07:35.743314339 +0000 UTC m=+3700.065501462" lastFinishedPulling="2026-01-22 10:07:38.26608153 +0000 UTC m=+3702.588268653" observedRunningTime="2026-01-22 10:07:38.789888066 +0000 UTC m=+3703.112075189" watchObservedRunningTime="2026-01-22 10:07:38.794271532 +0000 UTC m=+3703.116458656" Jan 22 10:07:43 crc kubenswrapper[4811]: I0122 10:07:43.936900 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pzwr8" Jan 22 10:07:43 crc kubenswrapper[4811]: I0122 10:07:43.937169 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pzwr8" Jan 22 10:07:43 crc kubenswrapper[4811]: I0122 10:07:43.971270 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pzwr8" Jan 22 10:07:44 crc kubenswrapper[4811]: I0122 10:07:44.854463 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pzwr8" Jan 22 10:07:44 crc kubenswrapper[4811]: I0122 10:07:44.934970 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pzwr8"] Jan 22 10:07:46 crc kubenswrapper[4811]: I0122 10:07:46.828714 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pzwr8" podUID="15ceb38e-da30-4720-b42b-321d9701df7f" containerName="registry-server" containerID="cri-o://ba497f630b01379c161552edbfddd53afe596f33fe2da1d93d3a966442532036" gracePeriod=2 Jan 22 10:07:47 crc kubenswrapper[4811]: I0122 10:07:47.412555 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pzwr8" Jan 22 10:07:47 crc kubenswrapper[4811]: I0122 10:07:47.540874 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vl2n2\" (UniqueName: \"kubernetes.io/projected/15ceb38e-da30-4720-b42b-321d9701df7f-kube-api-access-vl2n2\") pod \"15ceb38e-da30-4720-b42b-321d9701df7f\" (UID: \"15ceb38e-da30-4720-b42b-321d9701df7f\") " Jan 22 10:07:47 crc kubenswrapper[4811]: I0122 10:07:47.540977 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15ceb38e-da30-4720-b42b-321d9701df7f-catalog-content\") pod \"15ceb38e-da30-4720-b42b-321d9701df7f\" (UID: \"15ceb38e-da30-4720-b42b-321d9701df7f\") " Jan 22 10:07:47 crc kubenswrapper[4811]: I0122 10:07:47.541120 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15ceb38e-da30-4720-b42b-321d9701df7f-utilities\") pod \"15ceb38e-da30-4720-b42b-321d9701df7f\" (UID: \"15ceb38e-da30-4720-b42b-321d9701df7f\") " Jan 22 10:07:47 crc kubenswrapper[4811]: I0122 10:07:47.542284 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15ceb38e-da30-4720-b42b-321d9701df7f-utilities" (OuterVolumeSpecName: "utilities") pod "15ceb38e-da30-4720-b42b-321d9701df7f" (UID: "15ceb38e-da30-4720-b42b-321d9701df7f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:07:47 crc kubenswrapper[4811]: I0122 10:07:47.562776 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15ceb38e-da30-4720-b42b-321d9701df7f-kube-api-access-vl2n2" (OuterVolumeSpecName: "kube-api-access-vl2n2") pod "15ceb38e-da30-4720-b42b-321d9701df7f" (UID: "15ceb38e-da30-4720-b42b-321d9701df7f"). InnerVolumeSpecName "kube-api-access-vl2n2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:07:47 crc kubenswrapper[4811]: I0122 10:07:47.584216 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15ceb38e-da30-4720-b42b-321d9701df7f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "15ceb38e-da30-4720-b42b-321d9701df7f" (UID: "15ceb38e-da30-4720-b42b-321d9701df7f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:07:47 crc kubenswrapper[4811]: I0122 10:07:47.643352 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vl2n2\" (UniqueName: \"kubernetes.io/projected/15ceb38e-da30-4720-b42b-321d9701df7f-kube-api-access-vl2n2\") on node \"crc\" DevicePath \"\"" Jan 22 10:07:47 crc kubenswrapper[4811]: I0122 10:07:47.643477 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15ceb38e-da30-4720-b42b-321d9701df7f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:07:47 crc kubenswrapper[4811]: I0122 10:07:47.643535 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15ceb38e-da30-4720-b42b-321d9701df7f-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:07:47 crc kubenswrapper[4811]: I0122 10:07:47.836433 4811 generic.go:334] "Generic (PLEG): container finished" podID="15ceb38e-da30-4720-b42b-321d9701df7f" containerID="ba497f630b01379c161552edbfddd53afe596f33fe2da1d93d3a966442532036" exitCode=0 Jan 22 10:07:47 crc kubenswrapper[4811]: I0122 10:07:47.836861 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pzwr8" Jan 22 10:07:47 crc kubenswrapper[4811]: I0122 10:07:47.836956 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pzwr8" event={"ID":"15ceb38e-da30-4720-b42b-321d9701df7f","Type":"ContainerDied","Data":"ba497f630b01379c161552edbfddd53afe596f33fe2da1d93d3a966442532036"} Jan 22 10:07:47 crc kubenswrapper[4811]: I0122 10:07:47.837882 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pzwr8" event={"ID":"15ceb38e-da30-4720-b42b-321d9701df7f","Type":"ContainerDied","Data":"4a5e08e5e2943603b4e90168dd8f52e14415dc6ebfcf4e6a028870ec51fe9abc"} Jan 22 10:07:47 crc kubenswrapper[4811]: I0122 10:07:47.837922 4811 scope.go:117] "RemoveContainer" containerID="ba497f630b01379c161552edbfddd53afe596f33fe2da1d93d3a966442532036" Jan 22 10:07:47 crc kubenswrapper[4811]: I0122 10:07:47.868466 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pzwr8"] Jan 22 10:07:47 crc kubenswrapper[4811]: I0122 10:07:47.875495 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pzwr8"] Jan 22 10:07:47 crc kubenswrapper[4811]: I0122 10:07:47.877918 4811 scope.go:117] "RemoveContainer" containerID="450e39b29d4ad791c19360f4b5816fa7a1d46279d53f03b1f78eefbc9a5aedde" Jan 22 10:07:47 crc kubenswrapper[4811]: I0122 10:07:47.907748 4811 scope.go:117] "RemoveContainer" containerID="1ee711d5f8e2b9fd03f753eedf12778bc92a92033e082220f63030093216e788" Jan 22 10:07:47 crc kubenswrapper[4811]: I0122 10:07:47.936135 4811 scope.go:117] "RemoveContainer" containerID="ba497f630b01379c161552edbfddd53afe596f33fe2da1d93d3a966442532036" Jan 22 10:07:47 crc kubenswrapper[4811]: E0122 10:07:47.936799 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba497f630b01379c161552edbfddd53afe596f33fe2da1d93d3a966442532036\": container with ID starting with ba497f630b01379c161552edbfddd53afe596f33fe2da1d93d3a966442532036 not found: ID does not exist" containerID="ba497f630b01379c161552edbfddd53afe596f33fe2da1d93d3a966442532036" Jan 22 10:07:47 crc kubenswrapper[4811]: I0122 10:07:47.936831 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba497f630b01379c161552edbfddd53afe596f33fe2da1d93d3a966442532036"} err="failed to get container status \"ba497f630b01379c161552edbfddd53afe596f33fe2da1d93d3a966442532036\": rpc error: code = NotFound desc = could not find container \"ba497f630b01379c161552edbfddd53afe596f33fe2da1d93d3a966442532036\": container with ID starting with ba497f630b01379c161552edbfddd53afe596f33fe2da1d93d3a966442532036 not found: ID does not exist" Jan 22 10:07:47 crc kubenswrapper[4811]: I0122 10:07:47.936854 4811 scope.go:117] "RemoveContainer" containerID="450e39b29d4ad791c19360f4b5816fa7a1d46279d53f03b1f78eefbc9a5aedde" Jan 22 10:07:47 crc kubenswrapper[4811]: E0122 10:07:47.937063 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"450e39b29d4ad791c19360f4b5816fa7a1d46279d53f03b1f78eefbc9a5aedde\": container with ID starting with 450e39b29d4ad791c19360f4b5816fa7a1d46279d53f03b1f78eefbc9a5aedde not found: ID does not exist" containerID="450e39b29d4ad791c19360f4b5816fa7a1d46279d53f03b1f78eefbc9a5aedde" Jan 22 10:07:47 crc kubenswrapper[4811]: I0122 10:07:47.937085 4811 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"450e39b29d4ad791c19360f4b5816fa7a1d46279d53f03b1f78eefbc9a5aedde"} err="failed to get container status \"450e39b29d4ad791c19360f4b5816fa7a1d46279d53f03b1f78eefbc9a5aedde\": rpc error: code = NotFound desc = could not find container \"450e39b29d4ad791c19360f4b5816fa7a1d46279d53f03b1f78eefbc9a5aedde\": container with ID starting with 450e39b29d4ad791c19360f4b5816fa7a1d46279d53f03b1f78eefbc9a5aedde not found: ID does not exist" Jan 22 10:07:47 crc kubenswrapper[4811]: I0122 10:07:47.937097 4811 scope.go:117] "RemoveContainer" containerID="1ee711d5f8e2b9fd03f753eedf12778bc92a92033e082220f63030093216e788" Jan 22 10:07:47 crc kubenswrapper[4811]: E0122 10:07:47.937306 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ee711d5f8e2b9fd03f753eedf12778bc92a92033e082220f63030093216e788\": container with ID starting with 1ee711d5f8e2b9fd03f753eedf12778bc92a92033e082220f63030093216e788 not found: ID does not exist" containerID="1ee711d5f8e2b9fd03f753eedf12778bc92a92033e082220f63030093216e788" Jan 22 10:07:47 crc kubenswrapper[4811]: I0122 10:07:47.937336 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ee711d5f8e2b9fd03f753eedf12778bc92a92033e082220f63030093216e788"} err="failed to get container status \"1ee711d5f8e2b9fd03f753eedf12778bc92a92033e082220f63030093216e788\": rpc error: code = NotFound desc = could not find container \"1ee711d5f8e2b9fd03f753eedf12778bc92a92033e082220f63030093216e788\": container with ID starting with 1ee711d5f8e2b9fd03f753eedf12778bc92a92033e082220f63030093216e788 not found: ID does not exist" Jan 22 10:07:47 crc kubenswrapper[4811]: I0122 10:07:47.993718 4811 scope.go:117] "RemoveContainer" containerID="48adbe58c0edc6e2dbd02dd77a6da6c68c74447ea1d3ab9ca56908346b26b622" Jan 22 10:07:47 crc kubenswrapper[4811]: E0122 10:07:47.994337 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 10:07:48 crc kubenswrapper[4811]: I0122 10:07:48.000332 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15ceb38e-da30-4720-b42b-321d9701df7f" path="/var/lib/kubelet/pods/15ceb38e-da30-4720-b42b-321d9701df7f/volumes" Jan 22 10:08:01 crc kubenswrapper[4811]: I0122 10:08:01.397204 4811 scope.go:117] "RemoveContainer" containerID="e42f63fbf7642873cc98fbb33deb39f5376832add747d37647c4367d9abc4d6d" Jan 22 10:08:01 crc kubenswrapper[4811]: I0122 10:08:01.992933 4811 scope.go:117] "RemoveContainer" containerID="48adbe58c0edc6e2dbd02dd77a6da6c68c74447ea1d3ab9ca56908346b26b622" Jan 22 10:08:01 crc kubenswrapper[4811]: E0122 10:08:01.993969 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 10:08:13 crc 
kubenswrapper[4811]: I0122 10:08:13.993076 4811 scope.go:117] "RemoveContainer" containerID="48adbe58c0edc6e2dbd02dd77a6da6c68c74447ea1d3ab9ca56908346b26b622" Jan 22 10:08:13 crc kubenswrapper[4811]: E0122 10:08:13.994584 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 10:08:26 crc kubenswrapper[4811]: I0122 10:08:26.992436 4811 scope.go:117] "RemoveContainer" containerID="48adbe58c0edc6e2dbd02dd77a6da6c68c74447ea1d3ab9ca56908346b26b622" Jan 22 10:08:26 crc kubenswrapper[4811]: E0122 10:08:26.993195 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 10:08:39 crc kubenswrapper[4811]: I0122 10:08:39.992991 4811 scope.go:117] "RemoveContainer" containerID="48adbe58c0edc6e2dbd02dd77a6da6c68c74447ea1d3ab9ca56908346b26b622" Jan 22 10:08:39 crc kubenswrapper[4811]: E0122 10:08:39.993667 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 10:08:54 crc kubenswrapper[4811]: I0122 10:08:54.991696 4811 scope.go:117] "RemoveContainer" containerID="48adbe58c0edc6e2dbd02dd77a6da6c68c74447ea1d3ab9ca56908346b26b622" Jan 22 10:08:54 crc kubenswrapper[4811]: E0122 10:08:54.992349 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 10:09:05 crc kubenswrapper[4811]: I0122 10:09:05.997071 4811 scope.go:117] "RemoveContainer" containerID="48adbe58c0edc6e2dbd02dd77a6da6c68c74447ea1d3ab9ca56908346b26b622" Jan 22 10:09:05 crc kubenswrapper[4811]: E0122 10:09:05.997719 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 10:09:17 crc kubenswrapper[4811]: I0122 10:09:17.991928 4811 scope.go:117] "RemoveContainer" containerID="48adbe58c0edc6e2dbd02dd77a6da6c68c74447ea1d3ab9ca56908346b26b622" Jan 22 10:09:17 crc 
kubenswrapper[4811]: E0122 10:09:17.992454 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 10:09:31 crc kubenswrapper[4811]: I0122 10:09:31.992046 4811 scope.go:117] "RemoveContainer" containerID="48adbe58c0edc6e2dbd02dd77a6da6c68c74447ea1d3ab9ca56908346b26b622" Jan 22 10:09:31 crc kubenswrapper[4811]: E0122 10:09:31.992577 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 10:09:46 crc kubenswrapper[4811]: I0122 10:09:46.992357 4811 scope.go:117] "RemoveContainer" containerID="48adbe58c0edc6e2dbd02dd77a6da6c68c74447ea1d3ab9ca56908346b26b622" Jan 22 10:09:46 crc kubenswrapper[4811]: E0122 10:09:46.993436 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 10:10:01 crc kubenswrapper[4811]: I0122 10:10:01.995212 4811 scope.go:117] "RemoveContainer" containerID="48adbe58c0edc6e2dbd02dd77a6da6c68c74447ea1d3ab9ca56908346b26b622" Jan 22 10:10:01 crc kubenswrapper[4811]: E0122 10:10:01.996011 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 10:10:14 crc kubenswrapper[4811]: I0122 10:10:14.992687 4811 scope.go:117] "RemoveContainer" containerID="48adbe58c0edc6e2dbd02dd77a6da6c68c74447ea1d3ab9ca56908346b26b622" Jan 22 10:10:16 crc kubenswrapper[4811]: I0122 10:10:16.019343 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" event={"ID":"84068a6b-e189-419b-87f5-f31428f6eafe","Type":"ContainerStarted","Data":"539809bd9b7bc1eb866c0acd9085f828d6b9e023dbd6aba5bf80b7ae0173d007"} Jan 22 10:12:20 crc kubenswrapper[4811]: I0122 10:12:20.210746 4811 generic.go:334] "Generic (PLEG): container finished" podID="fc7c94c9-4bc5-4fff-b46a-60facf41a2df" containerID="8e0f244cca1ec15f381eaed5ae8c13ad119d433e9e9abcebc181c9ba0c09f3f6" exitCode=0 Jan 22 10:12:20 crc kubenswrapper[4811]: I0122 10:12:20.210824 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t86ls/must-gather-mmvbb" 
event={"ID":"fc7c94c9-4bc5-4fff-b46a-60facf41a2df","Type":"ContainerDied","Data":"8e0f244cca1ec15f381eaed5ae8c13ad119d433e9e9abcebc181c9ba0c09f3f6"} Jan 22 10:12:20 crc kubenswrapper[4811]: I0122 10:12:20.211957 4811 scope.go:117] "RemoveContainer" containerID="8e0f244cca1ec15f381eaed5ae8c13ad119d433e9e9abcebc181c9ba0c09f3f6" Jan 22 10:12:21 crc kubenswrapper[4811]: I0122 10:12:21.043640 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-t86ls_must-gather-mmvbb_fc7c94c9-4bc5-4fff-b46a-60facf41a2df/gather/0.log" Jan 22 10:12:28 crc kubenswrapper[4811]: I0122 10:12:28.665755 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-t86ls/must-gather-mmvbb"] Jan 22 10:12:28 crc kubenswrapper[4811]: I0122 10:12:28.667104 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-t86ls/must-gather-mmvbb" podUID="fc7c94c9-4bc5-4fff-b46a-60facf41a2df" containerName="copy" containerID="cri-o://20c0f56a69ac7107a0e0116f3709ad1db2dee911429e9c3f7b3462ebc0dba599" gracePeriod=2 Jan 22 10:12:28 crc kubenswrapper[4811]: I0122 10:12:28.673734 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-t86ls/must-gather-mmvbb"] Jan 22 10:12:29 crc kubenswrapper[4811]: I0122 10:12:29.311180 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-t86ls_must-gather-mmvbb_fc7c94c9-4bc5-4fff-b46a-60facf41a2df/copy/0.log" Jan 22 10:12:29 crc kubenswrapper[4811]: I0122 10:12:29.311768 4811 generic.go:334] "Generic (PLEG): container finished" podID="fc7c94c9-4bc5-4fff-b46a-60facf41a2df" containerID="20c0f56a69ac7107a0e0116f3709ad1db2dee911429e9c3f7b3462ebc0dba599" exitCode=143 Jan 22 10:12:29 crc kubenswrapper[4811]: I0122 10:12:29.619373 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-t86ls_must-gather-mmvbb_fc7c94c9-4bc5-4fff-b46a-60facf41a2df/copy/0.log" Jan 22 10:12:29 crc kubenswrapper[4811]: I0122 10:12:29.619903 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-t86ls/must-gather-mmvbb" Jan 22 10:12:29 crc kubenswrapper[4811]: I0122 10:12:29.714381 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlptg\" (UniqueName: \"kubernetes.io/projected/fc7c94c9-4bc5-4fff-b46a-60facf41a2df-kube-api-access-rlptg\") pod \"fc7c94c9-4bc5-4fff-b46a-60facf41a2df\" (UID: \"fc7c94c9-4bc5-4fff-b46a-60facf41a2df\") " Jan 22 10:12:29 crc kubenswrapper[4811]: I0122 10:12:29.714873 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fc7c94c9-4bc5-4fff-b46a-60facf41a2df-must-gather-output\") pod \"fc7c94c9-4bc5-4fff-b46a-60facf41a2df\" (UID: \"fc7c94c9-4bc5-4fff-b46a-60facf41a2df\") " Jan 22 10:12:29 crc kubenswrapper[4811]: I0122 10:12:29.720111 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc7c94c9-4bc5-4fff-b46a-60facf41a2df-kube-api-access-rlptg" (OuterVolumeSpecName: "kube-api-access-rlptg") pod "fc7c94c9-4bc5-4fff-b46a-60facf41a2df" (UID: "fc7c94c9-4bc5-4fff-b46a-60facf41a2df"). InnerVolumeSpecName "kube-api-access-rlptg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:12:29 crc kubenswrapper[4811]: I0122 10:12:29.816108 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlptg\" (UniqueName: \"kubernetes.io/projected/fc7c94c9-4bc5-4fff-b46a-60facf41a2df-kube-api-access-rlptg\") on node \"crc\" DevicePath \"\"" Jan 22 10:12:29 crc kubenswrapper[4811]: I0122 10:12:29.883396 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc7c94c9-4bc5-4fff-b46a-60facf41a2df-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "fc7c94c9-4bc5-4fff-b46a-60facf41a2df" (UID: "fc7c94c9-4bc5-4fff-b46a-60facf41a2df"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:12:29 crc kubenswrapper[4811]: I0122 10:12:29.918369 4811 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fc7c94c9-4bc5-4fff-b46a-60facf41a2df-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 22 10:12:30 crc kubenswrapper[4811]: I0122 10:12:30.001209 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc7c94c9-4bc5-4fff-b46a-60facf41a2df" path="/var/lib/kubelet/pods/fc7c94c9-4bc5-4fff-b46a-60facf41a2df/volumes" Jan 22 10:12:30 crc kubenswrapper[4811]: I0122 10:12:30.320749 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-t86ls_must-gather-mmvbb_fc7c94c9-4bc5-4fff-b46a-60facf41a2df/copy/0.log" Jan 22 10:12:30 crc kubenswrapper[4811]: I0122 10:12:30.321322 4811 scope.go:117] "RemoveContainer" containerID="20c0f56a69ac7107a0e0116f3709ad1db2dee911429e9c3f7b3462ebc0dba599" Jan 22 10:12:30 crc kubenswrapper[4811]: I0122 10:12:30.321380 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t86ls/must-gather-mmvbb" Jan 22 10:12:30 crc kubenswrapper[4811]: I0122 10:12:30.344118 4811 scope.go:117] "RemoveContainer" containerID="8e0f244cca1ec15f381eaed5ae8c13ad119d433e9e9abcebc181c9ba0c09f3f6" Jan 22 10:12:35 crc kubenswrapper[4811]: I0122 10:12:35.501690 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:12:35 crc kubenswrapper[4811]: I0122 10:12:35.502132 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:13:05 crc kubenswrapper[4811]: I0122 10:13:05.501005 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:13:05 crc kubenswrapper[4811]: I0122 10:13:05.501539 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:13:35 crc kubenswrapper[4811]: I0122 10:13:35.502022 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:13:35 crc kubenswrapper[4811]: I0122 10:13:35.502469 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:13:35 crc kubenswrapper[4811]: I0122 10:13:35.502519 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" Jan 22 10:13:35 crc kubenswrapper[4811]: I0122 10:13:35.503110 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"539809bd9b7bc1eb866c0acd9085f828d6b9e023dbd6aba5bf80b7ae0173d007"} pod="openshift-machine-config-operator/machine-config-daemon-txvcq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 10:13:35 crc kubenswrapper[4811]: I0122 10:13:35.503171 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" containerID="cri-o://539809bd9b7bc1eb866c0acd9085f828d6b9e023dbd6aba5bf80b7ae0173d007" gracePeriod=600 Jan 22 10:13:35 crc 
Jan 22 10:13:35 crc kubenswrapper[4811]: I0122 10:13:35.861452 4811 generic.go:334] "Generic (PLEG): container finished" podID="84068a6b-e189-419b-87f5-f31428f6eafe" containerID="539809bd9b7bc1eb866c0acd9085f828d6b9e023dbd6aba5bf80b7ae0173d007" exitCode=0
Jan 22 10:13:35 crc kubenswrapper[4811]: I0122 10:13:35.861496 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" event={"ID":"84068a6b-e189-419b-87f5-f31428f6eafe","Type":"ContainerDied","Data":"539809bd9b7bc1eb866c0acd9085f828d6b9e023dbd6aba5bf80b7ae0173d007"}
Jan 22 10:13:35 crc kubenswrapper[4811]: I0122 10:13:35.861539 4811 scope.go:117] "RemoveContainer" containerID="48adbe58c0edc6e2dbd02dd77a6da6c68c74447ea1d3ab9ca56908346b26b622"
Jan 22 10:13:36 crc kubenswrapper[4811]: I0122 10:13:36.870803 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" event={"ID":"84068a6b-e189-419b-87f5-f31428f6eafe","Type":"ContainerStarted","Data":"538ba6db70fd8847ece098eb9e97c7fe7aec7e85ea8714fabcb34e1d372c3323"}
Jan 22 10:13:49 crc kubenswrapper[4811]: I0122 10:13:49.205966 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-l6gl7/must-gather-vq4qc"]
Jan 22 10:13:49 crc kubenswrapper[4811]: E0122 10:13:49.206753 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc7c94c9-4bc5-4fff-b46a-60facf41a2df" containerName="copy"
Jan 22 10:13:49 crc kubenswrapper[4811]: I0122 10:13:49.206768 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc7c94c9-4bc5-4fff-b46a-60facf41a2df" containerName="copy"
Jan 22 10:13:49 crc kubenswrapper[4811]: E0122 10:13:49.206781 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15ceb38e-da30-4720-b42b-321d9701df7f" containerName="registry-server"
Jan 22 10:13:49 crc kubenswrapper[4811]: I0122 10:13:49.206787 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="15ceb38e-da30-4720-b42b-321d9701df7f" containerName="registry-server"
Jan 22 10:13:49 crc kubenswrapper[4811]: E0122 10:13:49.206800 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc7c94c9-4bc5-4fff-b46a-60facf41a2df" containerName="gather"
Jan 22 10:13:49 crc kubenswrapper[4811]: I0122 10:13:49.206805 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc7c94c9-4bc5-4fff-b46a-60facf41a2df" containerName="gather"
Jan 22 10:13:49 crc kubenswrapper[4811]: E0122 10:13:49.206813 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15ceb38e-da30-4720-b42b-321d9701df7f" containerName="extract-content"
Jan 22 10:13:49 crc kubenswrapper[4811]: I0122 10:13:49.206817 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="15ceb38e-da30-4720-b42b-321d9701df7f" containerName="extract-content"
Jan 22 10:13:49 crc kubenswrapper[4811]: E0122 10:13:49.206824 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15ceb38e-da30-4720-b42b-321d9701df7f" containerName="extract-utilities"
Jan 22 10:13:49 crc kubenswrapper[4811]: I0122 10:13:49.206829 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="15ceb38e-da30-4720-b42b-321d9701df7f" containerName="extract-utilities"
Jan 22 10:13:49 crc kubenswrapper[4811]: I0122 10:13:49.206998 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc7c94c9-4bc5-4fff-b46a-60facf41a2df" containerName="gather"
Jan 22 10:13:49 crc kubenswrapper[4811]: I0122 10:13:49.207007 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="15ceb38e-da30-4720-b42b-321d9701df7f" containerName="registry-server"
Jan 22 10:13:49 crc kubenswrapper[4811]: I0122 10:13:49.207047 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc7c94c9-4bc5-4fff-b46a-60facf41a2df" containerName="copy"
Jan 22 10:13:49 crc kubenswrapper[4811]: I0122 10:13:49.209661 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l6gl7/must-gather-vq4qc"
Jan 22 10:13:49 crc kubenswrapper[4811]: I0122 10:13:49.211555 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-l6gl7"/"default-dockercfg-fbw69"
Jan 22 10:13:49 crc kubenswrapper[4811]: I0122 10:13:49.214980 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-l6gl7"/"openshift-service-ca.crt"
Jan 22 10:13:49 crc kubenswrapper[4811]: I0122 10:13:49.215015 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-l6gl7"/"kube-root-ca.crt"
Jan 22 10:13:49 crc kubenswrapper[4811]: I0122 10:13:49.221762 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3d9af97d-9592-4cf2-bab0-2667139c49a6-must-gather-output\") pod \"must-gather-vq4qc\" (UID: \"3d9af97d-9592-4cf2-bab0-2667139c49a6\") " pod="openshift-must-gather-l6gl7/must-gather-vq4qc"
Jan 22 10:13:49 crc kubenswrapper[4811]: I0122 10:13:49.222061 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgwwz\" (UniqueName: \"kubernetes.io/projected/3d9af97d-9592-4cf2-bab0-2667139c49a6-kube-api-access-tgwwz\") pod \"must-gather-vq4qc\" (UID: \"3d9af97d-9592-4cf2-bab0-2667139c49a6\") " pod="openshift-must-gather-l6gl7/must-gather-vq4qc"
Jan 22 10:13:49 crc kubenswrapper[4811]: I0122 10:13:49.232136 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-l6gl7/must-gather-vq4qc"]
Jan 22 10:13:49 crc kubenswrapper[4811]: I0122 10:13:49.323574 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgwwz\" (UniqueName: \"kubernetes.io/projected/3d9af97d-9592-4cf2-bab0-2667139c49a6-kube-api-access-tgwwz\") pod \"must-gather-vq4qc\" (UID: \"3d9af97d-9592-4cf2-bab0-2667139c49a6\") " pod="openshift-must-gather-l6gl7/must-gather-vq4qc"
Jan 22 10:13:49 crc kubenswrapper[4811]: I0122 10:13:49.323874 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3d9af97d-9592-4cf2-bab0-2667139c49a6-must-gather-output\") pod \"must-gather-vq4qc\" (UID: \"3d9af97d-9592-4cf2-bab0-2667139c49a6\") " pod="openshift-must-gather-l6gl7/must-gather-vq4qc"
Jan 22 10:13:49 crc kubenswrapper[4811]: I0122 10:13:49.324284 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3d9af97d-9592-4cf2-bab0-2667139c49a6-must-gather-output\") pod \"must-gather-vq4qc\" (UID: \"3d9af97d-9592-4cf2-bab0-2667139c49a6\") " pod="openshift-must-gather-l6gl7/must-gather-vq4qc"
Jan 22 10:13:49 crc kubenswrapper[4811]: I0122 10:13:49.343201 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgwwz\" (UniqueName: \"kubernetes.io/projected/3d9af97d-9592-4cf2-bab0-2667139c49a6-kube-api-access-tgwwz\") pod \"must-gather-vq4qc\" (UID: \"3d9af97d-9592-4cf2-bab0-2667139c49a6\") " pod="openshift-must-gather-l6gl7/must-gather-vq4qc"
pod="openshift-must-gather-l6gl7/must-gather-vq4qc" Jan 22 10:13:49 crc kubenswrapper[4811]: I0122 10:13:49.524728 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l6gl7/must-gather-vq4qc" Jan 22 10:13:49 crc kubenswrapper[4811]: I0122 10:13:49.956152 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-l6gl7/must-gather-vq4qc"] Jan 22 10:13:49 crc kubenswrapper[4811]: I0122 10:13:49.977144 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l6gl7/must-gather-vq4qc" event={"ID":"3d9af97d-9592-4cf2-bab0-2667139c49a6","Type":"ContainerStarted","Data":"bc75efb5901698a501d6c19ee01cb407ae912a069b49e040724d41637b458a9b"} Jan 22 10:13:50 crc kubenswrapper[4811]: I0122 10:13:50.987154 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l6gl7/must-gather-vq4qc" event={"ID":"3d9af97d-9592-4cf2-bab0-2667139c49a6","Type":"ContainerStarted","Data":"79603dc68c309d668fb4681e9a7b35416098964fe6b0364660b06727b20661ef"} Jan 22 10:13:50 crc kubenswrapper[4811]: I0122 10:13:50.987519 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l6gl7/must-gather-vq4qc" event={"ID":"3d9af97d-9592-4cf2-bab0-2667139c49a6","Type":"ContainerStarted","Data":"1ba8ce47c4708062574c94c00457f07fbaeac1a0dd462d5e7fd38bb86e51507f"} Jan 22 10:13:51 crc kubenswrapper[4811]: I0122 10:13:51.016210 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-l6gl7/must-gather-vq4qc" podStartSLOduration=2.016197026 podStartE2EDuration="2.016197026s" podCreationTimestamp="2026-01-22 10:13:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:13:51.013903149 +0000 UTC m=+4075.336090273" watchObservedRunningTime="2026-01-22 10:13:51.016197026 +0000 UTC m=+4075.338384149" Jan 22 10:13:53 crc kubenswrapper[4811]: I0122 10:13:53.955507 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-l6gl7/crc-debug-hk9ql"] Jan 22 10:13:53 crc kubenswrapper[4811]: I0122 10:13:53.957037 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-l6gl7/crc-debug-hk9ql" Jan 22 10:13:54 crc kubenswrapper[4811]: I0122 10:13:54.016293 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk9sx\" (UniqueName: \"kubernetes.io/projected/13b6dc4f-1d7d-468f-bc40-85034d842fc0-kube-api-access-kk9sx\") pod \"crc-debug-hk9ql\" (UID: \"13b6dc4f-1d7d-468f-bc40-85034d842fc0\") " pod="openshift-must-gather-l6gl7/crc-debug-hk9ql" Jan 22 10:13:54 crc kubenswrapper[4811]: I0122 10:13:54.016591 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/13b6dc4f-1d7d-468f-bc40-85034d842fc0-host\") pod \"crc-debug-hk9ql\" (UID: \"13b6dc4f-1d7d-468f-bc40-85034d842fc0\") " pod="openshift-must-gather-l6gl7/crc-debug-hk9ql" Jan 22 10:13:54 crc kubenswrapper[4811]: I0122 10:13:54.118551 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk9sx\" (UniqueName: \"kubernetes.io/projected/13b6dc4f-1d7d-468f-bc40-85034d842fc0-kube-api-access-kk9sx\") pod \"crc-debug-hk9ql\" (UID: \"13b6dc4f-1d7d-468f-bc40-85034d842fc0\") " pod="openshift-must-gather-l6gl7/crc-debug-hk9ql" Jan 22 10:13:54 crc kubenswrapper[4811]: I0122 10:13:54.118694 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/13b6dc4f-1d7d-468f-bc40-85034d842fc0-host\") pod \"crc-debug-hk9ql\" (UID: \"13b6dc4f-1d7d-468f-bc40-85034d842fc0\") " pod="openshift-must-gather-l6gl7/crc-debug-hk9ql" Jan 22 10:13:54 crc kubenswrapper[4811]: I0122 10:13:54.118811 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/13b6dc4f-1d7d-468f-bc40-85034d842fc0-host\") pod \"crc-debug-hk9ql\" (UID: \"13b6dc4f-1d7d-468f-bc40-85034d842fc0\") " pod="openshift-must-gather-l6gl7/crc-debug-hk9ql" Jan 22 10:13:54 crc kubenswrapper[4811]: I0122 10:13:54.140846 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk9sx\" (UniqueName: \"kubernetes.io/projected/13b6dc4f-1d7d-468f-bc40-85034d842fc0-kube-api-access-kk9sx\") pod \"crc-debug-hk9ql\" (UID: \"13b6dc4f-1d7d-468f-bc40-85034d842fc0\") " pod="openshift-must-gather-l6gl7/crc-debug-hk9ql" Jan 22 10:13:54 crc kubenswrapper[4811]: I0122 10:13:54.270819 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-l6gl7/crc-debug-hk9ql" Jan 22 10:13:54 crc kubenswrapper[4811]: W0122 10:13:54.295659 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13b6dc4f_1d7d_468f_bc40_85034d842fc0.slice/crio-ffdace76a549b703b1ae21c609a987550864ff21621978b31d5cb2f81904baf2 WatchSource:0}: Error finding container ffdace76a549b703b1ae21c609a987550864ff21621978b31d5cb2f81904baf2: Status 404 returned error can't find the container with id ffdace76a549b703b1ae21c609a987550864ff21621978b31d5cb2f81904baf2 Jan 22 10:13:55 crc kubenswrapper[4811]: I0122 10:13:55.035702 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l6gl7/crc-debug-hk9ql" event={"ID":"13b6dc4f-1d7d-468f-bc40-85034d842fc0","Type":"ContainerStarted","Data":"acdb5fa372c9ef804240f2ba49d5c2c15a5e5ce36d4f818590c9963b6009aaf9"} Jan 22 10:13:55 crc kubenswrapper[4811]: I0122 10:13:55.036185 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l6gl7/crc-debug-hk9ql" event={"ID":"13b6dc4f-1d7d-468f-bc40-85034d842fc0","Type":"ContainerStarted","Data":"ffdace76a549b703b1ae21c609a987550864ff21621978b31d5cb2f81904baf2"} Jan 22 10:13:55 crc kubenswrapper[4811]: I0122 10:13:55.067038 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-l6gl7/crc-debug-hk9ql" podStartSLOduration=2.067027032 podStartE2EDuration="2.067027032s" podCreationTimestamp="2026-01-22 10:13:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:13:55.061379881 +0000 UTC m=+4079.383567004" watchObservedRunningTime="2026-01-22 10:13:55.067027032 +0000 UTC m=+4079.389214154" Jan 22 10:13:56 crc kubenswrapper[4811]: I0122 10:13:56.569382 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-76dcc8f4f4-dv85h_63cdd7a1-0295-4009-8c18-b3b3e24770b3/barbican-api-log/0.log" Jan 22 10:13:56 crc kubenswrapper[4811]: I0122 10:13:56.583707 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-76dcc8f4f4-dv85h_63cdd7a1-0295-4009-8c18-b3b3e24770b3/barbican-api/0.log" Jan 22 10:13:56 crc kubenswrapper[4811]: I0122 10:13:56.614140 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5f966bcc4-4n44q_312cf490-6d44-416e-8238-06667bf8efee/barbican-keystone-listener-log/0.log" Jan 22 10:13:56 crc kubenswrapper[4811]: I0122 10:13:56.619582 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5f966bcc4-4n44q_312cf490-6d44-416e-8238-06667bf8efee/barbican-keystone-listener/0.log" Jan 22 10:13:56 crc kubenswrapper[4811]: I0122 10:13:56.637582 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7c8cfbdccc-5kmj8_220f6baa-23c9-4cf8-b91f-5245734fc341/barbican-worker-log/0.log" Jan 22 10:13:56 crc kubenswrapper[4811]: I0122 10:13:56.645169 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7c8cfbdccc-5kmj8_220f6baa-23c9-4cf8-b91f-5245734fc341/barbican-worker/0.log" Jan 22 10:13:56 crc kubenswrapper[4811]: I0122 10:13:56.694647 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-mffrf_c799f725-4c74-42ab-9217-06e6c0310194/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:13:56 crc 
kubenswrapper[4811]: I0122 10:13:56.727243 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_a18f46ac-7cea-410a-ac94-959fc43823bc/ceilometer-central-agent/0.log" Jan 22 10:13:56 crc kubenswrapper[4811]: I0122 10:13:56.740067 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_a18f46ac-7cea-410a-ac94-959fc43823bc/ceilometer-notification-agent/0.log" Jan 22 10:13:56 crc kubenswrapper[4811]: I0122 10:13:56.745907 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_a18f46ac-7cea-410a-ac94-959fc43823bc/sg-core/0.log" Jan 22 10:13:56 crc kubenswrapper[4811]: I0122 10:13:56.760185 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_a18f46ac-7cea-410a-ac94-959fc43823bc/proxy-httpd/0.log" Jan 22 10:13:56 crc kubenswrapper[4811]: I0122 10:13:56.775895 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-q2556_c61e1813-0266-4558-9a3d-5895a166d67f/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:13:56 crc kubenswrapper[4811]: I0122 10:13:56.788885 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-mk9fn_2994821b-e7da-4315-a718-9cc885e55fa4/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:13:56 crc kubenswrapper[4811]: I0122 10:13:56.806328 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_8191719a-7bd8-44c9-9a24-65074b9bfa10/cinder-api-log/0.log" Jan 22 10:13:56 crc kubenswrapper[4811]: I0122 10:13:56.854527 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_8191719a-7bd8-44c9-9a24-65074b9bfa10/cinder-api/0.log" Jan 22 10:13:57 crc kubenswrapper[4811]: I0122 10:13:57.044292 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_747abd8a-15a3-42fe-b8bd-a74f2e03c00c/cinder-backup/0.log" Jan 22 10:13:57 crc kubenswrapper[4811]: I0122 10:13:57.068366 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_747abd8a-15a3-42fe-b8bd-a74f2e03c00c/probe/0.log" Jan 22 10:13:57 crc kubenswrapper[4811]: I0122 10:13:57.138200 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_82d89bb4-3738-46d0-8268-d14e298c13c8/cinder-scheduler/0.log" Jan 22 10:13:57 crc kubenswrapper[4811]: I0122 10:13:57.170399 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_82d89bb4-3738-46d0-8268-d14e298c13c8/probe/0.log" Jan 22 10:13:57 crc kubenswrapper[4811]: I0122 10:13:57.268400 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_e3a4d222-4fbd-4c90-9bb0-d787f257d7c0/cinder-volume/0.log" Jan 22 10:13:57 crc kubenswrapper[4811]: I0122 10:13:57.293330 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_e3a4d222-4fbd-4c90-9bb0-d787f257d7c0/probe/0.log" Jan 22 10:13:57 crc kubenswrapper[4811]: I0122 10:13:57.310508 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-8jwn7_6251494a-e332-4222-b95c-80c7205dc4ce/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:13:57 crc kubenswrapper[4811]: I0122 10:13:57.345980 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-hnjt7_4baf2862-a8ca-4314-a70a-67e087e5c897/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:14:02 crc kubenswrapper[4811]: I0122 10:14:02.741808 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-595b86679f-c5h4r_4943dd74-260e-4c75-af13-64455ecded8f/dnsmasq-dns/0.log" Jan 22 10:14:02 crc kubenswrapper[4811]: I0122 10:14:02.750454 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-595b86679f-c5h4r_4943dd74-260e-4c75-af13-64455ecded8f/init/0.log" Jan 22 10:14:02 crc kubenswrapper[4811]: I0122 10:14:02.763695 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_4494328e-eef4-42b6-993f-654585a11db3/glance-log/0.log" Jan 22 10:14:02 crc kubenswrapper[4811]: I0122 10:14:02.780203 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_4494328e-eef4-42b6-993f-654585a11db3/glance-httpd/0.log" Jan 22 10:14:02 crc kubenswrapper[4811]: I0122 10:14:02.791074 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_4ff328b8-5b8b-4a66-85e9-b083d86f2811/glance-log/0.log" Jan 22 10:14:02 crc kubenswrapper[4811]: I0122 10:14:02.812394 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_4ff328b8-5b8b-4a66-85e9-b083d86f2811/glance-httpd/0.log" Jan 22 10:14:03 crc kubenswrapper[4811]: I0122 10:14:03.024911 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-f75b46fc8-4l2b8_9c9eef01-268a-4d3c-b3c3-f30cd80694e0/horizon-log/0.log" Jan 22 10:14:03 crc kubenswrapper[4811]: I0122 10:14:03.112019 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-f75b46fc8-4l2b8_9c9eef01-268a-4d3c-b3c3-f30cd80694e0/horizon/0.log" Jan 22 10:14:03 crc kubenswrapper[4811]: I0122 10:14:03.149638 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-5m56f_d2d7b6d9-f9f5-4548-a6c3-01248c076247/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:14:03 crc kubenswrapper[4811]: I0122 10:14:03.181554 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-kctxk_1d8c1630-ca31-4da8-a66d-54d6649558d4/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:14:03 crc kubenswrapper[4811]: I0122 10:14:03.357512 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5bf9c84c75-rbgml_42068723-76f8-4a1a-8210-f0a70f10897a/keystone-api/0.log" Jan 22 10:14:03 crc kubenswrapper[4811]: I0122 10:14:03.368492 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29484601-6fs6b_f7ffe266-6663-41bb-a7f3-3e7807cd62e4/keystone-cron/0.log" Jan 22 10:14:03 crc kubenswrapper[4811]: I0122 10:14:03.377548 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_8b61b8e9-fda4-46d3-a494-f3e804e7f4d4/kube-state-metrics/0.log" Jan 22 10:14:03 crc kubenswrapper[4811]: I0122 10:14:03.433206 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-hklz6_0f4688b1-29e2-475b-80c0-63afbc3b1afa/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:14:03 crc kubenswrapper[4811]: I0122 10:14:03.445259 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_manila-api-0_8c25067a-ed34-4109-b8f4-d82320dedb05/manila-api-log/0.log" Jan 22 10:14:03 crc kubenswrapper[4811]: I0122 10:14:03.609214 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_8c25067a-ed34-4109-b8f4-d82320dedb05/manila-api/0.log" Jan 22 10:14:03 crc kubenswrapper[4811]: I0122 10:14:03.733269 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_4acbe231-7f63-499d-8813-f7a18c9d70fa/manila-scheduler/0.log" Jan 22 10:14:03 crc kubenswrapper[4811]: I0122 10:14:03.742366 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_4acbe231-7f63-499d-8813-f7a18c9d70fa/probe/0.log" Jan 22 10:14:03 crc kubenswrapper[4811]: I0122 10:14:03.820485 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_f3e1d3e9-c984-442b-8a77-28b88a934ebc/manila-share/0.log" Jan 22 10:14:03 crc kubenswrapper[4811]: I0122 10:14:03.827209 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_f3e1d3e9-c984-442b-8a77-28b88a934ebc/probe/0.log" Jan 22 10:14:08 crc kubenswrapper[4811]: I0122 10:14:08.415383 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2lbxc"] Jan 22 10:14:08 crc kubenswrapper[4811]: I0122 10:14:08.418060 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2lbxc" Jan 22 10:14:08 crc kubenswrapper[4811]: I0122 10:14:08.440487 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2lbxc"] Jan 22 10:14:08 crc kubenswrapper[4811]: I0122 10:14:08.543437 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgbgn\" (UniqueName: \"kubernetes.io/projected/f23fd9ff-ea30-4e06-ac6d-5791efa5e278-kube-api-access-vgbgn\") pod \"redhat-marketplace-2lbxc\" (UID: \"f23fd9ff-ea30-4e06-ac6d-5791efa5e278\") " pod="openshift-marketplace/redhat-marketplace-2lbxc" Jan 22 10:14:08 crc kubenswrapper[4811]: I0122 10:14:08.543695 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f23fd9ff-ea30-4e06-ac6d-5791efa5e278-catalog-content\") pod \"redhat-marketplace-2lbxc\" (UID: \"f23fd9ff-ea30-4e06-ac6d-5791efa5e278\") " pod="openshift-marketplace/redhat-marketplace-2lbxc" Jan 22 10:14:08 crc kubenswrapper[4811]: I0122 10:14:08.544124 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f23fd9ff-ea30-4e06-ac6d-5791efa5e278-utilities\") pod \"redhat-marketplace-2lbxc\" (UID: \"f23fd9ff-ea30-4e06-ac6d-5791efa5e278\") " pod="openshift-marketplace/redhat-marketplace-2lbxc" Jan 22 10:14:08 crc kubenswrapper[4811]: I0122 10:14:08.650257 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgbgn\" (UniqueName: \"kubernetes.io/projected/f23fd9ff-ea30-4e06-ac6d-5791efa5e278-kube-api-access-vgbgn\") pod \"redhat-marketplace-2lbxc\" (UID: \"f23fd9ff-ea30-4e06-ac6d-5791efa5e278\") " pod="openshift-marketplace/redhat-marketplace-2lbxc" Jan 22 10:14:08 crc kubenswrapper[4811]: I0122 10:14:08.650388 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f23fd9ff-ea30-4e06-ac6d-5791efa5e278-catalog-content\") pod \"redhat-marketplace-2lbxc\" (UID: \"f23fd9ff-ea30-4e06-ac6d-5791efa5e278\") " pod="openshift-marketplace/redhat-marketplace-2lbxc" Jan 22 10:14:08 crc kubenswrapper[4811]: I0122 10:14:08.650615 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f23fd9ff-ea30-4e06-ac6d-5791efa5e278-utilities\") pod \"redhat-marketplace-2lbxc\" (UID: \"f23fd9ff-ea30-4e06-ac6d-5791efa5e278\") " pod="openshift-marketplace/redhat-marketplace-2lbxc" Jan 22 10:14:08 crc kubenswrapper[4811]: I0122 10:14:08.651721 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f23fd9ff-ea30-4e06-ac6d-5791efa5e278-catalog-content\") pod \"redhat-marketplace-2lbxc\" (UID: \"f23fd9ff-ea30-4e06-ac6d-5791efa5e278\") " pod="openshift-marketplace/redhat-marketplace-2lbxc" Jan 22 10:14:08 crc kubenswrapper[4811]: I0122 10:14:08.651909 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f23fd9ff-ea30-4e06-ac6d-5791efa5e278-utilities\") pod \"redhat-marketplace-2lbxc\" (UID: \"f23fd9ff-ea30-4e06-ac6d-5791efa5e278\") " pod="openshift-marketplace/redhat-marketplace-2lbxc" Jan 22 10:14:08 crc kubenswrapper[4811]: I0122 10:14:08.675569 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgbgn\" (UniqueName: \"kubernetes.io/projected/f23fd9ff-ea30-4e06-ac6d-5791efa5e278-kube-api-access-vgbgn\") pod \"redhat-marketplace-2lbxc\" (UID: \"f23fd9ff-ea30-4e06-ac6d-5791efa5e278\") " pod="openshift-marketplace/redhat-marketplace-2lbxc" Jan 22 10:14:08 crc kubenswrapper[4811]: I0122 10:14:08.749485 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2lbxc" Jan 22 10:14:09 crc kubenswrapper[4811]: I0122 10:14:09.299905 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2lbxc"] Jan 22 10:14:10 crc kubenswrapper[4811]: I0122 10:14:10.206376 4811 generic.go:334] "Generic (PLEG): container finished" podID="f23fd9ff-ea30-4e06-ac6d-5791efa5e278" containerID="286318b5a7484053cfda63c4e1443ab1e914fe003d7d05495c38815306c50f83" exitCode=0 Jan 22 10:14:10 crc kubenswrapper[4811]: I0122 10:14:10.206448 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2lbxc" event={"ID":"f23fd9ff-ea30-4e06-ac6d-5791efa5e278","Type":"ContainerDied","Data":"286318b5a7484053cfda63c4e1443ab1e914fe003d7d05495c38815306c50f83"} Jan 22 10:14:10 crc kubenswrapper[4811]: I0122 10:14:10.206682 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2lbxc" event={"ID":"f23fd9ff-ea30-4e06-ac6d-5791efa5e278","Type":"ContainerStarted","Data":"75fdfdcccdfc32403d07571588ac8b832e8f95527e6d768790bc72fecda61e8e"} Jan 22 10:14:10 crc kubenswrapper[4811]: I0122 10:14:10.210565 4811 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 10:14:11 crc kubenswrapper[4811]: I0122 10:14:11.217340 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2lbxc" event={"ID":"f23fd9ff-ea30-4e06-ac6d-5791efa5e278","Type":"ContainerStarted","Data":"a11bf3393df36f28d981f62d2bbf9fe4288987b7663a1faaccc16d66fd023beb"} Jan 22 10:14:12 crc kubenswrapper[4811]: I0122 10:14:12.231361 4811 generic.go:334] "Generic (PLEG): container finished" podID="f23fd9ff-ea30-4e06-ac6d-5791efa5e278" containerID="a11bf3393df36f28d981f62d2bbf9fe4288987b7663a1faaccc16d66fd023beb" exitCode=0 Jan 22 10:14:12 crc kubenswrapper[4811]: I0122 10:14:12.232321 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2lbxc" event={"ID":"f23fd9ff-ea30-4e06-ac6d-5791efa5e278","Type":"ContainerDied","Data":"a11bf3393df36f28d981f62d2bbf9fe4288987b7663a1faaccc16d66fd023beb"} Jan 22 10:14:13 crc kubenswrapper[4811]: I0122 10:14:13.245589 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2lbxc" event={"ID":"f23fd9ff-ea30-4e06-ac6d-5791efa5e278","Type":"ContainerStarted","Data":"0cf7b1e5dd888bb7d09b3959794877ec349a1041476c74b7e045602924c6df16"} Jan 22 10:14:13 crc kubenswrapper[4811]: I0122 10:14:13.268079 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2lbxc" podStartSLOduration=2.728504436 podStartE2EDuration="5.268063124s" podCreationTimestamp="2026-01-22 10:14:08 +0000 UTC" firstStartedPulling="2026-01-22 10:14:10.210276304 +0000 UTC m=+4094.532463427" lastFinishedPulling="2026-01-22 10:14:12.749834992 +0000 UTC m=+4097.072022115" observedRunningTime="2026-01-22 10:14:13.263494347 +0000 UTC m=+4097.585681469" watchObservedRunningTime="2026-01-22 10:14:13.268063124 +0000 UTC m=+4097.590250246" Jan 22 10:14:18 crc kubenswrapper[4811]: I0122 10:14:18.409993 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-rl9k7_346ed4cd-2bb8-470d-a275-6c297994fb3f/controller/0.log" Jan 22 10:14:18 crc kubenswrapper[4811]: I0122 10:14:18.415728 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-6968d8fdc4-rl9k7_346ed4cd-2bb8-470d-a275-6c297994fb3f/kube-rbac-proxy/0.log" Jan 22 10:14:18 crc kubenswrapper[4811]: I0122 10:14:18.456441 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/controller/0.log" Jan 22 10:14:18 crc kubenswrapper[4811]: I0122 10:14:18.750321 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2lbxc" Jan 22 10:14:18 crc kubenswrapper[4811]: I0122 10:14:18.750612 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2lbxc" Jan 22 10:14:19 crc kubenswrapper[4811]: I0122 10:14:19.173899 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2lbxc" Jan 22 10:14:19 crc kubenswrapper[4811]: I0122 10:14:19.349467 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2lbxc" Jan 22 10:14:19 crc kubenswrapper[4811]: I0122 10:14:19.434358 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2lbxc"] Jan 22 10:14:20 crc kubenswrapper[4811]: I0122 10:14:20.553166 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/frr/0.log" Jan 22 10:14:20 crc kubenswrapper[4811]: I0122 10:14:20.564459 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/reloader/0.log" Jan 22 10:14:20 crc kubenswrapper[4811]: I0122 10:14:20.568773 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/frr-metrics/0.log" Jan 22 10:14:20 crc kubenswrapper[4811]: I0122 10:14:20.577243 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/kube-rbac-proxy/0.log" Jan 22 10:14:20 crc kubenswrapper[4811]: I0122 10:14:20.584421 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/kube-rbac-proxy-frr/0.log" Jan 22 10:14:20 crc kubenswrapper[4811]: I0122 10:14:20.592184 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/cp-frr-files/0.log" Jan 22 10:14:20 crc kubenswrapper[4811]: I0122 10:14:20.603448 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/cp-reloader/0.log" Jan 22 10:14:20 crc kubenswrapper[4811]: I0122 10:14:20.611347 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/cp-metrics/0.log" Jan 22 10:14:20 crc kubenswrapper[4811]: I0122 10:14:20.622178 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-zfggh_2f6eae9c-374b-4ac3-b5d7-04267fe9bf73/frr-k8s-webhook-server/0.log" Jan 22 10:14:20 crc kubenswrapper[4811]: I0122 10:14:20.646148 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-64bd67c58d-k58sk_13617657-7245-4223-9b20-03a56378edaf/manager/0.log" Jan 22 10:14:20 crc kubenswrapper[4811]: I0122 10:14:20.658273 4811 log.go:25] "Finished parsing 
log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5bc67d6df-ckhh4_1008e895-ec53-4fdd-9423-bbb4d249a6b9/webhook-server/0.log" Jan 22 10:14:21 crc kubenswrapper[4811]: I0122 10:14:21.060922 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-k88w9_faa36c07-3c7a-4b4a-a04e-58b43a178890/speaker/0.log" Jan 22 10:14:21 crc kubenswrapper[4811]: I0122 10:14:21.067940 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-k88w9_faa36c07-3c7a-4b4a-a04e-58b43a178890/kube-rbac-proxy/0.log" Jan 22 10:14:21 crc kubenswrapper[4811]: I0122 10:14:21.320835 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2lbxc" podUID="f23fd9ff-ea30-4e06-ac6d-5791efa5e278" containerName="registry-server" containerID="cri-o://0cf7b1e5dd888bb7d09b3959794877ec349a1041476c74b7e045602924c6df16" gracePeriod=2 Jan 22 10:14:21 crc kubenswrapper[4811]: I0122 10:14:21.968724 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2lbxc" Jan 22 10:14:22 crc kubenswrapper[4811]: I0122 10:14:22.058256 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f23fd9ff-ea30-4e06-ac6d-5791efa5e278-utilities\") pod \"f23fd9ff-ea30-4e06-ac6d-5791efa5e278\" (UID: \"f23fd9ff-ea30-4e06-ac6d-5791efa5e278\") " Jan 22 10:14:22 crc kubenswrapper[4811]: I0122 10:14:22.058297 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgbgn\" (UniqueName: \"kubernetes.io/projected/f23fd9ff-ea30-4e06-ac6d-5791efa5e278-kube-api-access-vgbgn\") pod \"f23fd9ff-ea30-4e06-ac6d-5791efa5e278\" (UID: \"f23fd9ff-ea30-4e06-ac6d-5791efa5e278\") " Jan 22 10:14:22 crc kubenswrapper[4811]: I0122 10:14:22.058368 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f23fd9ff-ea30-4e06-ac6d-5791efa5e278-catalog-content\") pod \"f23fd9ff-ea30-4e06-ac6d-5791efa5e278\" (UID: \"f23fd9ff-ea30-4e06-ac6d-5791efa5e278\") " Jan 22 10:14:22 crc kubenswrapper[4811]: I0122 10:14:22.059766 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f23fd9ff-ea30-4e06-ac6d-5791efa5e278-utilities" (OuterVolumeSpecName: "utilities") pod "f23fd9ff-ea30-4e06-ac6d-5791efa5e278" (UID: "f23fd9ff-ea30-4e06-ac6d-5791efa5e278"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:14:22 crc kubenswrapper[4811]: I0122 10:14:22.076478 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f23fd9ff-ea30-4e06-ac6d-5791efa5e278-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f23fd9ff-ea30-4e06-ac6d-5791efa5e278" (UID: "f23fd9ff-ea30-4e06-ac6d-5791efa5e278"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:14:22 crc kubenswrapper[4811]: I0122 10:14:22.077391 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f23fd9ff-ea30-4e06-ac6d-5791efa5e278-kube-api-access-vgbgn" (OuterVolumeSpecName: "kube-api-access-vgbgn") pod "f23fd9ff-ea30-4e06-ac6d-5791efa5e278" (UID: "f23fd9ff-ea30-4e06-ac6d-5791efa5e278"). InnerVolumeSpecName "kube-api-access-vgbgn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:14:22 crc kubenswrapper[4811]: I0122 10:14:22.160095 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f23fd9ff-ea30-4e06-ac6d-5791efa5e278-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:14:22 crc kubenswrapper[4811]: I0122 10:14:22.160155 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgbgn\" (UniqueName: \"kubernetes.io/projected/f23fd9ff-ea30-4e06-ac6d-5791efa5e278-kube-api-access-vgbgn\") on node \"crc\" DevicePath \"\"" Jan 22 10:14:22 crc kubenswrapper[4811]: I0122 10:14:22.160170 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f23fd9ff-ea30-4e06-ac6d-5791efa5e278-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:14:22 crc kubenswrapper[4811]: I0122 10:14:22.333049 4811 generic.go:334] "Generic (PLEG): container finished" podID="f23fd9ff-ea30-4e06-ac6d-5791efa5e278" containerID="0cf7b1e5dd888bb7d09b3959794877ec349a1041476c74b7e045602924c6df16" exitCode=0 Jan 22 10:14:22 crc kubenswrapper[4811]: I0122 10:14:22.333251 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2lbxc" event={"ID":"f23fd9ff-ea30-4e06-ac6d-5791efa5e278","Type":"ContainerDied","Data":"0cf7b1e5dd888bb7d09b3959794877ec349a1041476c74b7e045602924c6df16"} Jan 22 10:14:22 crc kubenswrapper[4811]: I0122 10:14:22.333277 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2lbxc" event={"ID":"f23fd9ff-ea30-4e06-ac6d-5791efa5e278","Type":"ContainerDied","Data":"75fdfdcccdfc32403d07571588ac8b832e8f95527e6d768790bc72fecda61e8e"} Jan 22 10:14:22 crc kubenswrapper[4811]: I0122 10:14:22.333294 4811 scope.go:117] "RemoveContainer" containerID="0cf7b1e5dd888bb7d09b3959794877ec349a1041476c74b7e045602924c6df16" Jan 22 10:14:22 crc kubenswrapper[4811]: I0122 10:14:22.333411 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2lbxc" Jan 22 10:14:22 crc kubenswrapper[4811]: I0122 10:14:22.371442 4811 scope.go:117] "RemoveContainer" containerID="a11bf3393df36f28d981f62d2bbf9fe4288987b7663a1faaccc16d66fd023beb" Jan 22 10:14:22 crc kubenswrapper[4811]: I0122 10:14:22.382434 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2lbxc"] Jan 22 10:14:22 crc kubenswrapper[4811]: I0122 10:14:22.399555 4811 scope.go:117] "RemoveContainer" containerID="286318b5a7484053cfda63c4e1443ab1e914fe003d7d05495c38815306c50f83" Jan 22 10:14:22 crc kubenswrapper[4811]: I0122 10:14:22.406035 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2lbxc"] Jan 22 10:14:22 crc kubenswrapper[4811]: I0122 10:14:22.441542 4811 scope.go:117] "RemoveContainer" containerID="0cf7b1e5dd888bb7d09b3959794877ec349a1041476c74b7e045602924c6df16" Jan 22 10:14:22 crc kubenswrapper[4811]: E0122 10:14:22.441847 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cf7b1e5dd888bb7d09b3959794877ec349a1041476c74b7e045602924c6df16\": container with ID starting with 0cf7b1e5dd888bb7d09b3959794877ec349a1041476c74b7e045602924c6df16 not found: ID does not exist" containerID="0cf7b1e5dd888bb7d09b3959794877ec349a1041476c74b7e045602924c6df16" Jan 22 10:14:22 crc kubenswrapper[4811]: I0122 10:14:22.441873 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cf7b1e5dd888bb7d09b3959794877ec349a1041476c74b7e045602924c6df16"} err="failed to get container status \"0cf7b1e5dd888bb7d09b3959794877ec349a1041476c74b7e045602924c6df16\": rpc error: code = NotFound desc = could not find container \"0cf7b1e5dd888bb7d09b3959794877ec349a1041476c74b7e045602924c6df16\": container with ID starting with 0cf7b1e5dd888bb7d09b3959794877ec349a1041476c74b7e045602924c6df16 not found: ID does not exist" Jan 22 10:14:22 crc kubenswrapper[4811]: I0122 10:14:22.441891 4811 scope.go:117] "RemoveContainer" containerID="a11bf3393df36f28d981f62d2bbf9fe4288987b7663a1faaccc16d66fd023beb" Jan 22 10:14:22 crc kubenswrapper[4811]: E0122 10:14:22.451033 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a11bf3393df36f28d981f62d2bbf9fe4288987b7663a1faaccc16d66fd023beb\": container with ID starting with a11bf3393df36f28d981f62d2bbf9fe4288987b7663a1faaccc16d66fd023beb not found: ID does not exist" containerID="a11bf3393df36f28d981f62d2bbf9fe4288987b7663a1faaccc16d66fd023beb" Jan 22 10:14:22 crc kubenswrapper[4811]: I0122 10:14:22.451070 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a11bf3393df36f28d981f62d2bbf9fe4288987b7663a1faaccc16d66fd023beb"} err="failed to get container status \"a11bf3393df36f28d981f62d2bbf9fe4288987b7663a1faaccc16d66fd023beb\": rpc error: code = NotFound desc = could not find container \"a11bf3393df36f28d981f62d2bbf9fe4288987b7663a1faaccc16d66fd023beb\": container with ID starting with a11bf3393df36f28d981f62d2bbf9fe4288987b7663a1faaccc16d66fd023beb not found: ID does not exist" Jan 22 10:14:22 crc kubenswrapper[4811]: I0122 10:14:22.451093 4811 scope.go:117] "RemoveContainer" containerID="286318b5a7484053cfda63c4e1443ab1e914fe003d7d05495c38815306c50f83" Jan 22 10:14:22 crc kubenswrapper[4811]: E0122 10:14:22.454718 4811 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"286318b5a7484053cfda63c4e1443ab1e914fe003d7d05495c38815306c50f83\": container with ID starting with 286318b5a7484053cfda63c4e1443ab1e914fe003d7d05495c38815306c50f83 not found: ID does not exist" containerID="286318b5a7484053cfda63c4e1443ab1e914fe003d7d05495c38815306c50f83" Jan 22 10:14:22 crc kubenswrapper[4811]: I0122 10:14:22.454747 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"286318b5a7484053cfda63c4e1443ab1e914fe003d7d05495c38815306c50f83"} err="failed to get container status \"286318b5a7484053cfda63c4e1443ab1e914fe003d7d05495c38815306c50f83\": rpc error: code = NotFound desc = could not find container \"286318b5a7484053cfda63c4e1443ab1e914fe003d7d05495c38815306c50f83\": container with ID starting with 286318b5a7484053cfda63c4e1443ab1e914fe003d7d05495c38815306c50f83 not found: ID does not exist" Jan 22 10:14:23 crc kubenswrapper[4811]: I0122 10:14:23.343261 4811 generic.go:334] "Generic (PLEG): container finished" podID="13b6dc4f-1d7d-468f-bc40-85034d842fc0" containerID="acdb5fa372c9ef804240f2ba49d5c2c15a5e5ce36d4f818590c9963b6009aaf9" exitCode=0 Jan 22 10:14:23 crc kubenswrapper[4811]: I0122 10:14:23.343320 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l6gl7/crc-debug-hk9ql" event={"ID":"13b6dc4f-1d7d-468f-bc40-85034d842fc0","Type":"ContainerDied","Data":"acdb5fa372c9ef804240f2ba49d5c2c15a5e5ce36d4f818590c9963b6009aaf9"} Jan 22 10:14:24 crc kubenswrapper[4811]: I0122 10:14:24.001716 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f23fd9ff-ea30-4e06-ac6d-5791efa5e278" path="/var/lib/kubelet/pods/f23fd9ff-ea30-4e06-ac6d-5791efa5e278/volumes" Jan 22 10:14:24 crc kubenswrapper[4811]: I0122 10:14:24.452407 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l6gl7/crc-debug-hk9ql" Jan 22 10:14:24 crc kubenswrapper[4811]: I0122 10:14:24.488801 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-l6gl7/crc-debug-hk9ql"] Jan 22 10:14:24 crc kubenswrapper[4811]: I0122 10:14:24.495731 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-l6gl7/crc-debug-hk9ql"] Jan 22 10:14:24 crc kubenswrapper[4811]: I0122 10:14:24.500059 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kk9sx\" (UniqueName: \"kubernetes.io/projected/13b6dc4f-1d7d-468f-bc40-85034d842fc0-kube-api-access-kk9sx\") pod \"13b6dc4f-1d7d-468f-bc40-85034d842fc0\" (UID: \"13b6dc4f-1d7d-468f-bc40-85034d842fc0\") " Jan 22 10:14:24 crc kubenswrapper[4811]: I0122 10:14:24.500107 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/13b6dc4f-1d7d-468f-bc40-85034d842fc0-host\") pod \"13b6dc4f-1d7d-468f-bc40-85034d842fc0\" (UID: \"13b6dc4f-1d7d-468f-bc40-85034d842fc0\") " Jan 22 10:14:24 crc kubenswrapper[4811]: I0122 10:14:24.500945 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13b6dc4f-1d7d-468f-bc40-85034d842fc0-host" (OuterVolumeSpecName: "host") pod "13b6dc4f-1d7d-468f-bc40-85034d842fc0" (UID: "13b6dc4f-1d7d-468f-bc40-85034d842fc0"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:14:24 crc kubenswrapper[4811]: I0122 10:14:24.506647 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13b6dc4f-1d7d-468f-bc40-85034d842fc0-kube-api-access-kk9sx" (OuterVolumeSpecName: "kube-api-access-kk9sx") pod "13b6dc4f-1d7d-468f-bc40-85034d842fc0" (UID: "13b6dc4f-1d7d-468f-bc40-85034d842fc0"). InnerVolumeSpecName "kube-api-access-kk9sx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:14:24 crc kubenswrapper[4811]: I0122 10:14:24.602315 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kk9sx\" (UniqueName: \"kubernetes.io/projected/13b6dc4f-1d7d-468f-bc40-85034d842fc0-kube-api-access-kk9sx\") on node \"crc\" DevicePath \"\"" Jan 22 10:14:24 crc kubenswrapper[4811]: I0122 10:14:24.602339 4811 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/13b6dc4f-1d7d-468f-bc40-85034d842fc0-host\") on node \"crc\" DevicePath \"\"" Jan 22 10:14:25 crc kubenswrapper[4811]: I0122 10:14:25.363061 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffdace76a549b703b1ae21c609a987550864ff21621978b31d5cb2f81904baf2" Jan 22 10:14:25 crc kubenswrapper[4811]: I0122 10:14:25.363297 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l6gl7/crc-debug-hk9ql" Jan 22 10:14:25 crc kubenswrapper[4811]: I0122 10:14:25.721270 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-l6gl7/crc-debug-r7m6m"] Jan 22 10:14:25 crc kubenswrapper[4811]: E0122 10:14:25.721664 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f23fd9ff-ea30-4e06-ac6d-5791efa5e278" containerName="extract-utilities" Jan 22 10:14:25 crc kubenswrapper[4811]: I0122 10:14:25.721679 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f23fd9ff-ea30-4e06-ac6d-5791efa5e278" containerName="extract-utilities" Jan 22 10:14:25 crc kubenswrapper[4811]: E0122 10:14:25.721699 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f23fd9ff-ea30-4e06-ac6d-5791efa5e278" containerName="extract-content" Jan 22 10:14:25 crc kubenswrapper[4811]: I0122 10:14:25.721706 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f23fd9ff-ea30-4e06-ac6d-5791efa5e278" containerName="extract-content" Jan 22 10:14:25 crc kubenswrapper[4811]: E0122 10:14:25.721715 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f23fd9ff-ea30-4e06-ac6d-5791efa5e278" containerName="registry-server" Jan 22 10:14:25 crc kubenswrapper[4811]: I0122 10:14:25.721722 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f23fd9ff-ea30-4e06-ac6d-5791efa5e278" containerName="registry-server" Jan 22 10:14:25 crc kubenswrapper[4811]: E0122 10:14:25.721743 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13b6dc4f-1d7d-468f-bc40-85034d842fc0" containerName="container-00" Jan 22 10:14:25 crc kubenswrapper[4811]: I0122 10:14:25.721748 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="13b6dc4f-1d7d-468f-bc40-85034d842fc0" containerName="container-00" Jan 22 10:14:25 crc kubenswrapper[4811]: I0122 10:14:25.721915 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="13b6dc4f-1d7d-468f-bc40-85034d842fc0" containerName="container-00" Jan 22 10:14:25 crc kubenswrapper[4811]: I0122 10:14:25.721926 4811 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f23fd9ff-ea30-4e06-ac6d-5791efa5e278" containerName="registry-server" Jan 22 10:14:25 crc kubenswrapper[4811]: I0122 10:14:25.722562 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l6gl7/crc-debug-r7m6m" Jan 22 10:14:25 crc kubenswrapper[4811]: I0122 10:14:25.820041 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2whzs\" (UniqueName: \"kubernetes.io/projected/578ddf08-3651-4edb-887f-2ef2251b4ea2-kube-api-access-2whzs\") pod \"crc-debug-r7m6m\" (UID: \"578ddf08-3651-4edb-887f-2ef2251b4ea2\") " pod="openshift-must-gather-l6gl7/crc-debug-r7m6m" Jan 22 10:14:25 crc kubenswrapper[4811]: I0122 10:14:25.820513 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/578ddf08-3651-4edb-887f-2ef2251b4ea2-host\") pod \"crc-debug-r7m6m\" (UID: \"578ddf08-3651-4edb-887f-2ef2251b4ea2\") " pod="openshift-must-gather-l6gl7/crc-debug-r7m6m" Jan 22 10:14:25 crc kubenswrapper[4811]: I0122 10:14:25.921763 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2whzs\" (UniqueName: \"kubernetes.io/projected/578ddf08-3651-4edb-887f-2ef2251b4ea2-kube-api-access-2whzs\") pod \"crc-debug-r7m6m\" (UID: \"578ddf08-3651-4edb-887f-2ef2251b4ea2\") " pod="openshift-must-gather-l6gl7/crc-debug-r7m6m" Jan 22 10:14:25 crc kubenswrapper[4811]: I0122 10:14:25.922024 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/578ddf08-3651-4edb-887f-2ef2251b4ea2-host\") pod \"crc-debug-r7m6m\" (UID: \"578ddf08-3651-4edb-887f-2ef2251b4ea2\") " pod="openshift-must-gather-l6gl7/crc-debug-r7m6m" Jan 22 10:14:25 crc kubenswrapper[4811]: I0122 10:14:25.922109 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/578ddf08-3651-4edb-887f-2ef2251b4ea2-host\") pod \"crc-debug-r7m6m\" (UID: \"578ddf08-3651-4edb-887f-2ef2251b4ea2\") " pod="openshift-must-gather-l6gl7/crc-debug-r7m6m" Jan 22 10:14:25 crc kubenswrapper[4811]: I0122 10:14:25.941957 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2whzs\" (UniqueName: \"kubernetes.io/projected/578ddf08-3651-4edb-887f-2ef2251b4ea2-kube-api-access-2whzs\") pod \"crc-debug-r7m6m\" (UID: \"578ddf08-3651-4edb-887f-2ef2251b4ea2\") " pod="openshift-must-gather-l6gl7/crc-debug-r7m6m" Jan 22 10:14:26 crc kubenswrapper[4811]: I0122 10:14:26.002293 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13b6dc4f-1d7d-468f-bc40-85034d842fc0" path="/var/lib/kubelet/pods/13b6dc4f-1d7d-468f-bc40-85034d842fc0/volumes" Jan 22 10:14:26 crc kubenswrapper[4811]: I0122 10:14:26.043893 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-l6gl7/crc-debug-r7m6m" Jan 22 10:14:26 crc kubenswrapper[4811]: I0122 10:14:26.374489 4811 generic.go:334] "Generic (PLEG): container finished" podID="578ddf08-3651-4edb-887f-2ef2251b4ea2" containerID="2ce8b65cd85e94e9da09a838b8f601019880a08c8ef651b620d80d59899000f3" exitCode=0 Jan 22 10:14:26 crc kubenswrapper[4811]: I0122 10:14:26.374722 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l6gl7/crc-debug-r7m6m" event={"ID":"578ddf08-3651-4edb-887f-2ef2251b4ea2","Type":"ContainerDied","Data":"2ce8b65cd85e94e9da09a838b8f601019880a08c8ef651b620d80d59899000f3"} Jan 22 10:14:26 crc kubenswrapper[4811]: I0122 10:14:26.374851 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l6gl7/crc-debug-r7m6m" event={"ID":"578ddf08-3651-4edb-887f-2ef2251b4ea2","Type":"ContainerStarted","Data":"502fa331d36f65c8de77704836f5ea86e45152ef449282757f41b33352294233"} Jan 22 10:14:26 crc kubenswrapper[4811]: I0122 10:14:26.924612 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-l6gl7/crc-debug-r7m6m"] Jan 22 10:14:26 crc kubenswrapper[4811]: I0122 10:14:26.932010 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-l6gl7/crc-debug-r7m6m"] Jan 22 10:14:27 crc kubenswrapper[4811]: I0122 10:14:27.524162 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l6gl7/crc-debug-r7m6m" Jan 22 10:14:27 crc kubenswrapper[4811]: I0122 10:14:27.557151 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2whzs\" (UniqueName: \"kubernetes.io/projected/578ddf08-3651-4edb-887f-2ef2251b4ea2-kube-api-access-2whzs\") pod \"578ddf08-3651-4edb-887f-2ef2251b4ea2\" (UID: \"578ddf08-3651-4edb-887f-2ef2251b4ea2\") " Jan 22 10:14:27 crc kubenswrapper[4811]: I0122 10:14:27.557192 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/578ddf08-3651-4edb-887f-2ef2251b4ea2-host\") pod \"578ddf08-3651-4edb-887f-2ef2251b4ea2\" (UID: \"578ddf08-3651-4edb-887f-2ef2251b4ea2\") " Jan 22 10:14:27 crc kubenswrapper[4811]: I0122 10:14:27.557299 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/578ddf08-3651-4edb-887f-2ef2251b4ea2-host" (OuterVolumeSpecName: "host") pod "578ddf08-3651-4edb-887f-2ef2251b4ea2" (UID: "578ddf08-3651-4edb-887f-2ef2251b4ea2"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:14:27 crc kubenswrapper[4811]: I0122 10:14:27.557819 4811 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/578ddf08-3651-4edb-887f-2ef2251b4ea2-host\") on node \"crc\" DevicePath \"\"" Jan 22 10:14:27 crc kubenswrapper[4811]: I0122 10:14:27.566811 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/578ddf08-3651-4edb-887f-2ef2251b4ea2-kube-api-access-2whzs" (OuterVolumeSpecName: "kube-api-access-2whzs") pod "578ddf08-3651-4edb-887f-2ef2251b4ea2" (UID: "578ddf08-3651-4edb-887f-2ef2251b4ea2"). InnerVolumeSpecName "kube-api-access-2whzs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:14:27 crc kubenswrapper[4811]: I0122 10:14:27.659687 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2whzs\" (UniqueName: \"kubernetes.io/projected/578ddf08-3651-4edb-887f-2ef2251b4ea2-kube-api-access-2whzs\") on node \"crc\" DevicePath \"\"" Jan 22 10:14:28 crc kubenswrapper[4811]: I0122 10:14:28.000827 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="578ddf08-3651-4edb-887f-2ef2251b4ea2" path="/var/lib/kubelet/pods/578ddf08-3651-4edb-887f-2ef2251b4ea2/volumes" Jan 22 10:14:28 crc kubenswrapper[4811]: I0122 10:14:28.154731 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-l6gl7/crc-debug-cwv6q"] Jan 22 10:14:28 crc kubenswrapper[4811]: E0122 10:14:28.155096 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="578ddf08-3651-4edb-887f-2ef2251b4ea2" containerName="container-00" Jan 22 10:14:28 crc kubenswrapper[4811]: I0122 10:14:28.155115 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="578ddf08-3651-4edb-887f-2ef2251b4ea2" containerName="container-00" Jan 22 10:14:28 crc kubenswrapper[4811]: I0122 10:14:28.155308 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="578ddf08-3651-4edb-887f-2ef2251b4ea2" containerName="container-00" Jan 22 10:14:28 crc kubenswrapper[4811]: I0122 10:14:28.155888 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l6gl7/crc-debug-cwv6q" Jan 22 10:14:28 crc kubenswrapper[4811]: I0122 10:14:28.274292 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/79915706-2b5c-4f39-928b-0acfc198f1d4-host\") pod \"crc-debug-cwv6q\" (UID: \"79915706-2b5c-4f39-928b-0acfc198f1d4\") " pod="openshift-must-gather-l6gl7/crc-debug-cwv6q" Jan 22 10:14:28 crc kubenswrapper[4811]: I0122 10:14:28.274439 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwdct\" (UniqueName: \"kubernetes.io/projected/79915706-2b5c-4f39-928b-0acfc198f1d4-kube-api-access-mwdct\") pod \"crc-debug-cwv6q\" (UID: \"79915706-2b5c-4f39-928b-0acfc198f1d4\") " pod="openshift-must-gather-l6gl7/crc-debug-cwv6q" Jan 22 10:14:28 crc kubenswrapper[4811]: I0122 10:14:28.378293 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/79915706-2b5c-4f39-928b-0acfc198f1d4-host\") pod \"crc-debug-cwv6q\" (UID: \"79915706-2b5c-4f39-928b-0acfc198f1d4\") " pod="openshift-must-gather-l6gl7/crc-debug-cwv6q" Jan 22 10:14:28 crc kubenswrapper[4811]: I0122 10:14:28.378455 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwdct\" (UniqueName: \"kubernetes.io/projected/79915706-2b5c-4f39-928b-0acfc198f1d4-kube-api-access-mwdct\") pod \"crc-debug-cwv6q\" (UID: \"79915706-2b5c-4f39-928b-0acfc198f1d4\") " pod="openshift-must-gather-l6gl7/crc-debug-cwv6q" Jan 22 10:14:28 crc kubenswrapper[4811]: I0122 10:14:28.378875 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/79915706-2b5c-4f39-928b-0acfc198f1d4-host\") pod \"crc-debug-cwv6q\" (UID: \"79915706-2b5c-4f39-928b-0acfc198f1d4\") " pod="openshift-must-gather-l6gl7/crc-debug-cwv6q" Jan 22 10:14:28 crc kubenswrapper[4811]: I0122 10:14:28.394401 4811 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-mwdct\" (UniqueName: \"kubernetes.io/projected/79915706-2b5c-4f39-928b-0acfc198f1d4-kube-api-access-mwdct\") pod \"crc-debug-cwv6q\" (UID: \"79915706-2b5c-4f39-928b-0acfc198f1d4\") " pod="openshift-must-gather-l6gl7/crc-debug-cwv6q" Jan 22 10:14:28 crc kubenswrapper[4811]: I0122 10:14:28.402027 4811 scope.go:117] "RemoveContainer" containerID="2ce8b65cd85e94e9da09a838b8f601019880a08c8ef651b620d80d59899000f3" Jan 22 10:14:28 crc kubenswrapper[4811]: I0122 10:14:28.402151 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l6gl7/crc-debug-r7m6m" Jan 22 10:14:28 crc kubenswrapper[4811]: I0122 10:14:28.476657 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l6gl7/crc-debug-cwv6q" Jan 22 10:14:29 crc kubenswrapper[4811]: I0122 10:14:29.415468 4811 generic.go:334] "Generic (PLEG): container finished" podID="79915706-2b5c-4f39-928b-0acfc198f1d4" containerID="28ea1dde835bdf3a5854c80e17351eb340a2fbf4cf28da7ec3d74eeab98c25a5" exitCode=0 Jan 22 10:14:29 crc kubenswrapper[4811]: I0122 10:14:29.415548 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l6gl7/crc-debug-cwv6q" event={"ID":"79915706-2b5c-4f39-928b-0acfc198f1d4","Type":"ContainerDied","Data":"28ea1dde835bdf3a5854c80e17351eb340a2fbf4cf28da7ec3d74eeab98c25a5"} Jan 22 10:14:29 crc kubenswrapper[4811]: I0122 10:14:29.415847 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l6gl7/crc-debug-cwv6q" event={"ID":"79915706-2b5c-4f39-928b-0acfc198f1d4","Type":"ContainerStarted","Data":"b6a87919682610f954834ac75509b36473b3a5fc85a326543b5dc10c71d269d4"} Jan 22 10:14:29 crc kubenswrapper[4811]: I0122 10:14:29.449163 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-l6gl7/crc-debug-cwv6q"] Jan 22 10:14:29 crc kubenswrapper[4811]: I0122 10:14:29.468950 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-l6gl7/crc-debug-cwv6q"] Jan 22 10:14:30 crc kubenswrapper[4811]: I0122 10:14:30.024504 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_864b7037-2c34-48b6-b75d-38110d9816dc/memcached/0.log" Jan 22 10:14:30 crc kubenswrapper[4811]: I0122 10:14:30.112303 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-864bc8bfcf-nvbzn_3cd66dd0-aadf-46e8-b0d0-48d0563efa06/neutron-api/0.log" Jan 22 10:14:30 crc kubenswrapper[4811]: I0122 10:14:30.162191 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-864bc8bfcf-nvbzn_3cd66dd0-aadf-46e8-b0d0-48d0563efa06/neutron-httpd/0.log" Jan 22 10:14:30 crc kubenswrapper[4811]: I0122 10:14:30.190173 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-lqp9d_3c16de7c-e366-4871-b006-d63a565fb17e/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:14:30 crc kubenswrapper[4811]: I0122 10:14:30.353087 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_dacc4f5b-746c-48bb-9d16-a30e402aa461/nova-api-log/0.log" Jan 22 10:14:30 crc kubenswrapper[4811]: I0122 10:14:30.524962 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-l6gl7/crc-debug-cwv6q" Jan 22 10:14:30 crc kubenswrapper[4811]: I0122 10:14:30.621005 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/79915706-2b5c-4f39-928b-0acfc198f1d4-host\") pod \"79915706-2b5c-4f39-928b-0acfc198f1d4\" (UID: \"79915706-2b5c-4f39-928b-0acfc198f1d4\") " Jan 22 10:14:30 crc kubenswrapper[4811]: I0122 10:14:30.621068 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwdct\" (UniqueName: \"kubernetes.io/projected/79915706-2b5c-4f39-928b-0acfc198f1d4-kube-api-access-mwdct\") pod \"79915706-2b5c-4f39-928b-0acfc198f1d4\" (UID: \"79915706-2b5c-4f39-928b-0acfc198f1d4\") " Jan 22 10:14:30 crc kubenswrapper[4811]: I0122 10:14:30.621851 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79915706-2b5c-4f39-928b-0acfc198f1d4-host" (OuterVolumeSpecName: "host") pod "79915706-2b5c-4f39-928b-0acfc198f1d4" (UID: "79915706-2b5c-4f39-928b-0acfc198f1d4"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:14:30 crc kubenswrapper[4811]: I0122 10:14:30.637835 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79915706-2b5c-4f39-928b-0acfc198f1d4-kube-api-access-mwdct" (OuterVolumeSpecName: "kube-api-access-mwdct") pod "79915706-2b5c-4f39-928b-0acfc198f1d4" (UID: "79915706-2b5c-4f39-928b-0acfc198f1d4"). InnerVolumeSpecName "kube-api-access-mwdct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:14:30 crc kubenswrapper[4811]: I0122 10:14:30.723809 4811 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/79915706-2b5c-4f39-928b-0acfc198f1d4-host\") on node \"crc\" DevicePath \"\"" Jan 22 10:14:30 crc kubenswrapper[4811]: I0122 10:14:30.723837 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwdct\" (UniqueName: \"kubernetes.io/projected/79915706-2b5c-4f39-928b-0acfc198f1d4-kube-api-access-mwdct\") on node \"crc\" DevicePath \"\"" Jan 22 10:14:30 crc kubenswrapper[4811]: I0122 10:14:30.728208 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_dacc4f5b-746c-48bb-9d16-a30e402aa461/nova-api-api/0.log" Jan 22 10:14:30 crc kubenswrapper[4811]: I0122 10:14:30.848890 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_bb52292c-6627-4dbb-a981-f97886db6f7a/nova-cell0-conductor-conductor/0.log" Jan 22 10:14:30 crc kubenswrapper[4811]: I0122 10:14:30.957148 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_f96a2c3b-9e7d-4e81-8744-bcaeca5db4bd/nova-cell1-conductor-conductor/0.log" Jan 22 10:14:31 crc kubenswrapper[4811]: I0122 10:14:31.049523 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_ee19b0fe-2250-40e7-9917-230c53ad0f13/nova-cell1-novncproxy-novncproxy/0.log" Jan 22 10:14:31 crc kubenswrapper[4811]: I0122 10:14:31.105437 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-g62qq_93c19706-aa1a-40b2-96cb-ea74c87866d6/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:14:31 crc kubenswrapper[4811]: I0122 10:14:31.166266 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-metadata-0_9ecace20-035f-4590-a0f0-32914d411253/nova-metadata-log/0.log" Jan 22 10:14:31 crc kubenswrapper[4811]: I0122 10:14:31.447793 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l6gl7/crc-debug-cwv6q" Jan 22 10:14:31 crc kubenswrapper[4811]: I0122 10:14:31.447838 4811 scope.go:117] "RemoveContainer" containerID="28ea1dde835bdf3a5854c80e17351eb340a2fbf4cf28da7ec3d74eeab98c25a5" Jan 22 10:14:32 crc kubenswrapper[4811]: I0122 10:14:32.001300 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79915706-2b5c-4f39-928b-0acfc198f1d4" path="/var/lib/kubelet/pods/79915706-2b5c-4f39-928b-0acfc198f1d4/volumes" Jan 22 10:14:32 crc kubenswrapper[4811]: I0122 10:14:32.047327 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_9ecace20-035f-4590-a0f0-32914d411253/nova-metadata-metadata/0.log" Jan 22 10:14:32 crc kubenswrapper[4811]: I0122 10:14:32.211652 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_d8786fe9-36cf-4e1e-8d24-9eaeba9b1f9b/nova-scheduler-scheduler/0.log" Jan 22 10:14:32 crc kubenswrapper[4811]: I0122 10:14:32.230418 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_db2ecb97-db87-43bf-8ffb-7cbd7460ba19/galera/0.log" Jan 22 10:14:32 crc kubenswrapper[4811]: I0122 10:14:32.242724 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_db2ecb97-db87-43bf-8ffb-7cbd7460ba19/mysql-bootstrap/0.log" Jan 22 10:14:32 crc kubenswrapper[4811]: I0122 10:14:32.266871 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_02bd0635-dfd1-4e78-8fbf-57366ce83cdb/galera/0.log" Jan 22 10:14:32 crc kubenswrapper[4811]: I0122 10:14:32.275908 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_02bd0635-dfd1-4e78-8fbf-57366ce83cdb/mysql-bootstrap/0.log" Jan 22 10:14:32 crc kubenswrapper[4811]: I0122 10:14:32.284436 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_616ddacf-6ee0-46d9-9e03-c234d53b5dd8/openstackclient/0.log" Jan 22 10:14:32 crc kubenswrapper[4811]: I0122 10:14:32.294156 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-kz6sk_374f91f7-413d-4830-afc1-0d75c2946fc3/openstack-network-exporter/0.log" Jan 22 10:14:32 crc kubenswrapper[4811]: I0122 10:14:32.305279 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-nm2bb_29754ede-0901-4bbd-aa87-49a8e93050b9/ovn-controller/0.log" Jan 22 10:14:32 crc kubenswrapper[4811]: I0122 10:14:32.312734 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kdqzr_cbf54ce8-3114-43c1-a1ce-6a13dd41297a/ovsdb-server/0.log" Jan 22 10:14:32 crc kubenswrapper[4811]: I0122 10:14:32.321432 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kdqzr_cbf54ce8-3114-43c1-a1ce-6a13dd41297a/ovs-vswitchd/0.log" Jan 22 10:14:32 crc kubenswrapper[4811]: I0122 10:14:32.329754 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kdqzr_cbf54ce8-3114-43c1-a1ce-6a13dd41297a/ovsdb-server-init/0.log" Jan 22 10:14:32 crc kubenswrapper[4811]: I0122 10:14:32.373612 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-5q54d_8fafd202-523c-44b0-b229-527193721bb1/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:14:32 crc kubenswrapper[4811]: I0122 10:14:32.382755 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_68ff0a82-cf02-4e4e-bf49-b46f3e0f361a/ovn-northd/0.log" Jan 22 10:14:32 crc kubenswrapper[4811]: I0122 10:14:32.389198 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_68ff0a82-cf02-4e4e-bf49-b46f3e0f361a/openstack-network-exporter/0.log" Jan 22 10:14:32 crc kubenswrapper[4811]: I0122 10:14:32.403120 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_4344c0dd-b6d6-4448-b943-0e036ee2098b/ovsdbserver-nb/0.log" Jan 22 10:14:32 crc kubenswrapper[4811]: I0122 10:14:32.407557 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_4344c0dd-b6d6-4448-b943-0e036ee2098b/openstack-network-exporter/0.log" Jan 22 10:14:32 crc kubenswrapper[4811]: I0122 10:14:32.419125 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b/ovsdbserver-sb/0.log" Jan 22 10:14:32 crc kubenswrapper[4811]: I0122 10:14:32.426132 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_d0e5eec8-870c-4be3-8ac6-4ad6698d8a4b/openstack-network-exporter/0.log" Jan 22 10:14:32 crc kubenswrapper[4811]: I0122 10:14:32.494984 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6976f89774-xh5fd_32532baf-c6cc-4f91-91f5-7f81462d369a/placement-log/0.log" Jan 22 10:14:32 crc kubenswrapper[4811]: I0122 10:14:32.548506 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6976f89774-xh5fd_32532baf-c6cc-4f91-91f5-7f81462d369a/placement-api/0.log" Jan 22 10:14:32 crc kubenswrapper[4811]: I0122 10:14:32.570071 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a2ce439e-8652-40cb-9d5d-90913d18bea1/rabbitmq/0.log" Jan 22 10:14:32 crc kubenswrapper[4811]: I0122 10:14:32.573680 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a2ce439e-8652-40cb-9d5d-90913d18bea1/setup-container/0.log" Jan 22 10:14:32 crc kubenswrapper[4811]: I0122 10:14:32.598239 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8/rabbitmq/0.log" Jan 22 10:14:32 crc kubenswrapper[4811]: I0122 10:14:32.601997 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_f7aa6fa4-4d1f-4243-bd9c-8b9caa013bb8/setup-container/0.log" Jan 22 10:14:32 crc kubenswrapper[4811]: I0122 10:14:32.618881 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-rgn6q_84f96cc8-d392-47a2-baab-998459b83025/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:14:32 crc kubenswrapper[4811]: I0122 10:14:32.627485 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-5ddtk_016a5684-671f-4e6a-81dc-15c2a55a6911/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:14:32 crc kubenswrapper[4811]: I0122 10:14:32.641586 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-5n2sj_247204c2-fb25-45be-a1ec-8bc4b64e41d6/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:14:32 crc kubenswrapper[4811]: I0122 10:14:32.654302 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-pn5nw_b3144d30-a0bb-4788-bf66-089587cabbf5/ssh-known-hosts-edpm-deployment/0.log" Jan 22 10:14:32 crc kubenswrapper[4811]: I0122 10:14:32.674908 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_eb23b893-6bb1-4d84-bb05-09c701024b37/tempest-tests-tempest-tests-runner/0.log" Jan 22 10:14:32 crc kubenswrapper[4811]: I0122 10:14:32.681534 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_37849dcb-3091-4827-b55c-97806fb09eef/test-operator-logs-container/0.log" Jan 22 10:14:32 crc kubenswrapper[4811]: I0122 10:14:32.694193 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-f92wm_ca9b0d63-2524-406e-bd65-36224327f50f/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 22 10:14:46 crc kubenswrapper[4811]: I0122 10:14:46.076753 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3078a1b977323d6d8f95a4018ef06f377c198eb1741282ac05e933b60384v48_d014441f-4913-4583-ba8e-b1c20aaeed47/extract/0.log" Jan 22 10:14:46 crc kubenswrapper[4811]: I0122 10:14:46.084766 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3078a1b977323d6d8f95a4018ef06f377c198eb1741282ac05e933b60384v48_d014441f-4913-4583-ba8e-b1c20aaeed47/util/0.log" Jan 22 10:14:46 crc kubenswrapper[4811]: I0122 10:14:46.091572 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3078a1b977323d6d8f95a4018ef06f377c198eb1741282ac05e933b60384v48_d014441f-4913-4583-ba8e-b1c20aaeed47/pull/0.log" Jan 22 10:14:46 crc kubenswrapper[4811]: I0122 10:14:46.156586 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-59dd8b7cbf-pklcs_09ad3a19-244b-4685-8c96-0bee227b6547/manager/0.log" Jan 22 10:14:46 crc kubenswrapper[4811]: I0122 10:14:46.205774 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-rgwhg_e6fe0bc0-30b4-4a2f-b36d-93d5b288ecf8/manager/0.log" Jan 22 10:14:46 crc kubenswrapper[4811]: I0122 10:14:46.217285 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-2ltqr_62aa676a-95ae-40a8-9db5-b5fd24a293c2/manager/0.log" Jan 22 10:14:46 crc kubenswrapper[4811]: I0122 10:14:46.292565 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-26vqb_b0f07719-5203-4d79-82b4-995b8af81a00/manager/0.log" Jan 22 10:14:46 crc kubenswrapper[4811]: I0122 10:14:46.300444 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-vbbnq_e019bc4b-f0e7-4a4f-a42c-1486010a63fd/manager/0.log" Jan 22 10:14:46 crc kubenswrapper[4811]: I0122 10:14:46.317109 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-7p5h9_62a9fc61-630e-4f4d-9788-f21e25ab4dda/manager/0.log" Jan 22 10:14:46 crc kubenswrapper[4811]: I0122 
10:14:46.556113 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-54ccf4f85d-r6z6t_81d4cd92-880c-4806-ab95-fcb009827075/manager/0.log" Jan 22 10:14:46 crc kubenswrapper[4811]: I0122 10:14:46.565990 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-69d6c9f5b8-fx6zn_ce893825-4e8e-4c9b-b37e-a974d7cfda21/manager/0.log" Jan 22 10:14:46 crc kubenswrapper[4811]: I0122 10:14:46.627022 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-99m2t_9dfc4274-a6f5-4dcb-8bcf-b2e0ff337574/manager/0.log" Jan 22 10:14:46 crc kubenswrapper[4811]: I0122 10:14:46.678852 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-4wtlm_688057d8-0445-42c1-b073-83deb026ab4c/manager/0.log" Jan 22 10:14:46 crc kubenswrapper[4811]: I0122 10:14:46.709460 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-kc9m5_02697a04-4401-498c-9b69-ff0b57ce8f4b/manager/0.log" Jan 22 10:14:46 crc kubenswrapper[4811]: I0122 10:14:46.756145 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5d8f59fb49-tll52_c284b86e-5a0e-4bd4-aa67-4e2c208ea4fa/manager/0.log" Jan 22 10:14:46 crc kubenswrapper[4811]: I0122 10:14:46.828181 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-h7wzt_b157cb38-af8a-41bf-a29a-2da5b59aa500/manager/0.log" Jan 22 10:14:46 crc kubenswrapper[4811]: I0122 10:14:46.835711 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-t9djx_a247bb8f-a274-481d-916b-8ad80521af31/manager/0.log" Jan 22 10:14:46 crc kubenswrapper[4811]: I0122 10:14:46.851354 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7fbbcdb4d6qmzp5_7c209919-fd54-40e8-a741-7006cf8dd361/manager/0.log" Jan 22 10:14:46 crc kubenswrapper[4811]: I0122 10:14:46.968406 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5cd76577f9-kn8dt_a01a5eb9-0bef-4a6b-af9e-d71281e2ae34/operator/0.log" Jan 22 10:14:48 crc kubenswrapper[4811]: I0122 10:14:48.076815 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-647bb87bbd-v227g_c0b74933-8fe4-4fb1-82af-eda7df5c3c06/manager/0.log" Jan 22 10:14:48 crc kubenswrapper[4811]: I0122 10:14:48.139649 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-c2sr9_5518af80-1f74-4caf-8bc0-80680646bfca/registry-server/0.log" Jan 22 10:14:48 crc kubenswrapper[4811]: I0122 10:14:48.176680 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-xp5jv_e343c2da-412a-4226-b711-81f83fdbb04b/manager/0.log" Jan 22 10:14:48 crc kubenswrapper[4811]: I0122 10:14:48.201478 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-rvsm7_b579b636-697b-4a23-9de7-1f9a8537eb94/manager/0.log" Jan 22 10:14:48 crc kubenswrapper[4811]: I0122 10:14:48.217775 4811 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-2l9vr_c624375e-a5cf-49b8-a54a-5770a6c7e738/operator/0.log" Jan 22 10:14:48 crc kubenswrapper[4811]: I0122 10:14:48.224298 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-4xkc8_b3409f06-ef57-4717-b3e2-9b4f788fd7f0/manager/0.log" Jan 22 10:14:48 crc kubenswrapper[4811]: I0122 10:14:48.283871 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-627nz_42323d0d-05b6-4a0d-a809-405dec7c2893/manager/0.log" Jan 22 10:14:48 crc kubenswrapper[4811]: I0122 10:14:48.290750 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-mg5cr_d990df50-3df1-46b6-b6df-5b84bf8eeb20/manager/0.log" Jan 22 10:14:48 crc kubenswrapper[4811]: I0122 10:14:48.297991 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5ffb9c6597-kxs2j_15eb97d5-2508-4c32-8b7e-65f1015767cf/manager/0.log" Jan 22 10:14:53 crc kubenswrapper[4811]: I0122 10:14:53.171160 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-9pxnj_8b7094aa-cc4a-49eb-be77-715a4efbc1d0/control-plane-machine-set-operator/0.log" Jan 22 10:14:53 crc kubenswrapper[4811]: I0122 10:14:53.186859 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-rx42r_8a9d91fa-d887-4128-af43-cfe3cad79784/kube-rbac-proxy/0.log" Jan 22 10:14:53 crc kubenswrapper[4811]: I0122 10:14:53.193358 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-rx42r_8a9d91fa-d887-4128-af43-cfe3cad79784/machine-api-operator/0.log" Jan 22 10:15:00 crc kubenswrapper[4811]: I0122 10:15:00.172772 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484615-d8tz6"] Jan 22 10:15:00 crc kubenswrapper[4811]: E0122 10:15:00.173454 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79915706-2b5c-4f39-928b-0acfc198f1d4" containerName="container-00" Jan 22 10:15:00 crc kubenswrapper[4811]: I0122 10:15:00.173466 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="79915706-2b5c-4f39-928b-0acfc198f1d4" containerName="container-00" Jan 22 10:15:00 crc kubenswrapper[4811]: I0122 10:15:00.175894 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="79915706-2b5c-4f39-928b-0acfc198f1d4" containerName="container-00" Jan 22 10:15:00 crc kubenswrapper[4811]: I0122 10:15:00.176512 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-d8tz6" Jan 22 10:15:00 crc kubenswrapper[4811]: I0122 10:15:00.179371 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 22 10:15:00 crc kubenswrapper[4811]: I0122 10:15:00.179786 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 22 10:15:00 crc kubenswrapper[4811]: I0122 10:15:00.185278 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484615-d8tz6"] Jan 22 10:15:00 crc kubenswrapper[4811]: I0122 10:15:00.251419 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74405cbe-2ff9-40e3-b588-08c2a6dfd56a-secret-volume\") pod \"collect-profiles-29484615-d8tz6\" (UID: \"74405cbe-2ff9-40e3-b588-08c2a6dfd56a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-d8tz6" Jan 22 10:15:00 crc kubenswrapper[4811]: I0122 10:15:00.251472 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msttc\" (UniqueName: \"kubernetes.io/projected/74405cbe-2ff9-40e3-b588-08c2a6dfd56a-kube-api-access-msttc\") pod \"collect-profiles-29484615-d8tz6\" (UID: \"74405cbe-2ff9-40e3-b588-08c2a6dfd56a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-d8tz6" Jan 22 10:15:00 crc kubenswrapper[4811]: I0122 10:15:00.251691 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74405cbe-2ff9-40e3-b588-08c2a6dfd56a-config-volume\") pod \"collect-profiles-29484615-d8tz6\" (UID: \"74405cbe-2ff9-40e3-b588-08c2a6dfd56a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-d8tz6" Jan 22 10:15:00 crc kubenswrapper[4811]: I0122 10:15:00.352920 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74405cbe-2ff9-40e3-b588-08c2a6dfd56a-secret-volume\") pod \"collect-profiles-29484615-d8tz6\" (UID: \"74405cbe-2ff9-40e3-b588-08c2a6dfd56a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-d8tz6" Jan 22 10:15:00 crc kubenswrapper[4811]: I0122 10:15:00.352959 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msttc\" (UniqueName: \"kubernetes.io/projected/74405cbe-2ff9-40e3-b588-08c2a6dfd56a-kube-api-access-msttc\") pod \"collect-profiles-29484615-d8tz6\" (UID: \"74405cbe-2ff9-40e3-b588-08c2a6dfd56a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-d8tz6" Jan 22 10:15:00 crc kubenswrapper[4811]: I0122 10:15:00.353020 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74405cbe-2ff9-40e3-b588-08c2a6dfd56a-config-volume\") pod \"collect-profiles-29484615-d8tz6\" (UID: \"74405cbe-2ff9-40e3-b588-08c2a6dfd56a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-d8tz6" Jan 22 10:15:00 crc kubenswrapper[4811]: I0122 10:15:00.353826 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74405cbe-2ff9-40e3-b588-08c2a6dfd56a-config-volume\") pod 
\"collect-profiles-29484615-d8tz6\" (UID: \"74405cbe-2ff9-40e3-b588-08c2a6dfd56a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-d8tz6" Jan 22 10:15:00 crc kubenswrapper[4811]: I0122 10:15:00.358372 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74405cbe-2ff9-40e3-b588-08c2a6dfd56a-secret-volume\") pod \"collect-profiles-29484615-d8tz6\" (UID: \"74405cbe-2ff9-40e3-b588-08c2a6dfd56a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-d8tz6" Jan 22 10:15:00 crc kubenswrapper[4811]: I0122 10:15:00.369167 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msttc\" (UniqueName: \"kubernetes.io/projected/74405cbe-2ff9-40e3-b588-08c2a6dfd56a-kube-api-access-msttc\") pod \"collect-profiles-29484615-d8tz6\" (UID: \"74405cbe-2ff9-40e3-b588-08c2a6dfd56a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-d8tz6" Jan 22 10:15:00 crc kubenswrapper[4811]: I0122 10:15:00.504550 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-d8tz6" Jan 22 10:15:00 crc kubenswrapper[4811]: I0122 10:15:00.909214 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484615-d8tz6"] Jan 22 10:15:01 crc kubenswrapper[4811]: I0122 10:15:01.720950 4811 generic.go:334] "Generic (PLEG): container finished" podID="74405cbe-2ff9-40e3-b588-08c2a6dfd56a" containerID="cfcd93dad51ee83b49d03b07871f1d497d5d278169b7327bcd03a06dd563ddb8" exitCode=0 Jan 22 10:15:01 crc kubenswrapper[4811]: I0122 10:15:01.721071 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-d8tz6" event={"ID":"74405cbe-2ff9-40e3-b588-08c2a6dfd56a","Type":"ContainerDied","Data":"cfcd93dad51ee83b49d03b07871f1d497d5d278169b7327bcd03a06dd563ddb8"} Jan 22 10:15:01 crc kubenswrapper[4811]: I0122 10:15:01.721296 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-d8tz6" event={"ID":"74405cbe-2ff9-40e3-b588-08c2a6dfd56a","Type":"ContainerStarted","Data":"4f99398de39554bfd2beb92aea2092ae01be73946d4415167f47531d9fa00fd1"} Jan 22 10:15:03 crc kubenswrapper[4811]: I0122 10:15:03.233933 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-d8tz6" Jan 22 10:15:03 crc kubenswrapper[4811]: I0122 10:15:03.318344 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74405cbe-2ff9-40e3-b588-08c2a6dfd56a-secret-volume\") pod \"74405cbe-2ff9-40e3-b588-08c2a6dfd56a\" (UID: \"74405cbe-2ff9-40e3-b588-08c2a6dfd56a\") " Jan 22 10:15:03 crc kubenswrapper[4811]: I0122 10:15:03.318587 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74405cbe-2ff9-40e3-b588-08c2a6dfd56a-config-volume\") pod \"74405cbe-2ff9-40e3-b588-08c2a6dfd56a\" (UID: \"74405cbe-2ff9-40e3-b588-08c2a6dfd56a\") " Jan 22 10:15:03 crc kubenswrapper[4811]: I0122 10:15:03.318852 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msttc\" (UniqueName: \"kubernetes.io/projected/74405cbe-2ff9-40e3-b588-08c2a6dfd56a-kube-api-access-msttc\") pod \"74405cbe-2ff9-40e3-b588-08c2a6dfd56a\" (UID: \"74405cbe-2ff9-40e3-b588-08c2a6dfd56a\") " Jan 22 10:15:03 crc kubenswrapper[4811]: I0122 10:15:03.319135 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74405cbe-2ff9-40e3-b588-08c2a6dfd56a-config-volume" (OuterVolumeSpecName: "config-volume") pod "74405cbe-2ff9-40e3-b588-08c2a6dfd56a" (UID: "74405cbe-2ff9-40e3-b588-08c2a6dfd56a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:15:03 crc kubenswrapper[4811]: I0122 10:15:03.319508 4811 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74405cbe-2ff9-40e3-b588-08c2a6dfd56a-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 10:15:03 crc kubenswrapper[4811]: I0122 10:15:03.337938 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74405cbe-2ff9-40e3-b588-08c2a6dfd56a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "74405cbe-2ff9-40e3-b588-08c2a6dfd56a" (UID: "74405cbe-2ff9-40e3-b588-08c2a6dfd56a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:15:03 crc kubenswrapper[4811]: I0122 10:15:03.338976 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74405cbe-2ff9-40e3-b588-08c2a6dfd56a-kube-api-access-msttc" (OuterVolumeSpecName: "kube-api-access-msttc") pod "74405cbe-2ff9-40e3-b588-08c2a6dfd56a" (UID: "74405cbe-2ff9-40e3-b588-08c2a6dfd56a"). InnerVolumeSpecName "kube-api-access-msttc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:15:03 crc kubenswrapper[4811]: I0122 10:15:03.421469 4811 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74405cbe-2ff9-40e3-b588-08c2a6dfd56a-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 10:15:03 crc kubenswrapper[4811]: I0122 10:15:03.421883 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msttc\" (UniqueName: \"kubernetes.io/projected/74405cbe-2ff9-40e3-b588-08c2a6dfd56a-kube-api-access-msttc\") on node \"crc\" DevicePath \"\"" Jan 22 10:15:03 crc kubenswrapper[4811]: I0122 10:15:03.738967 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-d8tz6" event={"ID":"74405cbe-2ff9-40e3-b588-08c2a6dfd56a","Type":"ContainerDied","Data":"4f99398de39554bfd2beb92aea2092ae01be73946d4415167f47531d9fa00fd1"} Jan 22 10:15:03 crc kubenswrapper[4811]: I0122 10:15:03.739016 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f99398de39554bfd2beb92aea2092ae01be73946d4415167f47531d9fa00fd1" Jan 22 10:15:03 crc kubenswrapper[4811]: I0122 10:15:03.739028 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-d8tz6" Jan 22 10:15:04 crc kubenswrapper[4811]: I0122 10:15:04.322110 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484570-hfr5f"] Jan 22 10:15:04 crc kubenswrapper[4811]: I0122 10:15:04.328953 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484570-hfr5f"] Jan 22 10:15:06 crc kubenswrapper[4811]: I0122 10:15:06.001829 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f56432ca-1c38-4f53-884b-77f93f904cc5" path="/var/lib/kubelet/pods/f56432ca-1c38-4f53-884b-77f93f904cc5/volumes" Jan 22 10:15:12 crc kubenswrapper[4811]: I0122 10:15:12.409082 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-xvdbh_97d60b95-f52c-4946-919a-e8fd73251ed5/cert-manager-controller/0.log" Jan 22 10:15:12 crc kubenswrapper[4811]: I0122 10:15:12.422037 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-jbj4c_4dbc71dd-a371-4735-bc7e-6c29eb855fbd/cert-manager-cainjector/0.log" Jan 22 10:15:12 crc kubenswrapper[4811]: I0122 10:15:12.429082 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-sgnhm_34e7fc62-f1c1-41cb-b44c-2ef705fa2a15/cert-manager-webhook/0.log" Jan 22 10:15:15 crc kubenswrapper[4811]: I0122 10:15:15.634174 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nlrpt"] Jan 22 10:15:15 crc kubenswrapper[4811]: E0122 10:15:15.635301 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74405cbe-2ff9-40e3-b588-08c2a6dfd56a" containerName="collect-profiles" Jan 22 10:15:15 crc kubenswrapper[4811]: I0122 10:15:15.635316 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="74405cbe-2ff9-40e3-b588-08c2a6dfd56a" containerName="collect-profiles" Jan 22 10:15:15 crc kubenswrapper[4811]: I0122 10:15:15.635695 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="74405cbe-2ff9-40e3-b588-08c2a6dfd56a" containerName="collect-profiles" Jan 22 10:15:15 crc 
kubenswrapper[4811]: I0122 10:15:15.640560 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nlrpt" Jan 22 10:15:15 crc kubenswrapper[4811]: I0122 10:15:15.691959 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nlrpt"] Jan 22 10:15:15 crc kubenswrapper[4811]: I0122 10:15:15.762331 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e-utilities\") pod \"redhat-operators-nlrpt\" (UID: \"f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e\") " pod="openshift-marketplace/redhat-operators-nlrpt" Jan 22 10:15:15 crc kubenswrapper[4811]: I0122 10:15:15.762381 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e-catalog-content\") pod \"redhat-operators-nlrpt\" (UID: \"f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e\") " pod="openshift-marketplace/redhat-operators-nlrpt" Jan 22 10:15:15 crc kubenswrapper[4811]: I0122 10:15:15.762467 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lltxw\" (UniqueName: \"kubernetes.io/projected/f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e-kube-api-access-lltxw\") pod \"redhat-operators-nlrpt\" (UID: \"f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e\") " pod="openshift-marketplace/redhat-operators-nlrpt" Jan 22 10:15:15 crc kubenswrapper[4811]: I0122 10:15:15.864216 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e-utilities\") pod \"redhat-operators-nlrpt\" (UID: \"f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e\") " pod="openshift-marketplace/redhat-operators-nlrpt" Jan 22 10:15:15 crc kubenswrapper[4811]: I0122 10:15:15.864263 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e-catalog-content\") pod \"redhat-operators-nlrpt\" (UID: \"f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e\") " pod="openshift-marketplace/redhat-operators-nlrpt" Jan 22 10:15:15 crc kubenswrapper[4811]: I0122 10:15:15.864344 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lltxw\" (UniqueName: \"kubernetes.io/projected/f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e-kube-api-access-lltxw\") pod \"redhat-operators-nlrpt\" (UID: \"f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e\") " pod="openshift-marketplace/redhat-operators-nlrpt" Jan 22 10:15:15 crc kubenswrapper[4811]: I0122 10:15:15.864764 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e-catalog-content\") pod \"redhat-operators-nlrpt\" (UID: \"f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e\") " pod="openshift-marketplace/redhat-operators-nlrpt" Jan 22 10:15:15 crc kubenswrapper[4811]: I0122 10:15:15.864777 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e-utilities\") pod \"redhat-operators-nlrpt\" (UID: \"f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e\") " pod="openshift-marketplace/redhat-operators-nlrpt" Jan 22 10:15:16 crc kubenswrapper[4811]: I0122 
10:15:16.147533 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lltxw\" (UniqueName: \"kubernetes.io/projected/f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e-kube-api-access-lltxw\") pod \"redhat-operators-nlrpt\" (UID: \"f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e\") " pod="openshift-marketplace/redhat-operators-nlrpt" Jan 22 10:15:16 crc kubenswrapper[4811]: I0122 10:15:16.257770 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nlrpt" Jan 22 10:15:16 crc kubenswrapper[4811]: I0122 10:15:16.703336 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nlrpt"] Jan 22 10:15:16 crc kubenswrapper[4811]: I0122 10:15:16.840781 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlrpt" event={"ID":"f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e","Type":"ContainerStarted","Data":"e926624e91785c9550281e3c6a9c8c917b4ab6a7f930ef99f9cd2e0963afc1d2"} Jan 22 10:15:16 crc kubenswrapper[4811]: I0122 10:15:16.841004 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlrpt" event={"ID":"f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e","Type":"ContainerStarted","Data":"8223b7d59f8053410194abf1ea1d5d105cf52453d29fcea6312dd3e1d2b963cf"} Jan 22 10:15:17 crc kubenswrapper[4811]: I0122 10:15:17.025470 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gln6s"] Jan 22 10:15:17 crc kubenswrapper[4811]: I0122 10:15:17.027392 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gln6s" Jan 22 10:15:17 crc kubenswrapper[4811]: I0122 10:15:17.039221 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gln6s"] Jan 22 10:15:17 crc kubenswrapper[4811]: I0122 10:15:17.195672 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be4c5e24-9f96-4205-9484-8ae0c6ff3842-catalog-content\") pod \"certified-operators-gln6s\" (UID: \"be4c5e24-9f96-4205-9484-8ae0c6ff3842\") " pod="openshift-marketplace/certified-operators-gln6s" Jan 22 10:15:17 crc kubenswrapper[4811]: I0122 10:15:17.195759 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmgsq\" (UniqueName: \"kubernetes.io/projected/be4c5e24-9f96-4205-9484-8ae0c6ff3842-kube-api-access-kmgsq\") pod \"certified-operators-gln6s\" (UID: \"be4c5e24-9f96-4205-9484-8ae0c6ff3842\") " pod="openshift-marketplace/certified-operators-gln6s" Jan 22 10:15:17 crc kubenswrapper[4811]: I0122 10:15:17.195952 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be4c5e24-9f96-4205-9484-8ae0c6ff3842-utilities\") pod \"certified-operators-gln6s\" (UID: \"be4c5e24-9f96-4205-9484-8ae0c6ff3842\") " pod="openshift-marketplace/certified-operators-gln6s" Jan 22 10:15:17 crc kubenswrapper[4811]: I0122 10:15:17.297280 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be4c5e24-9f96-4205-9484-8ae0c6ff3842-catalog-content\") pod \"certified-operators-gln6s\" (UID: \"be4c5e24-9f96-4205-9484-8ae0c6ff3842\") " pod="openshift-marketplace/certified-operators-gln6s" Jan 22 10:15:17 crc 
kubenswrapper[4811]: I0122 10:15:17.297343 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmgsq\" (UniqueName: \"kubernetes.io/projected/be4c5e24-9f96-4205-9484-8ae0c6ff3842-kube-api-access-kmgsq\") pod \"certified-operators-gln6s\" (UID: \"be4c5e24-9f96-4205-9484-8ae0c6ff3842\") " pod="openshift-marketplace/certified-operators-gln6s" Jan 22 10:15:17 crc kubenswrapper[4811]: I0122 10:15:17.297448 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be4c5e24-9f96-4205-9484-8ae0c6ff3842-utilities\") pod \"certified-operators-gln6s\" (UID: \"be4c5e24-9f96-4205-9484-8ae0c6ff3842\") " pod="openshift-marketplace/certified-operators-gln6s" Jan 22 10:15:17 crc kubenswrapper[4811]: I0122 10:15:17.298207 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be4c5e24-9f96-4205-9484-8ae0c6ff3842-catalog-content\") pod \"certified-operators-gln6s\" (UID: \"be4c5e24-9f96-4205-9484-8ae0c6ff3842\") " pod="openshift-marketplace/certified-operators-gln6s" Jan 22 10:15:17 crc kubenswrapper[4811]: I0122 10:15:17.298256 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be4c5e24-9f96-4205-9484-8ae0c6ff3842-utilities\") pod \"certified-operators-gln6s\" (UID: \"be4c5e24-9f96-4205-9484-8ae0c6ff3842\") " pod="openshift-marketplace/certified-operators-gln6s" Jan 22 10:15:17 crc kubenswrapper[4811]: I0122 10:15:17.324281 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmgsq\" (UniqueName: \"kubernetes.io/projected/be4c5e24-9f96-4205-9484-8ae0c6ff3842-kube-api-access-kmgsq\") pod \"certified-operators-gln6s\" (UID: \"be4c5e24-9f96-4205-9484-8ae0c6ff3842\") " pod="openshift-marketplace/certified-operators-gln6s" Jan 22 10:15:17 crc kubenswrapper[4811]: I0122 10:15:17.340919 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gln6s"
Jan 22 10:15:17 crc kubenswrapper[4811]: I0122 10:15:17.751683 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gln6s"]
Jan 22 10:15:17 crc kubenswrapper[4811]: W0122 10:15:17.755156 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe4c5e24_9f96_4205_9484_8ae0c6ff3842.slice/crio-5a40cae827ed823468e3e8a2dc25f405cc206540d35ddcf3587bcae23872e570 WatchSource:0}: Error finding container 5a40cae827ed823468e3e8a2dc25f405cc206540d35ddcf3587bcae23872e570: Status 404 returned error can't find the container with id 5a40cae827ed823468e3e8a2dc25f405cc206540d35ddcf3587bcae23872e570
Jan 22 10:15:17 crc kubenswrapper[4811]: I0122 10:15:17.851905 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlrpt" event={"ID":"f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e","Type":"ContainerDied","Data":"e926624e91785c9550281e3c6a9c8c917b4ab6a7f930ef99f9cd2e0963afc1d2"}
Jan 22 10:15:17 crc kubenswrapper[4811]: I0122 10:15:17.852592 4811 generic.go:334] "Generic (PLEG): container finished" podID="f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e" containerID="e926624e91785c9550281e3c6a9c8c917b4ab6a7f930ef99f9cd2e0963afc1d2" exitCode=0
Jan 22 10:15:17 crc kubenswrapper[4811]: I0122 10:15:17.855790 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gln6s" event={"ID":"be4c5e24-9f96-4205-9484-8ae0c6ff3842","Type":"ContainerStarted","Data":"5a40cae827ed823468e3e8a2dc25f405cc206540d35ddcf3587bcae23872e570"}
Jan 22 10:15:18 crc kubenswrapper[4811]: I0122 10:15:18.527542 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-22qhf_d63fcc2e-ef3c-4a10-9444-43070aa0dc77/nmstate-console-plugin/0.log"
Jan 22 10:15:18 crc kubenswrapper[4811]: I0122 10:15:18.542850 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-tvjnz_74fc22de-195f-452c-b18c-f12c53f2465f/nmstate-handler/0.log"
Jan 22 10:15:18 crc kubenswrapper[4811]: I0122 10:15:18.559187 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-nmrg4_66e8ec28-33fd-440b-9064-dd5c40cf4b61/nmstate-metrics/0.log"
Jan 22 10:15:18 crc kubenswrapper[4811]: I0122 10:15:18.564888 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-nmrg4_66e8ec28-33fd-440b-9064-dd5c40cf4b61/kube-rbac-proxy/0.log"
Jan 22 10:15:18 crc kubenswrapper[4811]: I0122 10:15:18.583263 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-v76qn_952cfa08-4a5f-43b8-aa83-58839cc92523/nmstate-operator/0.log"
Jan 22 10:15:18 crc kubenswrapper[4811]: I0122 10:15:18.593142 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-tnk97_9e9c9633-f916-440c-b02c-5bb58eb51e76/nmstate-webhook/0.log"
Jan 22 10:15:18 crc kubenswrapper[4811]: I0122 10:15:18.884570 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlrpt" event={"ID":"f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e","Type":"ContainerStarted","Data":"05968ea3c10f43a42a1899c03d062b290949438e1aa746896cd662ee4cbde25a"}
Jan 22 10:15:18 crc kubenswrapper[4811]: I0122 10:15:18.886962 4811 generic.go:334] "Generic (PLEG): container finished" podID="be4c5e24-9f96-4205-9484-8ae0c6ff3842" containerID="7074025de1a27d300619ef2eccfb51db6e8b849bf89aff29ffce9d5b5a2fac38" exitCode=0
Jan 22 10:15:18 crc kubenswrapper[4811]: I0122 10:15:18.887033 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gln6s" event={"ID":"be4c5e24-9f96-4205-9484-8ae0c6ff3842","Type":"ContainerDied","Data":"7074025de1a27d300619ef2eccfb51db6e8b849bf89aff29ffce9d5b5a2fac38"}
Jan 22 10:15:19 crc kubenswrapper[4811]: I0122 10:15:19.897854 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gln6s" event={"ID":"be4c5e24-9f96-4205-9484-8ae0c6ff3842","Type":"ContainerStarted","Data":"89e39348587aeef84866da65939a0e029a84fcd9cf95f82e0b1e3682144e1018"}
Jan 22 10:15:21 crc kubenswrapper[4811]: I0122 10:15:21.914502 4811 generic.go:334] "Generic (PLEG): container finished" podID="be4c5e24-9f96-4205-9484-8ae0c6ff3842" containerID="89e39348587aeef84866da65939a0e029a84fcd9cf95f82e0b1e3682144e1018" exitCode=0
Jan 22 10:15:21 crc kubenswrapper[4811]: I0122 10:15:21.914558 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gln6s" event={"ID":"be4c5e24-9f96-4205-9484-8ae0c6ff3842","Type":"ContainerDied","Data":"89e39348587aeef84866da65939a0e029a84fcd9cf95f82e0b1e3682144e1018"}
Jan 22 10:15:22 crc kubenswrapper[4811]: I0122 10:15:22.927225 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gln6s" event={"ID":"be4c5e24-9f96-4205-9484-8ae0c6ff3842","Type":"ContainerStarted","Data":"0f0a33b170ebc9870beee9c1dc94b2ee02a646c956d06823adcb77c72b2933e1"}
Jan 22 10:15:22 crc kubenswrapper[4811]: I0122 10:15:22.929520 4811 generic.go:334] "Generic (PLEG): container finished" podID="f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e" containerID="05968ea3c10f43a42a1899c03d062b290949438e1aa746896cd662ee4cbde25a" exitCode=0
Jan 22 10:15:22 crc kubenswrapper[4811]: I0122 10:15:22.929585 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlrpt" event={"ID":"f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e","Type":"ContainerDied","Data":"05968ea3c10f43a42a1899c03d062b290949438e1aa746896cd662ee4cbde25a"}
Jan 22 10:15:22 crc kubenswrapper[4811]: I0122 10:15:22.946929 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gln6s" podStartSLOduration=2.4128951770000002 podStartE2EDuration="5.946916077s" podCreationTimestamp="2026-01-22 10:15:17 +0000 UTC" firstStartedPulling="2026-01-22 10:15:18.891582769 +0000 UTC m=+4163.213769892" lastFinishedPulling="2026-01-22 10:15:22.425603668 +0000 UTC m=+4166.747790792" observedRunningTime="2026-01-22 10:15:22.942077021 +0000 UTC m=+4167.264264144" watchObservedRunningTime="2026-01-22 10:15:22.946916077 +0000 UTC m=+4167.269103201"
Jan 22 10:15:23 crc kubenswrapper[4811]: I0122 10:15:23.939819 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlrpt" event={"ID":"f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e","Type":"ContainerStarted","Data":"209ddb544ff8ab83cf8fae6bcda9804301fbac01dc33c27c215bbc5d8c8caed0"}
Jan 22 10:15:23 crc kubenswrapper[4811]: I0122 10:15:23.958656 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nlrpt" podStartSLOduration=3.4310299730000002 podStartE2EDuration="8.95864129s" podCreationTimestamp="2026-01-22 10:15:15 +0000 UTC" firstStartedPulling="2026-01-22 10:15:17.854277914 +0000 UTC m=+4162.176465037" lastFinishedPulling="2026-01-22 10:15:23.381889231 +0000 UTC m=+4167.704076354" observedRunningTime="2026-01-22 10:15:23.954908259 +0000 UTC m=+4168.277095382" watchObservedRunningTime="2026-01-22 10:15:23.95864129 +0000 UTC m=+4168.280828412"
Jan 22 10:15:26 crc kubenswrapper[4811]: I0122 10:15:26.258482 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nlrpt"
Jan 22 10:15:26 crc kubenswrapper[4811]: I0122 10:15:26.258733 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nlrpt"
Jan 22 10:15:27 crc kubenswrapper[4811]: I0122 10:15:27.298108 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nlrpt" podUID="f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e" containerName="registry-server" probeResult="failure" output=<
Jan 22 10:15:27 crc kubenswrapper[4811]: timeout: failed to connect service ":50051" within 1s
Jan 22 10:15:27 crc kubenswrapper[4811]: >
Jan 22 10:15:27 crc kubenswrapper[4811]: I0122 10:15:27.341944 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gln6s"
Jan 22 10:15:27 crc kubenswrapper[4811]: I0122 10:15:27.342153 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gln6s"
Jan 22 10:15:28 crc kubenswrapper[4811]: I0122 10:15:28.381661 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-gln6s" podUID="be4c5e24-9f96-4205-9484-8ae0c6ff3842" containerName="registry-server" probeResult="failure" output=<
Jan 22 10:15:28 crc kubenswrapper[4811]: timeout: failed to connect service ":50051" within 1s
Jan 22 10:15:28 crc kubenswrapper[4811]: >
Jan 22 10:15:31 crc kubenswrapper[4811]: I0122 10:15:31.862149 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-rl9k7_346ed4cd-2bb8-470d-a275-6c297994fb3f/controller/0.log"
Jan 22 10:15:31 crc kubenswrapper[4811]: I0122 10:15:31.876286 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-rl9k7_346ed4cd-2bb8-470d-a275-6c297994fb3f/kube-rbac-proxy/0.log"
Jan 22 10:15:31 crc kubenswrapper[4811]: I0122 10:15:31.897735 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/controller/0.log"
Jan 22 10:15:33 crc kubenswrapper[4811]: I0122 10:15:33.028821 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/frr/0.log"
Jan 22 10:15:33 crc kubenswrapper[4811]: I0122 10:15:33.036535 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/reloader/0.log"
Jan 22 10:15:33 crc kubenswrapper[4811]: I0122 10:15:33.047589 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/frr-metrics/0.log"
Jan 22 10:15:33 crc kubenswrapper[4811]: I0122 10:15:33.060464 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/kube-rbac-proxy/0.log"
Jan 22 10:15:33 crc kubenswrapper[4811]: I0122 10:15:33.064461 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/kube-rbac-proxy-frr/0.log"
Jan 22 10:15:33 crc kubenswrapper[4811]: I0122 10:15:33.072024 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/cp-frr-files/0.log"
Jan 22 10:15:33 crc kubenswrapper[4811]: I0122 10:15:33.095835 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/cp-reloader/0.log"
Jan 22 10:15:33 crc kubenswrapper[4811]: I0122 10:15:33.106869 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/cp-metrics/0.log"
Jan 22 10:15:33 crc kubenswrapper[4811]: I0122 10:15:33.121904 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-zfggh_2f6eae9c-374b-4ac3-b5d7-04267fe9bf73/frr-k8s-webhook-server/0.log"
Jan 22 10:15:33 crc kubenswrapper[4811]: I0122 10:15:33.142724 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-64bd67c58d-k58sk_13617657-7245-4223-9b20-03a56378edaf/manager/0.log"
Jan 22 10:15:33 crc kubenswrapper[4811]: I0122 10:15:33.151219 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5bc67d6df-ckhh4_1008e895-ec53-4fdd-9423-bbb4d249a6b9/webhook-server/0.log"
Jan 22 10:15:33 crc kubenswrapper[4811]: I0122 10:15:33.458560 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-k88w9_faa36c07-3c7a-4b4a-a04e-58b43a178890/speaker/0.log"
Jan 22 10:15:33 crc kubenswrapper[4811]: I0122 10:15:33.479128 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-k88w9_faa36c07-3c7a-4b4a-a04e-58b43a178890/kube-rbac-proxy/0.log"
Jan 22 10:15:37 crc kubenswrapper[4811]: I0122 10:15:37.294965 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nlrpt" podUID="f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e" containerName="registry-server" probeResult="failure" output=<
Jan 22 10:15:37 crc kubenswrapper[4811]: timeout: failed to connect service ":50051" within 1s
Jan 22 10:15:37 crc kubenswrapper[4811]: >
Jan 22 10:15:37 crc kubenswrapper[4811]: I0122 10:15:37.376760 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gln6s"
Jan 22 10:15:37 crc kubenswrapper[4811]: I0122 10:15:37.416360 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gln6s"
Jan 22 10:15:37 crc kubenswrapper[4811]: I0122 10:15:37.608311 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gln6s"]
Jan 22 10:15:37 crc kubenswrapper[4811]: I0122 10:15:37.824174 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfbp48_4069e1a9-a40a-4b76-bee8-4b35c06e818e/extract/0.log"
Jan 22 10:15:37 crc kubenswrapper[4811]: I0122 10:15:37.834155 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfbp48_4069e1a9-a40a-4b76-bee8-4b35c06e818e/util/0.log"
Jan 22 10:15:37 crc kubenswrapper[4811]: I0122 10:15:37.841247 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfbp48_4069e1a9-a40a-4b76-bee8-4b35c06e818e/pull/0.log"
Jan 22 10:15:37 crc kubenswrapper[4811]: I0122 10:15:37.852449 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc_df76885d-11e8-4fce-a69a-dee26f62c562/extract/0.log"
Jan 22 10:15:37 crc kubenswrapper[4811]: I0122 10:15:37.858465 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc_df76885d-11e8-4fce-a69a-dee26f62c562/util/0.log"
Jan 22 10:15:37 crc kubenswrapper[4811]: I0122 10:15:37.866067 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713p6bdc_df76885d-11e8-4fce-a69a-dee26f62c562/pull/0.log"
Jan 22 10:15:37 crc kubenswrapper[4811]: I0122 10:15:37.876589 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-gln6s_be4c5e24-9f96-4205-9484-8ae0c6ff3842/registry-server/0.log"
Jan 22 10:15:37 crc kubenswrapper[4811]: I0122 10:15:37.882990 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-gln6s_be4c5e24-9f96-4205-9484-8ae0c6ff3842/extract-utilities/0.log"
Jan 22 10:15:37 crc kubenswrapper[4811]: I0122 10:15:37.888536 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-gln6s_be4c5e24-9f96-4205-9484-8ae0c6ff3842/extract-content/0.log"
Jan 22 10:15:38 crc kubenswrapper[4811]: I0122 10:15:38.151025 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q2j6v_2476cdc2-a97d-41a6-8a68-d8d2537180d4/registry-server/0.log"
Jan 22 10:15:38 crc kubenswrapper[4811]: I0122 10:15:38.159958 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q2j6v_2476cdc2-a97d-41a6-8a68-d8d2537180d4/extract-utilities/0.log"
Jan 22 10:15:38 crc kubenswrapper[4811]: I0122 10:15:38.171743 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q2j6v_2476cdc2-a97d-41a6-8a68-d8d2537180d4/extract-content/0.log"
Jan 22 10:15:38 crc kubenswrapper[4811]: I0122 10:15:38.709589 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xn9pt_73a05511-7e27-41cd-9da6-e9277550936d/registry-server/0.log"
Jan 22 10:15:38 crc kubenswrapper[4811]: I0122 10:15:38.714257 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xn9pt_73a05511-7e27-41cd-9da6-e9277550936d/extract-utilities/0.log"
Jan 22 10:15:38 crc kubenswrapper[4811]: I0122 10:15:38.721039 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xn9pt_73a05511-7e27-41cd-9da6-e9277550936d/extract-content/0.log"
Jan 22 10:15:38 crc kubenswrapper[4811]: I0122 10:15:38.733584 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-cnjv9_f62c6396-82e9-4314-912a-42f5265b03bb/marketplace-operator/0.log"
Jan 22 10:15:38 crc kubenswrapper[4811]: I0122 10:15:38.859013 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pbqkb_aedb9efe-c04f-46f9-9b3c-c231b81440e7/registry-server/0.log"
Jan 22 10:15:38 crc kubenswrapper[4811]: I0122 10:15:38.950233 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pbqkb_aedb9efe-c04f-46f9-9b3c-c231b81440e7/extract-utilities/0.log"
Jan 22 10:15:39 crc kubenswrapper[4811]: I0122 10:15:39.095854 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gln6s" podUID="be4c5e24-9f96-4205-9484-8ae0c6ff3842" containerName="registry-server" containerID="cri-o://0f0a33b170ebc9870beee9c1dc94b2ee02a646c956d06823adcb77c72b2933e1" gracePeriod=2
Jan 22 10:15:39 crc kubenswrapper[4811]: I0122 10:15:39.238119 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pbqkb_aedb9efe-c04f-46f9-9b3c-c231b81440e7/extract-content/0.log"
Jan 22 10:15:39 crc kubenswrapper[4811]: I0122 10:15:39.729934 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-65jbw_d1f41cc2-bb4a-415e-80a6-8ae31b4c354f/registry-server/0.log"
Jan 22 10:15:39 crc kubenswrapper[4811]: I0122 10:15:39.736586 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-65jbw_d1f41cc2-bb4a-415e-80a6-8ae31b4c354f/extract-utilities/0.log"
Jan 22 10:15:39 crc kubenswrapper[4811]: I0122 10:15:39.745864 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-65jbw_d1f41cc2-bb4a-415e-80a6-8ae31b4c354f/extract-content/0.log"
Jan 22 10:15:39 crc kubenswrapper[4811]: I0122 10:15:39.779125 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nlrpt_f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e/registry-server/0.log"
Jan 22 10:15:39 crc kubenswrapper[4811]: I0122 10:15:39.786237 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nlrpt_f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e/extract-utilities/0.log"
Jan 22 10:15:39 crc kubenswrapper[4811]: I0122 10:15:39.796107 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nlrpt_f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e/extract-content/0.log"
Jan 22 10:15:40 crc kubenswrapper[4811]: I0122 10:15:40.123447 4811 generic.go:334] "Generic (PLEG): container finished" podID="be4c5e24-9f96-4205-9484-8ae0c6ff3842" containerID="0f0a33b170ebc9870beee9c1dc94b2ee02a646c956d06823adcb77c72b2933e1" exitCode=0
Jan 22 10:15:40 crc kubenswrapper[4811]: I0122 10:15:40.123722 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gln6s" event={"ID":"be4c5e24-9f96-4205-9484-8ae0c6ff3842","Type":"ContainerDied","Data":"0f0a33b170ebc9870beee9c1dc94b2ee02a646c956d06823adcb77c72b2933e1"}
Jan 22 10:15:40 crc kubenswrapper[4811]: I0122 10:15:40.423371 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gln6s"
Jan 22 10:15:40 crc kubenswrapper[4811]: I0122 10:15:40.609149 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be4c5e24-9f96-4205-9484-8ae0c6ff3842-utilities\") pod \"be4c5e24-9f96-4205-9484-8ae0c6ff3842\" (UID: \"be4c5e24-9f96-4205-9484-8ae0c6ff3842\") "
Jan 22 10:15:40 crc kubenswrapper[4811]: I0122 10:15:40.609422 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmgsq\" (UniqueName: \"kubernetes.io/projected/be4c5e24-9f96-4205-9484-8ae0c6ff3842-kube-api-access-kmgsq\") pod \"be4c5e24-9f96-4205-9484-8ae0c6ff3842\" (UID: \"be4c5e24-9f96-4205-9484-8ae0c6ff3842\") "
Jan 22 10:15:40 crc kubenswrapper[4811]: I0122 10:15:40.609581 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be4c5e24-9f96-4205-9484-8ae0c6ff3842-catalog-content\") pod \"be4c5e24-9f96-4205-9484-8ae0c6ff3842\" (UID: \"be4c5e24-9f96-4205-9484-8ae0c6ff3842\") "
Jan 22 10:15:40 crc kubenswrapper[4811]: I0122 10:15:40.609871 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be4c5e24-9f96-4205-9484-8ae0c6ff3842-utilities" (OuterVolumeSpecName: "utilities") pod "be4c5e24-9f96-4205-9484-8ae0c6ff3842" (UID: "be4c5e24-9f96-4205-9484-8ae0c6ff3842"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 10:15:40 crc kubenswrapper[4811]: I0122 10:15:40.610691 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be4c5e24-9f96-4205-9484-8ae0c6ff3842-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 10:15:40 crc kubenswrapper[4811]: I0122 10:15:40.616990 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be4c5e24-9f96-4205-9484-8ae0c6ff3842-kube-api-access-kmgsq" (OuterVolumeSpecName: "kube-api-access-kmgsq") pod "be4c5e24-9f96-4205-9484-8ae0c6ff3842" (UID: "be4c5e24-9f96-4205-9484-8ae0c6ff3842"). InnerVolumeSpecName "kube-api-access-kmgsq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:15:40 crc kubenswrapper[4811]: I0122 10:15:40.644754 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be4c5e24-9f96-4205-9484-8ae0c6ff3842-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "be4c5e24-9f96-4205-9484-8ae0c6ff3842" (UID: "be4c5e24-9f96-4205-9484-8ae0c6ff3842"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 10:15:40 crc kubenswrapper[4811]: I0122 10:15:40.712619 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmgsq\" (UniqueName: \"kubernetes.io/projected/be4c5e24-9f96-4205-9484-8ae0c6ff3842-kube-api-access-kmgsq\") on node \"crc\" DevicePath \"\""
Jan 22 10:15:40 crc kubenswrapper[4811]: I0122 10:15:40.712670 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be4c5e24-9f96-4205-9484-8ae0c6ff3842-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 10:15:41 crc kubenswrapper[4811]: I0122 10:15:41.134343 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gln6s" event={"ID":"be4c5e24-9f96-4205-9484-8ae0c6ff3842","Type":"ContainerDied","Data":"5a40cae827ed823468e3e8a2dc25f405cc206540d35ddcf3587bcae23872e570"}
Jan 22 10:15:41 crc kubenswrapper[4811]: I0122 10:15:41.134411 4811 scope.go:117] "RemoveContainer" containerID="0f0a33b170ebc9870beee9c1dc94b2ee02a646c956d06823adcb77c72b2933e1"
Jan 22 10:15:41 crc kubenswrapper[4811]: I0122 10:15:41.135322 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gln6s"
Jan 22 10:15:41 crc kubenswrapper[4811]: I0122 10:15:41.150714 4811 scope.go:117] "RemoveContainer" containerID="89e39348587aeef84866da65939a0e029a84fcd9cf95f82e0b1e3682144e1018"
Jan 22 10:15:41 crc kubenswrapper[4811]: I0122 10:15:41.171245 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gln6s"]
Jan 22 10:15:41 crc kubenswrapper[4811]: I0122 10:15:41.182524 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gln6s"]
Jan 22 10:15:41 crc kubenswrapper[4811]: I0122 10:15:41.185923 4811 scope.go:117] "RemoveContainer" containerID="7074025de1a27d300619ef2eccfb51db6e8b849bf89aff29ffce9d5b5a2fac38"
Jan 22 10:15:42 crc kubenswrapper[4811]: I0122 10:15:42.000074 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be4c5e24-9f96-4205-9484-8ae0c6ff3842" path="/var/lib/kubelet/pods/be4c5e24-9f96-4205-9484-8ae0c6ff3842/volumes"
Jan 22 10:15:46 crc kubenswrapper[4811]: I0122 10:15:46.304164 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nlrpt"
Jan 22 10:15:46 crc kubenswrapper[4811]: I0122 10:15:46.355409 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nlrpt"
Jan 22 10:15:47 crc kubenswrapper[4811]: I0122 10:15:47.427306 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nlrpt"]
Jan 22 10:15:48 crc kubenswrapper[4811]: I0122 10:15:48.186160 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nlrpt" podUID="f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e" containerName="registry-server" containerID="cri-o://209ddb544ff8ab83cf8fae6bcda9804301fbac01dc33c27c215bbc5d8c8caed0" gracePeriod=2
Jan 22 10:15:48 crc kubenswrapper[4811]: I0122 10:15:48.787079 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nlrpt"
Jan 22 10:15:48 crc kubenswrapper[4811]: I0122 10:15:48.889204 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e-utilities\") pod \"f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e\" (UID: \"f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e\") "
Jan 22 10:15:48 crc kubenswrapper[4811]: I0122 10:15:48.889290 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lltxw\" (UniqueName: \"kubernetes.io/projected/f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e-kube-api-access-lltxw\") pod \"f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e\" (UID: \"f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e\") "
Jan 22 10:15:48 crc kubenswrapper[4811]: I0122 10:15:48.889444 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e-catalog-content\") pod \"f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e\" (UID: \"f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e\") "
Jan 22 10:15:48 crc kubenswrapper[4811]: I0122 10:15:48.890524 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e-utilities" (OuterVolumeSpecName: "utilities") pod "f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e" (UID: "f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 10:15:48 crc kubenswrapper[4811]: I0122 10:15:48.896981 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e-kube-api-access-lltxw" (OuterVolumeSpecName: "kube-api-access-lltxw") pod "f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e" (UID: "f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e"). InnerVolumeSpecName "kube-api-access-lltxw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:15:48 crc kubenswrapper[4811]: I0122 10:15:48.982427 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e" (UID: "f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 10:15:48 crc kubenswrapper[4811]: I0122 10:15:48.991884 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 10:15:48 crc kubenswrapper[4811]: I0122 10:15:48.992382 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 10:15:48 crc kubenswrapper[4811]: I0122 10:15:48.992616 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lltxw\" (UniqueName: \"kubernetes.io/projected/f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e-kube-api-access-lltxw\") on node \"crc\" DevicePath \"\""
Jan 22 10:15:49 crc kubenswrapper[4811]: I0122 10:15:49.212428 4811 generic.go:334] "Generic (PLEG): container finished" podID="f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e" containerID="209ddb544ff8ab83cf8fae6bcda9804301fbac01dc33c27c215bbc5d8c8caed0" exitCode=0
Jan 22 10:15:49 crc kubenswrapper[4811]: I0122 10:15:49.212477 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlrpt" event={"ID":"f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e","Type":"ContainerDied","Data":"209ddb544ff8ab83cf8fae6bcda9804301fbac01dc33c27c215bbc5d8c8caed0"}
Jan 22 10:15:49 crc kubenswrapper[4811]: I0122 10:15:49.212510 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlrpt" event={"ID":"f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e","Type":"ContainerDied","Data":"8223b7d59f8053410194abf1ea1d5d105cf52453d29fcea6312dd3e1d2b963cf"}
Jan 22 10:15:49 crc kubenswrapper[4811]: I0122 10:15:49.212542 4811 scope.go:117] "RemoveContainer" containerID="209ddb544ff8ab83cf8fae6bcda9804301fbac01dc33c27c215bbc5d8c8caed0"
Jan 22 10:15:49 crc kubenswrapper[4811]: I0122 10:15:49.212803 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nlrpt"
Jan 22 10:15:49 crc kubenswrapper[4811]: I0122 10:15:49.241131 4811 scope.go:117] "RemoveContainer" containerID="05968ea3c10f43a42a1899c03d062b290949438e1aa746896cd662ee4cbde25a"
Jan 22 10:15:49 crc kubenswrapper[4811]: I0122 10:15:49.248288 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nlrpt"]
Jan 22 10:15:49 crc kubenswrapper[4811]: I0122 10:15:49.261086 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nlrpt"]
Jan 22 10:15:49 crc kubenswrapper[4811]: I0122 10:15:49.270815 4811 scope.go:117] "RemoveContainer" containerID="e926624e91785c9550281e3c6a9c8c917b4ab6a7f930ef99f9cd2e0963afc1d2"
Jan 22 10:15:49 crc kubenswrapper[4811]: I0122 10:15:49.293907 4811 scope.go:117] "RemoveContainer" containerID="209ddb544ff8ab83cf8fae6bcda9804301fbac01dc33c27c215bbc5d8c8caed0"
Jan 22 10:15:49 crc kubenswrapper[4811]: E0122 10:15:49.294543 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"209ddb544ff8ab83cf8fae6bcda9804301fbac01dc33c27c215bbc5d8c8caed0\": container with ID starting with 209ddb544ff8ab83cf8fae6bcda9804301fbac01dc33c27c215bbc5d8c8caed0 not found: ID does not exist" containerID="209ddb544ff8ab83cf8fae6bcda9804301fbac01dc33c27c215bbc5d8c8caed0"
Jan 22 10:15:49 crc kubenswrapper[4811]: I0122 10:15:49.294583 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"209ddb544ff8ab83cf8fae6bcda9804301fbac01dc33c27c215bbc5d8c8caed0"} err="failed to get container status \"209ddb544ff8ab83cf8fae6bcda9804301fbac01dc33c27c215bbc5d8c8caed0\": rpc error: code = NotFound desc = could not find container \"209ddb544ff8ab83cf8fae6bcda9804301fbac01dc33c27c215bbc5d8c8caed0\": container with ID starting with 209ddb544ff8ab83cf8fae6bcda9804301fbac01dc33c27c215bbc5d8c8caed0 not found: ID does not exist"
Jan 22 10:15:49 crc kubenswrapper[4811]: I0122 10:15:49.294605 4811 scope.go:117] "RemoveContainer" containerID="05968ea3c10f43a42a1899c03d062b290949438e1aa746896cd662ee4cbde25a"
Jan 22 10:15:49 crc kubenswrapper[4811]: E0122 10:15:49.294835 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05968ea3c10f43a42a1899c03d062b290949438e1aa746896cd662ee4cbde25a\": container with ID starting with 05968ea3c10f43a42a1899c03d062b290949438e1aa746896cd662ee4cbde25a not found: ID does not exist" containerID="05968ea3c10f43a42a1899c03d062b290949438e1aa746896cd662ee4cbde25a"
Jan 22 10:15:49 crc kubenswrapper[4811]: I0122 10:15:49.294851 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05968ea3c10f43a42a1899c03d062b290949438e1aa746896cd662ee4cbde25a"} err="failed to get container status \"05968ea3c10f43a42a1899c03d062b290949438e1aa746896cd662ee4cbde25a\": rpc error: code = NotFound desc = could not find container \"05968ea3c10f43a42a1899c03d062b290949438e1aa746896cd662ee4cbde25a\": container with ID starting with 05968ea3c10f43a42a1899c03d062b290949438e1aa746896cd662ee4cbde25a not found: ID does not exist"
Jan 22 10:15:49 crc kubenswrapper[4811]: I0122 10:15:49.294864 4811 scope.go:117] "RemoveContainer" containerID="e926624e91785c9550281e3c6a9c8c917b4ab6a7f930ef99f9cd2e0963afc1d2"
Jan 22 10:15:49 crc kubenswrapper[4811]: E0122 10:15:49.295035 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e926624e91785c9550281e3c6a9c8c917b4ab6a7f930ef99f9cd2e0963afc1d2\": container with ID starting with e926624e91785c9550281e3c6a9c8c917b4ab6a7f930ef99f9cd2e0963afc1d2 not found: ID does not exist" containerID="e926624e91785c9550281e3c6a9c8c917b4ab6a7f930ef99f9cd2e0963afc1d2"
Jan 22 10:15:49 crc kubenswrapper[4811]: I0122 10:15:49.295050 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e926624e91785c9550281e3c6a9c8c917b4ab6a7f930ef99f9cd2e0963afc1d2"} err="failed to get container status \"e926624e91785c9550281e3c6a9c8c917b4ab6a7f930ef99f9cd2e0963afc1d2\": rpc error: code = NotFound desc = could not find container \"e926624e91785c9550281e3c6a9c8c917b4ab6a7f930ef99f9cd2e0963afc1d2\": container with ID starting with e926624e91785c9550281e3c6a9c8c917b4ab6a7f930ef99f9cd2e0963afc1d2 not found: ID does not exist"
Jan 22 10:15:50 crc kubenswrapper[4811]: I0122 10:15:50.000669 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e" path="/var/lib/kubelet/pods/f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e/volumes"
Jan 22 10:16:01 crc kubenswrapper[4811]: I0122 10:16:01.660981 4811 scope.go:117] "RemoveContainer" containerID="90d2ccce569a0ca7e62f8c145184f711438116f9ee4f27afec59b76a25667d2c"
Jan 22 10:16:05 crc kubenswrapper[4811]: I0122 10:16:05.501251 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 10:16:05 crc kubenswrapper[4811]: I0122 10:16:05.502124 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 10:16:35 crc kubenswrapper[4811]: I0122 10:16:35.501785 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 10:16:35 crc kubenswrapper[4811]: I0122 10:16:35.502153 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 10:16:52 crc kubenswrapper[4811]: I0122 10:16:52.408927 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-rl9k7_346ed4cd-2bb8-470d-a275-6c297994fb3f/controller/0.log"
Jan 22 10:16:52 crc kubenswrapper[4811]: I0122 10:16:52.415363 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-rl9k7_346ed4cd-2bb8-470d-a275-6c297994fb3f/kube-rbac-proxy/0.log"
Jan 22 10:16:52 crc kubenswrapper[4811]: I0122 10:16:52.433485 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/controller/0.log"
Jan 22 10:16:52 crc kubenswrapper[4811]: I0122 10:16:52.861303 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-xvdbh_97d60b95-f52c-4946-919a-e8fd73251ed5/cert-manager-controller/0.log"
Jan 22 10:16:52 crc kubenswrapper[4811]: I0122 10:16:52.891863 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-jbj4c_4dbc71dd-a371-4735-bc7e-6c29eb855fbd/cert-manager-cainjector/0.log"
Jan 22 10:16:52 crc kubenswrapper[4811]: I0122 10:16:52.899847 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-sgnhm_34e7fc62-f1c1-41cb-b44c-2ef705fa2a15/cert-manager-webhook/0.log"
Jan 22 10:16:53 crc kubenswrapper[4811]: I0122 10:16:53.684494 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/frr/0.log"
Jan 22 10:16:53 crc kubenswrapper[4811]: I0122 10:16:53.706553 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/reloader/0.log"
Jan 22 10:16:53 crc kubenswrapper[4811]: I0122 10:16:53.713748 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/frr-metrics/0.log"
Jan 22 10:16:53 crc kubenswrapper[4811]: I0122 10:16:53.721302 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/kube-rbac-proxy/0.log"
Jan 22 10:16:53 crc kubenswrapper[4811]: I0122 10:16:53.727101 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/kube-rbac-proxy-frr/0.log"
Jan 22 10:16:53 crc kubenswrapper[4811]: I0122 10:16:53.732491 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/cp-frr-files/0.log"
Jan 22 10:16:53 crc kubenswrapper[4811]: I0122 10:16:53.739259 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/cp-reloader/0.log"
Jan 22 10:16:53 crc kubenswrapper[4811]: I0122 10:16:53.746103 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rnvr5_d2a3b77d-fd7c-421e-aa23-d206419b1c7d/cp-metrics/0.log"
Jan 22 10:16:53 crc kubenswrapper[4811]: I0122 10:16:53.757071 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-zfggh_2f6eae9c-374b-4ac3-b5d7-04267fe9bf73/frr-k8s-webhook-server/0.log"
Jan 22 10:16:53 crc kubenswrapper[4811]: I0122 10:16:53.773134 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-64bd67c58d-k58sk_13617657-7245-4223-9b20-03a56378edaf/manager/0.log"
Jan 22 10:16:53 crc kubenswrapper[4811]: I0122 10:16:53.788155 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5bc67d6df-ckhh4_1008e895-ec53-4fdd-9423-bbb4d249a6b9/webhook-server/0.log"
Jan 22 10:16:54 crc kubenswrapper[4811]: I0122 10:16:54.028505 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3078a1b977323d6d8f95a4018ef06f377c198eb1741282ac05e933b60384v48_d014441f-4913-4583-ba8e-b1c20aaeed47/extract/0.log"
Jan 22 10:16:54 crc kubenswrapper[4811]: I0122 10:16:54.037181 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-k88w9_faa36c07-3c7a-4b4a-a04e-58b43a178890/speaker/0.log"
Jan 22 10:16:54 crc kubenswrapper[4811]: I0122 10:16:54.040996 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3078a1b977323d6d8f95a4018ef06f377c198eb1741282ac05e933b60384v48_d014441f-4913-4583-ba8e-b1c20aaeed47/util/0.log"
Jan 22 10:16:54 crc kubenswrapper[4811]: I0122 10:16:54.046560 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-k88w9_faa36c07-3c7a-4b4a-a04e-58b43a178890/kube-rbac-proxy/0.log"
Jan 22 10:16:54 crc kubenswrapper[4811]: I0122 10:16:54.052028 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3078a1b977323d6d8f95a4018ef06f377c198eb1741282ac05e933b60384v48_d014441f-4913-4583-ba8e-b1c20aaeed47/pull/0.log"
Jan 22 10:16:54 crc kubenswrapper[4811]: I0122 10:16:54.119654 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-59dd8b7cbf-pklcs_09ad3a19-244b-4685-8c96-0bee227b6547/manager/0.log"
Jan 22 10:16:54 crc kubenswrapper[4811]: I0122 10:16:54.172390 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-rgwhg_e6fe0bc0-30b4-4a2f-b36d-93d5b288ecf8/manager/0.log"
Jan 22 10:16:54 crc kubenswrapper[4811]: I0122 10:16:54.182684 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-2ltqr_62aa676a-95ae-40a8-9db5-b5fd24a293c2/manager/0.log"
Jan 22 10:16:54 crc kubenswrapper[4811]: I0122 10:16:54.246175 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-26vqb_b0f07719-5203-4d79-82b4-995b8af81a00/manager/0.log"
Jan 22 10:16:54 crc kubenswrapper[4811]: I0122 10:16:54.254329 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-vbbnq_e019bc4b-f0e7-4a4f-a42c-1486010a63fd/manager/0.log"
Jan 22 10:16:54 crc kubenswrapper[4811]: I0122 10:16:54.269322 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-7p5h9_62a9fc61-630e-4f4d-9788-f21e25ab4dda/manager/0.log"
Jan 22 10:16:54 crc kubenswrapper[4811]: I0122 10:16:54.505513 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-54ccf4f85d-r6z6t_81d4cd92-880c-4806-ab95-fcb009827075/manager/0.log"
Jan 22 10:16:54 crc kubenswrapper[4811]: I0122 10:16:54.518453 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-69d6c9f5b8-fx6zn_ce893825-4e8e-4c9b-b37e-a974d7cfda21/manager/0.log"
Jan 22 10:16:54 crc kubenswrapper[4811]: I0122 10:16:54.577382 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-99m2t_9dfc4274-a6f5-4dcb-8bcf-b2e0ff337574/manager/0.log"
Jan 22 10:16:54 crc kubenswrapper[4811]: I0122 10:16:54.648768 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-4wtlm_688057d8-0445-42c1-b073-83deb026ab4c/manager/0.log"
Jan 22 10:16:54 crc kubenswrapper[4811]: I0122 10:16:54.676602 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-kc9m5_02697a04-4401-498c-9b69-ff0b57ce8f4b/manager/0.log"
Jan 22 10:16:54 crc kubenswrapper[4811]: I0122 10:16:54.725055 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5d8f59fb49-tll52_c284b86e-5a0e-4bd4-aa67-4e2c208ea4fa/manager/0.log"
Jan 22 10:16:54 crc kubenswrapper[4811]: I0122 10:16:54.799675 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-h7wzt_b157cb38-af8a-41bf-a29a-2da5b59aa500/manager/0.log"
Jan 22 10:16:54 crc kubenswrapper[4811]: I0122 10:16:54.811675 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-t9djx_a247bb8f-a274-481d-916b-8ad80521af31/manager/0.log"
Jan 22 10:16:54 crc kubenswrapper[4811]: I0122 10:16:54.826558 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7fbbcdb4d6qmzp5_7c209919-fd54-40e8-a741-7006cf8dd361/manager/0.log"
Jan 22 10:16:54 crc kubenswrapper[4811]: I0122 10:16:54.942250 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5cd76577f9-kn8dt_a01a5eb9-0bef-4a6b-af9e-d71281e2ae34/operator/0.log"
Jan 22 10:16:55 crc kubenswrapper[4811]: I0122 10:16:55.323481 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-xvdbh_97d60b95-f52c-4946-919a-e8fd73251ed5/cert-manager-controller/0.log"
Jan 22 10:16:55 crc kubenswrapper[4811]: I0122 10:16:55.344420 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-jbj4c_4dbc71dd-a371-4735-bc7e-6c29eb855fbd/cert-manager-cainjector/0.log"
Jan 22 10:16:55 crc kubenswrapper[4811]: I0122 10:16:55.352129 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-sgnhm_34e7fc62-f1c1-41cb-b44c-2ef705fa2a15/cert-manager-webhook/0.log"
Jan 22 10:16:56 crc kubenswrapper[4811]: I0122 10:16:56.139410 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-647bb87bbd-v227g_c0b74933-8fe4-4fb1-82af-eda7df5c3c06/manager/0.log"
Jan 22 10:16:56 crc kubenswrapper[4811]: I0122 10:16:56.199546 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-c2sr9_5518af80-1f74-4caf-8bc0-80680646bfca/registry-server/0.log"
Jan 22 10:16:56 crc kubenswrapper[4811]: I0122 10:16:56.231771 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-9pxnj_8b7094aa-cc4a-49eb-be77-715a4efbc1d0/control-plane-machine-set-operator/0.log"
Jan 22 10:16:56 crc kubenswrapper[4811]: I0122 10:16:56.250208 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-rx42r_8a9d91fa-d887-4128-af43-cfe3cad79784/kube-rbac-proxy/0.log"
Jan 22 10:16:56 crc kubenswrapper[4811]: I0122 10:16:56.256496 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-xp5jv_e343c2da-412a-4226-b711-81f83fdbb04b/manager/0.log"
Jan 22 10:16:56 crc kubenswrapper[4811]: I0122 10:16:56.257940 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-rx42r_8a9d91fa-d887-4128-af43-cfe3cad79784/machine-api-operator/0.log"
Jan 22 10:16:56 crc kubenswrapper[4811]: I0122 10:16:56.284501 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-rvsm7_b579b636-697b-4a23-9de7-1f9a8537eb94/manager/0.log"
Jan 22 10:16:56 crc kubenswrapper[4811]: I0122 10:16:56.301058 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-2l9vr_c624375e-a5cf-49b8-a54a-5770a6c7e738/operator/0.log"
Jan 22 10:16:56 crc kubenswrapper[4811]: I0122 10:16:56.311050 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-4xkc8_b3409f06-ef57-4717-b3e2-9b4f788fd7f0/manager/0.log"
Jan 22 10:16:56 crc kubenswrapper[4811]: I0122 10:16:56.383148 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-627nz_42323d0d-05b6-4a0d-a809-405dec7c2893/manager/0.log"
Jan 22 10:16:56 crc kubenswrapper[4811]: I0122 10:16:56.392117 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-mg5cr_d990df50-3df1-46b6-b6df-5b84bf8eeb20/manager/0.log"
Jan 22 10:16:56 crc kubenswrapper[4811]: I0122 10:16:56.399211 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5ffb9c6597-kxs2j_15eb97d5-2508-4c32-8b7e-65f1015767cf/manager/0.log"
Jan 22 10:16:56 crc kubenswrapper[4811]: I0122 10:16:56.871180 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3078a1b977323d6d8f95a4018ef06f377c198eb1741282ac05e933b60384v48_d014441f-4913-4583-ba8e-b1c20aaeed47/extract/0.log"
Jan 22 10:16:56 crc kubenswrapper[4811]: I0122 10:16:56.878132 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3078a1b977323d6d8f95a4018ef06f377c198eb1741282ac05e933b60384v48_d014441f-4913-4583-ba8e-b1c20aaeed47/util/0.log"
Jan 22 10:16:56 crc kubenswrapper[4811]: I0122 10:16:56.883824 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3078a1b977323d6d8f95a4018ef06f377c198eb1741282ac05e933b60384v48_d014441f-4913-4583-ba8e-b1c20aaeed47/pull/0.log"
Jan 22 10:16:56 crc kubenswrapper[4811]: I0122 10:16:56.950162 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-59dd8b7cbf-pklcs_09ad3a19-244b-4685-8c96-0bee227b6547/manager/0.log"
Jan 22 10:16:56 crc kubenswrapper[4811]: I0122 10:16:56.998015 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-rgwhg_e6fe0bc0-30b4-4a2f-b36d-93d5b288ecf8/manager/0.log"
Jan 22 10:16:57 crc kubenswrapper[4811]: I0122 10:16:57.011737 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-2ltqr_62aa676a-95ae-40a8-9db5-b5fd24a293c2/manager/0.log"
Jan 22 10:16:57 crc kubenswrapper[4811]: I0122 10:16:57.087576 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-26vqb_b0f07719-5203-4d79-82b4-995b8af81a00/manager/0.log"
Jan 22 10:16:57 crc kubenswrapper[4811]: I0122 10:16:57.097143 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-vbbnq_e019bc4b-f0e7-4a4f-a42c-1486010a63fd/manager/0.log"
Jan 22 10:16:57 crc kubenswrapper[4811]: I0122 10:16:57.109864 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-7p5h9_62a9fc61-630e-4f4d-9788-f21e25ab4dda/manager/0.log"
Jan 22 10:16:57 crc kubenswrapper[4811]: I0122 10:16:57.345293 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-54ccf4f85d-r6z6t_81d4cd92-880c-4806-ab95-fcb009827075/manager/0.log"
Jan 22 10:16:57 crc kubenswrapper[4811]: I0122 10:16:57.353365 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-69d6c9f5b8-fx6zn_ce893825-4e8e-4c9b-b37e-a974d7cfda21/manager/0.log"
Jan 22 10:16:57 crc kubenswrapper[4811]: I0122 10:16:57.408076 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-99m2t_9dfc4274-a6f5-4dcb-8bcf-b2e0ff337574/manager/0.log"
Jan 22 10:16:57 crc kubenswrapper[4811]: I0122 10:16:57.456820 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-4wtlm_688057d8-0445-42c1-b073-83deb026ab4c/manager/0.log"
Jan 22 10:16:57 crc kubenswrapper[4811]: I0122 10:16:57.479457 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-kc9m5_02697a04-4401-498c-9b69-ff0b57ce8f4b/manager/0.log"
Jan 22 10:16:57 crc kubenswrapper[4811]: I0122 10:16:57.519529 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5d8f59fb49-tll52_c284b86e-5a0e-4bd4-aa67-4e2c208ea4fa/manager/0.log"
Jan 22 10:16:57 crc kubenswrapper[4811]: I0122 10:16:57.575023 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-h7wzt_b157cb38-af8a-41bf-a29a-2da5b59aa500/manager/0.log"
Jan 22 10:16:57 crc kubenswrapper[4811]: I0122 10:16:57.583152 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-t9djx_a247bb8f-a274-481d-916b-8ad80521af31/manager/0.log"
Jan 22 10:16:57 crc kubenswrapper[4811]: I0122 10:16:57.596564 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7fbbcdb4d6qmzp5_7c209919-fd54-40e8-a741-7006cf8dd361/manager/0.log"
Jan 22 10:16:57 crc kubenswrapper[4811]: I0122 10:16:57.634408 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-22qhf_d63fcc2e-ef3c-4a10-9444-43070aa0dc77/nmstate-console-plugin/0.log"
Jan 22 10:16:57 crc kubenswrapper[4811]: I0122 10:16:57.653230 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-tvjnz_74fc22de-195f-452c-b18c-f12c53f2465f/nmstate-handler/0.log"
Jan 22 10:16:57 crc kubenswrapper[4811]: I0122 10:16:57.662343 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-nmrg4_66e8ec28-33fd-440b-9064-dd5c40cf4b61/nmstate-metrics/0.log"
Jan 22 10:16:57 crc kubenswrapper[4811]: I0122 10:16:57.669550 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-nmrg4_66e8ec28-33fd-440b-9064-dd5c40cf4b61/kube-rbac-proxy/0.log"
Jan 22 10:16:57 crc kubenswrapper[4811]: I0122 10:16:57.689722 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-v76qn_952cfa08-4a5f-43b8-aa83-58839cc92523/nmstate-operator/0.log"
Jan 22 10:16:57 crc kubenswrapper[4811]: I0122 10:16:57.705576 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-tnk97_9e9c9633-f916-440c-b02c-5bb58eb51e76/nmstate-webhook/0.log"
Jan 22 10:16:57 crc kubenswrapper[4811]: I0122 10:16:57.715595 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5cd76577f9-kn8dt_a01a5eb9-0bef-4a6b-af9e-d71281e2ae34/operator/0.log"
Jan 22 10:16:58 crc kubenswrapper[4811]: I0122 10:16:58.931573 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-647bb87bbd-v227g_c0b74933-8fe4-4fb1-82af-eda7df5c3c06/manager/0.log"
Jan 22 10:16:58 crc kubenswrapper[4811]: I0122 10:16:58.998266 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-c2sr9_5518af80-1f74-4caf-8bc0-80680646bfca/registry-server/0.log"
Jan 22 10:16:59 crc kubenswrapper[4811]: I0122 10:16:59.040688 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-xp5jv_e343c2da-412a-4226-b711-81f83fdbb04b/manager/0.log"
Jan 22 10:16:59 crc kubenswrapper[4811]: I0122 10:16:59.067603 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-rvsm7_b579b636-697b-4a23-9de7-1f9a8537eb94/manager/0.log"
Jan 22 10:16:59 crc kubenswrapper[4811]: I0122 10:16:59.089547 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-2l9vr_c624375e-a5cf-49b8-a54a-5770a6c7e738/operator/0.log"
Jan 22 10:16:59 crc kubenswrapper[4811]: I0122 10:16:59.108421 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-4xkc8_b3409f06-ef57-4717-b3e2-9b4f788fd7f0/manager/0.log"
Jan 22 10:16:59 crc kubenswrapper[4811]: I0122 10:16:59.159845 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-627nz_42323d0d-05b6-4a0d-a809-405dec7c2893/manager/0.log"
Jan 22 10:16:59 crc kubenswrapper[4811]: I0122 10:16:59.168220 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-mg5cr_d990df50-3df1-46b6-b6df-5b84bf8eeb20/manager/0.log"
Jan 22 10:16:59 crc kubenswrapper[4811]: I0122 10:16:59.176336 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5ffb9c6597-kxs2j_15eb97d5-2508-4c32-8b7e-65f1015767cf/manager/0.log"
Jan 22 10:17:00 crc kubenswrapper[4811]: I0122 10:17:00.468757 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-9g4j8_3d23c9c9-89ca-4db5-99dc-1e5b9f80be38/kube-multus-additional-cni-plugins/0.log"
Jan 22 10:17:00 crc kubenswrapper[4811]: I0122 10:17:00.476605 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-9g4j8_3d23c9c9-89ca-4db5-99dc-1e5b9f80be38/egress-router-binary-copy/0.log"
Jan 22 10:17:00 crc kubenswrapper[4811]: I0122 10:17:00.482609 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-9g4j8_3d23c9c9-89ca-4db5-99dc-1e5b9f80be38/cni-plugins/0.log"
Jan 22 10:17:00 crc kubenswrapper[4811]: I0122 10:17:00.488326 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-9g4j8_3d23c9c9-89ca-4db5-99dc-1e5b9f80be38/bond-cni-plugin/0.log"
Jan 22 10:17:00 crc kubenswrapper[4811]: I0122 10:17:00.493488 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-9g4j8_3d23c9c9-89ca-4db5-99dc-1e5b9f80be38/routeoverride-cni/0.log"
Jan 22 10:17:00 crc kubenswrapper[4811]: I0122 10:17:00.499347 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-9g4j8_3d23c9c9-89ca-4db5-99dc-1e5b9f80be38/whereabouts-cni-bincopy/0.log"
Jan 22 10:17:00 crc kubenswrapper[4811]: I0122 10:17:00.504736 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-9g4j8_3d23c9c9-89ca-4db5-99dc-1e5b9f80be38/whereabouts-cni/0.log"
Jan 22 10:17:00 crc kubenswrapper[4811]: I0122 10:17:00.535159 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-78zjg_cb22b2ae-6c13-482b-b827-5200e2be87ca/multus-admission-controller/0.log"
Jan 22 10:17:00 crc kubenswrapper[4811]: I0122 10:17:00.539243 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-78zjg_cb22b2ae-6c13-482b-b827-5200e2be87ca/kube-rbac-proxy/0.log"
Jan 22 10:17:00 crc kubenswrapper[4811]: I0122 10:17:00.577529 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kfqgt_f2555861-d1bb-4f21-be4a-165ed9212932/kube-multus/2.log"
Jan 22 10:17:00 crc kubenswrapper[4811]: I0122 10:17:00.653863 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kfqgt_f2555861-d1bb-4f21-be4a-165ed9212932/kube-multus/3.log"
Jan 22 10:17:00 crc kubenswrapper[4811]: I0122 10:17:00.678212 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-bhj4l_de4b38a0-0c7a-4693-9f92-40fefd6bc9b4/network-metrics-daemon/0.log"
Jan 22 10:17:00 crc kubenswrapper[4811]: I0122 10:17:00.683412 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-bhj4l_de4b38a0-0c7a-4693-9f92-40fefd6bc9b4/kube-rbac-proxy/0.log"
Jan 22 10:17:05 crc kubenswrapper[4811]: I0122 10:17:05.501705 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 10:17:05 crc kubenswrapper[4811]: I0122 10:17:05.502026 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 10:17:05 crc kubenswrapper[4811]: I0122 10:17:05.502063 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-txvcq"
Jan 22 10:17:05 crc kubenswrapper[4811]: I0122 10:17:05.502446 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"538ba6db70fd8847ece098eb9e97c7fe7aec7e85ea8714fabcb34e1d372c3323"} pod="openshift-machine-config-operator/machine-config-daemon-txvcq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 22 10:17:05 crc kubenswrapper[4811]: I0122 10:17:05.502486 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" containerID="cri-o://538ba6db70fd8847ece098eb9e97c7fe7aec7e85ea8714fabcb34e1d372c3323" gracePeriod=600
Jan 22 10:17:05 crc kubenswrapper[4811]: E0122 10:17:05.620445 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe"
Jan 22 10:17:05 crc kubenswrapper[4811]: I0122 10:17:05.888575 4811 generic.go:334] "Generic (PLEG): container finished" podID="84068a6b-e189-419b-87f5-f31428f6eafe" containerID="538ba6db70fd8847ece098eb9e97c7fe7aec7e85ea8714fabcb34e1d372c3323" exitCode=0
Jan 22 10:17:05 crc kubenswrapper[4811]: I0122 10:17:05.888621 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" event={"ID":"84068a6b-e189-419b-87f5-f31428f6eafe","Type":"ContainerDied","Data":"538ba6db70fd8847ece098eb9e97c7fe7aec7e85ea8714fabcb34e1d372c3323"}
Jan 22 10:17:05 crc kubenswrapper[4811]: I0122 10:17:05.888689 4811 scope.go:117] "RemoveContainer" containerID="539809bd9b7bc1eb866c0acd9085f828d6b9e023dbd6aba5bf80b7ae0173d007"
Jan 22 10:17:05 crc kubenswrapper[4811]: I0122 10:17:05.889176 4811 scope.go:117] "RemoveContainer" containerID="538ba6db70fd8847ece098eb9e97c7fe7aec7e85ea8714fabcb34e1d372c3323"
Jan 22 10:17:05 crc kubenswrapper[4811]: E0122 10:17:05.891009 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe"
Jan 22 10:17:19 crc kubenswrapper[4811]: I0122 10:17:19.992442 4811 scope.go:117] "RemoveContainer" containerID="538ba6db70fd8847ece098eb9e97c7fe7aec7e85ea8714fabcb34e1d372c3323"
Jan 22 10:17:19 crc kubenswrapper[4811]: E0122 10:17:19.994461 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe"
Jan 22 10:17:33 crc kubenswrapper[4811]: I0122 10:17:33.991905 4811 scope.go:117] "RemoveContainer" containerID="538ba6db70fd8847ece098eb9e97c7fe7aec7e85ea8714fabcb34e1d372c3323"
Jan 22 10:17:33 crc kubenswrapper[4811]: E0122 10:17:33.993068 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe"
Jan 22 10:17:48 crc kubenswrapper[4811]: I0122 10:17:48.992929 4811 scope.go:117] "RemoveContainer" containerID="538ba6db70fd8847ece098eb9e97c7fe7aec7e85ea8714fabcb34e1d372c3323"
Jan 22 10:17:48 crc kubenswrapper[4811]: E0122 10:17:48.993462 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe"
Jan 22 10:18:02 crc kubenswrapper[4811]: I0122 10:18:02.992455 4811 scope.go:117] "RemoveContainer" containerID="538ba6db70fd8847ece098eb9e97c7fe7aec7e85ea8714fabcb34e1d372c3323"
Jan 22 10:18:02 crc kubenswrapper[4811]: E0122 10:18:02.993010 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe"
Jan 22 10:18:16 crc kubenswrapper[4811]: I0122 10:18:16.991916 4811 scope.go:117] "RemoveContainer" containerID="538ba6db70fd8847ece098eb9e97c7fe7aec7e85ea8714fabcb34e1d372c3323"
Jan 22 10:18:16 crc kubenswrapper[4811]: E0122 10:18:16.992472 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe"
Jan 22 10:18:31 crc kubenswrapper[4811]: I0122 10:18:31.992389 4811 scope.go:117] "RemoveContainer" containerID="538ba6db70fd8847ece098eb9e97c7fe7aec7e85ea8714fabcb34e1d372c3323"
Jan 22 10:18:31 crc kubenswrapper[4811]: E0122 10:18:31.993013 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe"
Jan 22 10:18:45 crc kubenswrapper[4811]: I0122 10:18:45.997646 4811 scope.go:117] "RemoveContainer"
containerID="538ba6db70fd8847ece098eb9e97c7fe7aec7e85ea8714fabcb34e1d372c3323" Jan 22 10:18:45 crc kubenswrapper[4811]: E0122 10:18:45.999192 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 10:18:59 crc kubenswrapper[4811]: I0122 10:18:59.992110 4811 scope.go:117] "RemoveContainer" containerID="538ba6db70fd8847ece098eb9e97c7fe7aec7e85ea8714fabcb34e1d372c3323" Jan 22 10:18:59 crc kubenswrapper[4811]: E0122 10:18:59.993109 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 10:19:12 crc kubenswrapper[4811]: I0122 10:19:12.991718 4811 scope.go:117] "RemoveContainer" containerID="538ba6db70fd8847ece098eb9e97c7fe7aec7e85ea8714fabcb34e1d372c3323" Jan 22 10:19:12 crc kubenswrapper[4811]: E0122 10:19:12.992245 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 10:19:26 crc kubenswrapper[4811]: I0122 10:19:26.992376 4811 scope.go:117] "RemoveContainer" containerID="538ba6db70fd8847ece098eb9e97c7fe7aec7e85ea8714fabcb34e1d372c3323" Jan 22 10:19:26 crc kubenswrapper[4811]: E0122 10:19:26.993756 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 10:19:40 crc kubenswrapper[4811]: I0122 10:19:40.992867 4811 scope.go:117] "RemoveContainer" containerID="538ba6db70fd8847ece098eb9e97c7fe7aec7e85ea8714fabcb34e1d372c3323" Jan 22 10:19:40 crc kubenswrapper[4811]: E0122 10:19:40.993671 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 10:19:53 crc kubenswrapper[4811]: I0122 10:19:53.992579 4811 scope.go:117] "RemoveContainer" containerID="538ba6db70fd8847ece098eb9e97c7fe7aec7e85ea8714fabcb34e1d372c3323" Jan 22 10:19:53 crc kubenswrapper[4811]: E0122 10:19:53.993421 4811 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 10:20:01 crc kubenswrapper[4811]: I0122 10:20:01.798990 4811 scope.go:117] "RemoveContainer" containerID="acdb5fa372c9ef804240f2ba49d5c2c15a5e5ce36d4f818590c9963b6009aaf9" Jan 22 10:20:06 crc kubenswrapper[4811]: I0122 10:20:06.001221 4811 scope.go:117] "RemoveContainer" containerID="538ba6db70fd8847ece098eb9e97c7fe7aec7e85ea8714fabcb34e1d372c3323" Jan 22 10:20:06 crc kubenswrapper[4811]: E0122 10:20:06.001861 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 10:20:16 crc kubenswrapper[4811]: I0122 10:20:16.992479 4811 scope.go:117] "RemoveContainer" containerID="538ba6db70fd8847ece098eb9e97c7fe7aec7e85ea8714fabcb34e1d372c3323" Jan 22 10:20:16 crc kubenswrapper[4811]: E0122 10:20:16.993074 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 10:20:29 crc kubenswrapper[4811]: I0122 10:20:29.991808 4811 scope.go:117] "RemoveContainer" containerID="538ba6db70fd8847ece098eb9e97c7fe7aec7e85ea8714fabcb34e1d372c3323" Jan 22 10:20:29 crc kubenswrapper[4811]: E0122 10:20:29.992364 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 10:20:44 crc kubenswrapper[4811]: I0122 10:20:44.992435 4811 scope.go:117] "RemoveContainer" containerID="538ba6db70fd8847ece098eb9e97c7fe7aec7e85ea8714fabcb34e1d372c3323" Jan 22 10:20:44 crc kubenswrapper[4811]: E0122 10:20:44.993065 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 10:20:57 crc kubenswrapper[4811]: I0122 10:20:57.992014 4811 scope.go:117] "RemoveContainer" containerID="538ba6db70fd8847ece098eb9e97c7fe7aec7e85ea8714fabcb34e1d372c3323" Jan 22 10:20:57 crc kubenswrapper[4811]: E0122 10:20:57.993982 4811 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 10:21:09 crc kubenswrapper[4811]: I0122 10:21:09.992812 4811 scope.go:117] "RemoveContainer" containerID="538ba6db70fd8847ece098eb9e97c7fe7aec7e85ea8714fabcb34e1d372c3323" Jan 22 10:21:09 crc kubenswrapper[4811]: E0122 10:21:09.993486 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 10:21:24 crc kubenswrapper[4811]: I0122 10:21:24.992182 4811 scope.go:117] "RemoveContainer" containerID="538ba6db70fd8847ece098eb9e97c7fe7aec7e85ea8714fabcb34e1d372c3323" Jan 22 10:21:24 crc kubenswrapper[4811]: E0122 10:21:24.992755 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 10:21:29 crc kubenswrapper[4811]: I0122 10:21:29.150488 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p6tqg"] Jan 22 10:21:29 crc kubenswrapper[4811]: E0122 10:21:29.151180 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e" containerName="registry-server" Jan 22 10:21:29 crc kubenswrapper[4811]: I0122 10:21:29.151194 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e" containerName="registry-server" Jan 22 10:21:29 crc kubenswrapper[4811]: E0122 10:21:29.151209 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e" containerName="extract-utilities" Jan 22 10:21:29 crc kubenswrapper[4811]: I0122 10:21:29.151215 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e" containerName="extract-utilities" Jan 22 10:21:29 crc kubenswrapper[4811]: E0122 10:21:29.151228 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be4c5e24-9f96-4205-9484-8ae0c6ff3842" containerName="extract-content" Jan 22 10:21:29 crc kubenswrapper[4811]: I0122 10:21:29.151234 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="be4c5e24-9f96-4205-9484-8ae0c6ff3842" containerName="extract-content" Jan 22 10:21:29 crc kubenswrapper[4811]: E0122 10:21:29.151252 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be4c5e24-9f96-4205-9484-8ae0c6ff3842" containerName="extract-utilities" Jan 22 10:21:29 crc kubenswrapper[4811]: I0122 10:21:29.151257 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="be4c5e24-9f96-4205-9484-8ae0c6ff3842" containerName="extract-utilities" Jan 22 10:21:29 crc kubenswrapper[4811]: E0122 10:21:29.151270 4811 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e" containerName="extract-content" Jan 22 10:21:29 crc kubenswrapper[4811]: I0122 10:21:29.151275 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e" containerName="extract-content" Jan 22 10:21:29 crc kubenswrapper[4811]: E0122 10:21:29.151288 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be4c5e24-9f96-4205-9484-8ae0c6ff3842" containerName="registry-server" Jan 22 10:21:29 crc kubenswrapper[4811]: I0122 10:21:29.151293 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="be4c5e24-9f96-4205-9484-8ae0c6ff3842" containerName="registry-server" Jan 22 10:21:29 crc kubenswrapper[4811]: I0122 10:21:29.151468 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2b5489c-1fbe-47ba-a7f9-8b96240d3c8e" containerName="registry-server" Jan 22 10:21:29 crc kubenswrapper[4811]: I0122 10:21:29.151483 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="be4c5e24-9f96-4205-9484-8ae0c6ff3842" containerName="registry-server" Jan 22 10:21:29 crc kubenswrapper[4811]: I0122 10:21:29.153597 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p6tqg" Jan 22 10:21:29 crc kubenswrapper[4811]: I0122 10:21:29.161055 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p6tqg"] Jan 22 10:21:29 crc kubenswrapper[4811]: I0122 10:21:29.171543 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/975e0ea5-214c-4e39-9355-10a38f8fbfaa-utilities\") pod \"community-operators-p6tqg\" (UID: \"975e0ea5-214c-4e39-9355-10a38f8fbfaa\") " pod="openshift-marketplace/community-operators-p6tqg" Jan 22 10:21:29 crc kubenswrapper[4811]: I0122 10:21:29.171601 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/975e0ea5-214c-4e39-9355-10a38f8fbfaa-catalog-content\") pod \"community-operators-p6tqg\" (UID: \"975e0ea5-214c-4e39-9355-10a38f8fbfaa\") " pod="openshift-marketplace/community-operators-p6tqg" Jan 22 10:21:29 crc kubenswrapper[4811]: I0122 10:21:29.171667 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wrmj\" (UniqueName: \"kubernetes.io/projected/975e0ea5-214c-4e39-9355-10a38f8fbfaa-kube-api-access-9wrmj\") pod \"community-operators-p6tqg\" (UID: \"975e0ea5-214c-4e39-9355-10a38f8fbfaa\") " pod="openshift-marketplace/community-operators-p6tqg" Jan 22 10:21:29 crc kubenswrapper[4811]: I0122 10:21:29.273253 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wrmj\" (UniqueName: \"kubernetes.io/projected/975e0ea5-214c-4e39-9355-10a38f8fbfaa-kube-api-access-9wrmj\") pod \"community-operators-p6tqg\" (UID: \"975e0ea5-214c-4e39-9355-10a38f8fbfaa\") " pod="openshift-marketplace/community-operators-p6tqg" Jan 22 10:21:29 crc kubenswrapper[4811]: I0122 10:21:29.273473 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/975e0ea5-214c-4e39-9355-10a38f8fbfaa-utilities\") pod \"community-operators-p6tqg\" (UID: \"975e0ea5-214c-4e39-9355-10a38f8fbfaa\") " pod="openshift-marketplace/community-operators-p6tqg" Jan 22 
10:21:29 crc kubenswrapper[4811]: I0122 10:21:29.273504 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/975e0ea5-214c-4e39-9355-10a38f8fbfaa-catalog-content\") pod \"community-operators-p6tqg\" (UID: \"975e0ea5-214c-4e39-9355-10a38f8fbfaa\") " pod="openshift-marketplace/community-operators-p6tqg" Jan 22 10:21:29 crc kubenswrapper[4811]: I0122 10:21:29.273921 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/975e0ea5-214c-4e39-9355-10a38f8fbfaa-catalog-content\") pod \"community-operators-p6tqg\" (UID: \"975e0ea5-214c-4e39-9355-10a38f8fbfaa\") " pod="openshift-marketplace/community-operators-p6tqg" Jan 22 10:21:29 crc kubenswrapper[4811]: I0122 10:21:29.274011 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/975e0ea5-214c-4e39-9355-10a38f8fbfaa-utilities\") pod \"community-operators-p6tqg\" (UID: \"975e0ea5-214c-4e39-9355-10a38f8fbfaa\") " pod="openshift-marketplace/community-operators-p6tqg" Jan 22 10:21:29 crc kubenswrapper[4811]: I0122 10:21:29.289733 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wrmj\" (UniqueName: \"kubernetes.io/projected/975e0ea5-214c-4e39-9355-10a38f8fbfaa-kube-api-access-9wrmj\") pod \"community-operators-p6tqg\" (UID: \"975e0ea5-214c-4e39-9355-10a38f8fbfaa\") " pod="openshift-marketplace/community-operators-p6tqg" Jan 22 10:21:29 crc kubenswrapper[4811]: I0122 10:21:29.468168 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p6tqg" Jan 22 10:21:29 crc kubenswrapper[4811]: I0122 10:21:29.924601 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p6tqg"] Jan 22 10:21:30 crc kubenswrapper[4811]: I0122 10:21:30.075468 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6tqg" event={"ID":"975e0ea5-214c-4e39-9355-10a38f8fbfaa","Type":"ContainerStarted","Data":"7691639bb55535d2d2a66ee0c612186ffc12daa27de987d1ecf2843aab85a1df"} Jan 22 10:21:31 crc kubenswrapper[4811]: I0122 10:21:31.051281 4811 generic.go:334] "Generic (PLEG): container finished" podID="975e0ea5-214c-4e39-9355-10a38f8fbfaa" containerID="d92ae4a9374b985956e303e36d0b9fdf72e284915fff4e5f3e1b371b11afd4cc" exitCode=0 Jan 22 10:21:31 crc kubenswrapper[4811]: I0122 10:21:31.052505 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6tqg" event={"ID":"975e0ea5-214c-4e39-9355-10a38f8fbfaa","Type":"ContainerDied","Data":"d92ae4a9374b985956e303e36d0b9fdf72e284915fff4e5f3e1b371b11afd4cc"} Jan 22 10:21:31 crc kubenswrapper[4811]: I0122 10:21:31.053255 4811 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 10:21:32 crc kubenswrapper[4811]: I0122 10:21:32.060688 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6tqg" event={"ID":"975e0ea5-214c-4e39-9355-10a38f8fbfaa","Type":"ContainerStarted","Data":"d0127dd6a300ae6d805443c378069c7811dc381b6f51aac63229e737f08ae198"} Jan 22 10:21:33 crc kubenswrapper[4811]: I0122 10:21:33.069874 4811 generic.go:334] "Generic (PLEG): container finished" podID="975e0ea5-214c-4e39-9355-10a38f8fbfaa" 
containerID="d0127dd6a300ae6d805443c378069c7811dc381b6f51aac63229e737f08ae198" exitCode=0 Jan 22 10:21:33 crc kubenswrapper[4811]: I0122 10:21:33.069909 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6tqg" event={"ID":"975e0ea5-214c-4e39-9355-10a38f8fbfaa","Type":"ContainerDied","Data":"d0127dd6a300ae6d805443c378069c7811dc381b6f51aac63229e737f08ae198"} Jan 22 10:21:34 crc kubenswrapper[4811]: I0122 10:21:34.079678 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6tqg" event={"ID":"975e0ea5-214c-4e39-9355-10a38f8fbfaa","Type":"ContainerStarted","Data":"020073bccf2a0fcbbf8ccfb275649be80f4ccbffd13c9d0c923525139cc116cf"} Jan 22 10:21:34 crc kubenswrapper[4811]: I0122 10:21:34.098125 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p6tqg" podStartSLOduration=2.582850997 podStartE2EDuration="5.098104384s" podCreationTimestamp="2026-01-22 10:21:29 +0000 UTC" firstStartedPulling="2026-01-22 10:21:31.053054097 +0000 UTC m=+4535.375241219" lastFinishedPulling="2026-01-22 10:21:33.568307483 +0000 UTC m=+4537.890494606" observedRunningTime="2026-01-22 10:21:34.092898116 +0000 UTC m=+4538.415085239" watchObservedRunningTime="2026-01-22 10:21:34.098104384 +0000 UTC m=+4538.420291507" Jan 22 10:21:36 crc kubenswrapper[4811]: I0122 10:21:36.993177 4811 scope.go:117] "RemoveContainer" containerID="538ba6db70fd8847ece098eb9e97c7fe7aec7e85ea8714fabcb34e1d372c3323" Jan 22 10:21:36 crc kubenswrapper[4811]: E0122 10:21:36.993686 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 10:21:39 crc kubenswrapper[4811]: I0122 10:21:39.468473 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p6tqg" Jan 22 10:21:39 crc kubenswrapper[4811]: I0122 10:21:39.468739 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p6tqg" Jan 22 10:21:39 crc kubenswrapper[4811]: I0122 10:21:39.510754 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p6tqg" Jan 22 10:21:40 crc kubenswrapper[4811]: I0122 10:21:40.154314 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p6tqg" Jan 22 10:21:40 crc kubenswrapper[4811]: I0122 10:21:40.944094 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p6tqg"] Jan 22 10:21:42 crc kubenswrapper[4811]: I0122 10:21:42.137453 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p6tqg" podUID="975e0ea5-214c-4e39-9355-10a38f8fbfaa" containerName="registry-server" containerID="cri-o://020073bccf2a0fcbbf8ccfb275649be80f4ccbffd13c9d0c923525139cc116cf" gracePeriod=2 Jan 22 10:21:43 crc kubenswrapper[4811]: I0122 10:21:43.020997 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p6tqg" Jan 22 10:21:43 crc kubenswrapper[4811]: I0122 10:21:43.140236 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/975e0ea5-214c-4e39-9355-10a38f8fbfaa-catalog-content\") pod \"975e0ea5-214c-4e39-9355-10a38f8fbfaa\" (UID: \"975e0ea5-214c-4e39-9355-10a38f8fbfaa\") " Jan 22 10:21:43 crc kubenswrapper[4811]: I0122 10:21:43.140538 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wrmj\" (UniqueName: \"kubernetes.io/projected/975e0ea5-214c-4e39-9355-10a38f8fbfaa-kube-api-access-9wrmj\") pod \"975e0ea5-214c-4e39-9355-10a38f8fbfaa\" (UID: \"975e0ea5-214c-4e39-9355-10a38f8fbfaa\") " Jan 22 10:21:43 crc kubenswrapper[4811]: I0122 10:21:43.140616 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/975e0ea5-214c-4e39-9355-10a38f8fbfaa-utilities\") pod \"975e0ea5-214c-4e39-9355-10a38f8fbfaa\" (UID: \"975e0ea5-214c-4e39-9355-10a38f8fbfaa\") " Jan 22 10:21:43 crc kubenswrapper[4811]: I0122 10:21:43.141693 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/975e0ea5-214c-4e39-9355-10a38f8fbfaa-utilities" (OuterVolumeSpecName: "utilities") pod "975e0ea5-214c-4e39-9355-10a38f8fbfaa" (UID: "975e0ea5-214c-4e39-9355-10a38f8fbfaa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:21:43 crc kubenswrapper[4811]: I0122 10:21:43.154570 4811 generic.go:334] "Generic (PLEG): container finished" podID="975e0ea5-214c-4e39-9355-10a38f8fbfaa" containerID="020073bccf2a0fcbbf8ccfb275649be80f4ccbffd13c9d0c923525139cc116cf" exitCode=0 Jan 22 10:21:43 crc kubenswrapper[4811]: I0122 10:21:43.154681 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6tqg" event={"ID":"975e0ea5-214c-4e39-9355-10a38f8fbfaa","Type":"ContainerDied","Data":"020073bccf2a0fcbbf8ccfb275649be80f4ccbffd13c9d0c923525139cc116cf"} Jan 22 10:21:43 crc kubenswrapper[4811]: I0122 10:21:43.154743 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6tqg" event={"ID":"975e0ea5-214c-4e39-9355-10a38f8fbfaa","Type":"ContainerDied","Data":"7691639bb55535d2d2a66ee0c612186ffc12daa27de987d1ecf2843aab85a1df"} Jan 22 10:21:43 crc kubenswrapper[4811]: I0122 10:21:43.154771 4811 scope.go:117] "RemoveContainer" containerID="020073bccf2a0fcbbf8ccfb275649be80f4ccbffd13c9d0c923525139cc116cf" Jan 22 10:21:43 crc kubenswrapper[4811]: I0122 10:21:43.155005 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p6tqg" Jan 22 10:21:43 crc kubenswrapper[4811]: I0122 10:21:43.159911 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/975e0ea5-214c-4e39-9355-10a38f8fbfaa-kube-api-access-9wrmj" (OuterVolumeSpecName: "kube-api-access-9wrmj") pod "975e0ea5-214c-4e39-9355-10a38f8fbfaa" (UID: "975e0ea5-214c-4e39-9355-10a38f8fbfaa"). InnerVolumeSpecName "kube-api-access-9wrmj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:21:43 crc kubenswrapper[4811]: I0122 10:21:43.188180 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/975e0ea5-214c-4e39-9355-10a38f8fbfaa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "975e0ea5-214c-4e39-9355-10a38f8fbfaa" (UID: "975e0ea5-214c-4e39-9355-10a38f8fbfaa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:21:43 crc kubenswrapper[4811]: I0122 10:21:43.214371 4811 scope.go:117] "RemoveContainer" containerID="d0127dd6a300ae6d805443c378069c7811dc381b6f51aac63229e737f08ae198" Jan 22 10:21:43 crc kubenswrapper[4811]: I0122 10:21:43.237484 4811 scope.go:117] "RemoveContainer" containerID="d92ae4a9374b985956e303e36d0b9fdf72e284915fff4e5f3e1b371b11afd4cc" Jan 22 10:21:43 crc kubenswrapper[4811]: I0122 10:21:43.243822 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wrmj\" (UniqueName: \"kubernetes.io/projected/975e0ea5-214c-4e39-9355-10a38f8fbfaa-kube-api-access-9wrmj\") on node \"crc\" DevicePath \"\"" Jan 22 10:21:43 crc kubenswrapper[4811]: I0122 10:21:43.243852 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/975e0ea5-214c-4e39-9355-10a38f8fbfaa-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:21:43 crc kubenswrapper[4811]: I0122 10:21:43.243863 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/975e0ea5-214c-4e39-9355-10a38f8fbfaa-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:21:43 crc kubenswrapper[4811]: I0122 10:21:43.263870 4811 scope.go:117] "RemoveContainer" containerID="020073bccf2a0fcbbf8ccfb275649be80f4ccbffd13c9d0c923525139cc116cf" Jan 22 10:21:43 crc kubenswrapper[4811]: E0122 10:21:43.264302 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"020073bccf2a0fcbbf8ccfb275649be80f4ccbffd13c9d0c923525139cc116cf\": container with ID starting with 020073bccf2a0fcbbf8ccfb275649be80f4ccbffd13c9d0c923525139cc116cf not found: ID does not exist" containerID="020073bccf2a0fcbbf8ccfb275649be80f4ccbffd13c9d0c923525139cc116cf" Jan 22 10:21:43 crc kubenswrapper[4811]: I0122 10:21:43.264350 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"020073bccf2a0fcbbf8ccfb275649be80f4ccbffd13c9d0c923525139cc116cf"} err="failed to get container status \"020073bccf2a0fcbbf8ccfb275649be80f4ccbffd13c9d0c923525139cc116cf\": rpc error: code = NotFound desc = could not find container \"020073bccf2a0fcbbf8ccfb275649be80f4ccbffd13c9d0c923525139cc116cf\": container with ID starting with 020073bccf2a0fcbbf8ccfb275649be80f4ccbffd13c9d0c923525139cc116cf not found: ID does not exist" Jan 22 10:21:43 crc kubenswrapper[4811]: I0122 10:21:43.264382 4811 scope.go:117] "RemoveContainer" containerID="d0127dd6a300ae6d805443c378069c7811dc381b6f51aac63229e737f08ae198" Jan 22 10:21:43 crc kubenswrapper[4811]: E0122 10:21:43.264700 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0127dd6a300ae6d805443c378069c7811dc381b6f51aac63229e737f08ae198\": container with ID starting with d0127dd6a300ae6d805443c378069c7811dc381b6f51aac63229e737f08ae198 not found: ID does not exist" containerID="d0127dd6a300ae6d805443c378069c7811dc381b6f51aac63229e737f08ae198" Jan 
22 10:21:43 crc kubenswrapper[4811]: I0122 10:21:43.264728 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0127dd6a300ae6d805443c378069c7811dc381b6f51aac63229e737f08ae198"} err="failed to get container status \"d0127dd6a300ae6d805443c378069c7811dc381b6f51aac63229e737f08ae198\": rpc error: code = NotFound desc = could not find container \"d0127dd6a300ae6d805443c378069c7811dc381b6f51aac63229e737f08ae198\": container with ID starting with d0127dd6a300ae6d805443c378069c7811dc381b6f51aac63229e737f08ae198 not found: ID does not exist" Jan 22 10:21:43 crc kubenswrapper[4811]: I0122 10:21:43.264743 4811 scope.go:117] "RemoveContainer" containerID="d92ae4a9374b985956e303e36d0b9fdf72e284915fff4e5f3e1b371b11afd4cc" Jan 22 10:21:43 crc kubenswrapper[4811]: E0122 10:21:43.265188 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d92ae4a9374b985956e303e36d0b9fdf72e284915fff4e5f3e1b371b11afd4cc\": container with ID starting with d92ae4a9374b985956e303e36d0b9fdf72e284915fff4e5f3e1b371b11afd4cc not found: ID does not exist" containerID="d92ae4a9374b985956e303e36d0b9fdf72e284915fff4e5f3e1b371b11afd4cc" Jan 22 10:21:43 crc kubenswrapper[4811]: I0122 10:21:43.265221 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d92ae4a9374b985956e303e36d0b9fdf72e284915fff4e5f3e1b371b11afd4cc"} err="failed to get container status \"d92ae4a9374b985956e303e36d0b9fdf72e284915fff4e5f3e1b371b11afd4cc\": rpc error: code = NotFound desc = could not find container \"d92ae4a9374b985956e303e36d0b9fdf72e284915fff4e5f3e1b371b11afd4cc\": container with ID starting with d92ae4a9374b985956e303e36d0b9fdf72e284915fff4e5f3e1b371b11afd4cc not found: ID does not exist" Jan 22 10:21:43 crc kubenswrapper[4811]: I0122 10:21:43.497378 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p6tqg"] Jan 22 10:21:43 crc kubenswrapper[4811]: I0122 10:21:43.503340 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p6tqg"] Jan 22 10:21:44 crc kubenswrapper[4811]: I0122 10:21:44.004761 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="975e0ea5-214c-4e39-9355-10a38f8fbfaa" path="/var/lib/kubelet/pods/975e0ea5-214c-4e39-9355-10a38f8fbfaa/volumes" Jan 22 10:21:48 crc kubenswrapper[4811]: I0122 10:21:48.991612 4811 scope.go:117] "RemoveContainer" containerID="538ba6db70fd8847ece098eb9e97c7fe7aec7e85ea8714fabcb34e1d372c3323" Jan 22 10:21:48 crc kubenswrapper[4811]: E0122 10:21:48.992945 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 10:22:03 crc kubenswrapper[4811]: I0122 10:22:03.991952 4811 scope.go:117] "RemoveContainer" containerID="538ba6db70fd8847ece098eb9e97c7fe7aec7e85ea8714fabcb34e1d372c3323" Jan 22 10:22:03 crc kubenswrapper[4811]: E0122 10:22:03.992485 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-txvcq_openshift-machine-config-operator(84068a6b-e189-419b-87f5-f31428f6eafe)\"" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" Jan 22 10:22:18 crc kubenswrapper[4811]: I0122 10:22:18.992535 4811 scope.go:117] "RemoveContainer" containerID="538ba6db70fd8847ece098eb9e97c7fe7aec7e85ea8714fabcb34e1d372c3323" Jan 22 10:22:19 crc kubenswrapper[4811]: I0122 10:22:19.402954 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" event={"ID":"84068a6b-e189-419b-87f5-f31428f6eafe","Type":"ContainerStarted","Data":"5eaae3f44e64e292135c7792057530f50afa70f95d86e85ac2d4c2ac5ec52088"} Jan 22 10:24:33 crc kubenswrapper[4811]: I0122 10:24:33.622366 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j29z4"] Jan 22 10:24:33 crc kubenswrapper[4811]: E0122 10:24:33.622985 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="975e0ea5-214c-4e39-9355-10a38f8fbfaa" containerName="registry-server" Jan 22 10:24:33 crc kubenswrapper[4811]: I0122 10:24:33.622997 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="975e0ea5-214c-4e39-9355-10a38f8fbfaa" containerName="registry-server" Jan 22 10:24:33 crc kubenswrapper[4811]: E0122 10:24:33.623013 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="975e0ea5-214c-4e39-9355-10a38f8fbfaa" containerName="extract-utilities" Jan 22 10:24:33 crc kubenswrapper[4811]: I0122 10:24:33.623018 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="975e0ea5-214c-4e39-9355-10a38f8fbfaa" containerName="extract-utilities" Jan 22 10:24:33 crc kubenswrapper[4811]: E0122 10:24:33.623033 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="975e0ea5-214c-4e39-9355-10a38f8fbfaa" containerName="extract-content" Jan 22 10:24:33 crc kubenswrapper[4811]: I0122 10:24:33.623039 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="975e0ea5-214c-4e39-9355-10a38f8fbfaa" containerName="extract-content" Jan 22 10:24:33 crc kubenswrapper[4811]: I0122 10:24:33.623189 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="975e0ea5-214c-4e39-9355-10a38f8fbfaa" containerName="registry-server" Jan 22 10:24:33 crc kubenswrapper[4811]: I0122 10:24:33.624275 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j29z4" Jan 22 10:24:33 crc kubenswrapper[4811]: I0122 10:24:33.684246 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j29z4"] Jan 22 10:24:33 crc kubenswrapper[4811]: I0122 10:24:33.706107 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnswq\" (UniqueName: \"kubernetes.io/projected/6444dcfa-080b-4910-b77e-bd1c6ce0353a-kube-api-access-rnswq\") pod \"redhat-marketplace-j29z4\" (UID: \"6444dcfa-080b-4910-b77e-bd1c6ce0353a\") " pod="openshift-marketplace/redhat-marketplace-j29z4" Jan 22 10:24:33 crc kubenswrapper[4811]: I0122 10:24:33.706177 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6444dcfa-080b-4910-b77e-bd1c6ce0353a-utilities\") pod \"redhat-marketplace-j29z4\" (UID: \"6444dcfa-080b-4910-b77e-bd1c6ce0353a\") " pod="openshift-marketplace/redhat-marketplace-j29z4" Jan 22 10:24:33 crc kubenswrapper[4811]: I0122 10:24:33.706242 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6444dcfa-080b-4910-b77e-bd1c6ce0353a-catalog-content\") pod \"redhat-marketplace-j29z4\" (UID: \"6444dcfa-080b-4910-b77e-bd1c6ce0353a\") " pod="openshift-marketplace/redhat-marketplace-j29z4" Jan 22 10:24:33 crc kubenswrapper[4811]: I0122 10:24:33.807845 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6444dcfa-080b-4910-b77e-bd1c6ce0353a-catalog-content\") pod \"redhat-marketplace-j29z4\" (UID: \"6444dcfa-080b-4910-b77e-bd1c6ce0353a\") " pod="openshift-marketplace/redhat-marketplace-j29z4" Jan 22 10:24:33 crc kubenswrapper[4811]: I0122 10:24:33.808016 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnswq\" (UniqueName: \"kubernetes.io/projected/6444dcfa-080b-4910-b77e-bd1c6ce0353a-kube-api-access-rnswq\") pod \"redhat-marketplace-j29z4\" (UID: \"6444dcfa-080b-4910-b77e-bd1c6ce0353a\") " pod="openshift-marketplace/redhat-marketplace-j29z4" Jan 22 10:24:33 crc kubenswrapper[4811]: I0122 10:24:33.808070 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6444dcfa-080b-4910-b77e-bd1c6ce0353a-utilities\") pod \"redhat-marketplace-j29z4\" (UID: \"6444dcfa-080b-4910-b77e-bd1c6ce0353a\") " pod="openshift-marketplace/redhat-marketplace-j29z4" Jan 22 10:24:33 crc kubenswrapper[4811]: I0122 10:24:33.808315 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6444dcfa-080b-4910-b77e-bd1c6ce0353a-catalog-content\") pod \"redhat-marketplace-j29z4\" (UID: \"6444dcfa-080b-4910-b77e-bd1c6ce0353a\") " pod="openshift-marketplace/redhat-marketplace-j29z4" Jan 22 10:24:33 crc kubenswrapper[4811]: I0122 10:24:33.808446 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6444dcfa-080b-4910-b77e-bd1c6ce0353a-utilities\") pod \"redhat-marketplace-j29z4\" (UID: \"6444dcfa-080b-4910-b77e-bd1c6ce0353a\") " pod="openshift-marketplace/redhat-marketplace-j29z4" Jan 22 10:24:33 crc kubenswrapper[4811]: I0122 10:24:33.825831 4811 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-rnswq\" (UniqueName: \"kubernetes.io/projected/6444dcfa-080b-4910-b77e-bd1c6ce0353a-kube-api-access-rnswq\") pod \"redhat-marketplace-j29z4\" (UID: \"6444dcfa-080b-4910-b77e-bd1c6ce0353a\") " pod="openshift-marketplace/redhat-marketplace-j29z4" Jan 22 10:24:33 crc kubenswrapper[4811]: I0122 10:24:33.944397 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j29z4" Jan 22 10:24:34 crc kubenswrapper[4811]: I0122 10:24:34.400814 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j29z4"] Jan 22 10:24:35 crc kubenswrapper[4811]: I0122 10:24:35.305004 4811 generic.go:334] "Generic (PLEG): container finished" podID="6444dcfa-080b-4910-b77e-bd1c6ce0353a" containerID="f0fbcb21af6498b0dc25ddff6e472bd99279522f7267861e2e49463676231c45" exitCode=0 Jan 22 10:24:35 crc kubenswrapper[4811]: I0122 10:24:35.305099 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j29z4" event={"ID":"6444dcfa-080b-4910-b77e-bd1c6ce0353a","Type":"ContainerDied","Data":"f0fbcb21af6498b0dc25ddff6e472bd99279522f7267861e2e49463676231c45"} Jan 22 10:24:35 crc kubenswrapper[4811]: I0122 10:24:35.305959 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j29z4" event={"ID":"6444dcfa-080b-4910-b77e-bd1c6ce0353a","Type":"ContainerStarted","Data":"e3ed037b6adb95088131f5e1cf18749c455c2b7c7f74435c3b8e2e50ef154bf7"} Jan 22 10:24:35 crc kubenswrapper[4811]: I0122 10:24:35.501116 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:24:35 crc kubenswrapper[4811]: I0122 10:24:35.501175 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:24:36 crc kubenswrapper[4811]: I0122 10:24:36.312903 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j29z4" event={"ID":"6444dcfa-080b-4910-b77e-bd1c6ce0353a","Type":"ContainerStarted","Data":"79c47f05b8c9cbc79774997664dc2aac2998c26705a09cd865e5b514a5878505"} Jan 22 10:24:37 crc kubenswrapper[4811]: I0122 10:24:37.319559 4811 generic.go:334] "Generic (PLEG): container finished" podID="6444dcfa-080b-4910-b77e-bd1c6ce0353a" containerID="79c47f05b8c9cbc79774997664dc2aac2998c26705a09cd865e5b514a5878505" exitCode=0 Jan 22 10:24:37 crc kubenswrapper[4811]: I0122 10:24:37.319745 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j29z4" event={"ID":"6444dcfa-080b-4910-b77e-bd1c6ce0353a","Type":"ContainerDied","Data":"79c47f05b8c9cbc79774997664dc2aac2998c26705a09cd865e5b514a5878505"} Jan 22 10:24:38 crc kubenswrapper[4811]: I0122 10:24:38.328000 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j29z4" event={"ID":"6444dcfa-080b-4910-b77e-bd1c6ce0353a","Type":"ContainerStarted","Data":"4a5c01fd7bd85da7fb67dcd335b2154dc50dcf39d46fa072c65bd0854c508c17"} Jan 22 10:24:38 crc 
kubenswrapper[4811]: I0122 10:24:38.341482 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j29z4" podStartSLOduration=2.871526851 podStartE2EDuration="5.341470966s" podCreationTimestamp="2026-01-22 10:24:33 +0000 UTC" firstStartedPulling="2026-01-22 10:24:35.306695687 +0000 UTC m=+4719.628882811" lastFinishedPulling="2026-01-22 10:24:37.776639803 +0000 UTC m=+4722.098826926" observedRunningTime="2026-01-22 10:24:38.339541789 +0000 UTC m=+4722.661728912" watchObservedRunningTime="2026-01-22 10:24:38.341470966 +0000 UTC m=+4722.663658090" Jan 22 10:24:43 crc kubenswrapper[4811]: I0122 10:24:43.945119 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j29z4" Jan 22 10:24:43 crc kubenswrapper[4811]: I0122 10:24:43.945533 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j29z4" Jan 22 10:24:43 crc kubenswrapper[4811]: I0122 10:24:43.977486 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j29z4" Jan 22 10:24:44 crc kubenswrapper[4811]: I0122 10:24:44.396462 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j29z4" Jan 22 10:24:44 crc kubenswrapper[4811]: I0122 10:24:44.435815 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j29z4"] Jan 22 10:24:46 crc kubenswrapper[4811]: I0122 10:24:46.376218 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j29z4" podUID="6444dcfa-080b-4910-b77e-bd1c6ce0353a" containerName="registry-server" containerID="cri-o://4a5c01fd7bd85da7fb67dcd335b2154dc50dcf39d46fa072c65bd0854c508c17" gracePeriod=2 Jan 22 10:24:46 crc kubenswrapper[4811]: I0122 10:24:46.889417 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j29z4" Jan 22 10:24:47 crc kubenswrapper[4811]: I0122 10:24:47.018648 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnswq\" (UniqueName: \"kubernetes.io/projected/6444dcfa-080b-4910-b77e-bd1c6ce0353a-kube-api-access-rnswq\") pod \"6444dcfa-080b-4910-b77e-bd1c6ce0353a\" (UID: \"6444dcfa-080b-4910-b77e-bd1c6ce0353a\") " Jan 22 10:24:47 crc kubenswrapper[4811]: I0122 10:24:47.018925 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6444dcfa-080b-4910-b77e-bd1c6ce0353a-utilities\") pod \"6444dcfa-080b-4910-b77e-bd1c6ce0353a\" (UID: \"6444dcfa-080b-4910-b77e-bd1c6ce0353a\") " Jan 22 10:24:47 crc kubenswrapper[4811]: I0122 10:24:47.019069 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6444dcfa-080b-4910-b77e-bd1c6ce0353a-catalog-content\") pod \"6444dcfa-080b-4910-b77e-bd1c6ce0353a\" (UID: \"6444dcfa-080b-4910-b77e-bd1c6ce0353a\") " Jan 22 10:24:47 crc kubenswrapper[4811]: I0122 10:24:47.019734 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6444dcfa-080b-4910-b77e-bd1c6ce0353a-utilities" (OuterVolumeSpecName: "utilities") pod "6444dcfa-080b-4910-b77e-bd1c6ce0353a" (UID: "6444dcfa-080b-4910-b77e-bd1c6ce0353a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:24:47 crc kubenswrapper[4811]: I0122 10:24:47.024131 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6444dcfa-080b-4910-b77e-bd1c6ce0353a-kube-api-access-rnswq" (OuterVolumeSpecName: "kube-api-access-rnswq") pod "6444dcfa-080b-4910-b77e-bd1c6ce0353a" (UID: "6444dcfa-080b-4910-b77e-bd1c6ce0353a"). InnerVolumeSpecName "kube-api-access-rnswq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:24:47 crc kubenswrapper[4811]: I0122 10:24:47.038696 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6444dcfa-080b-4910-b77e-bd1c6ce0353a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6444dcfa-080b-4910-b77e-bd1c6ce0353a" (UID: "6444dcfa-080b-4910-b77e-bd1c6ce0353a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:24:47 crc kubenswrapper[4811]: I0122 10:24:47.121636 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6444dcfa-080b-4910-b77e-bd1c6ce0353a-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:24:47 crc kubenswrapper[4811]: I0122 10:24:47.121660 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6444dcfa-080b-4910-b77e-bd1c6ce0353a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:24:47 crc kubenswrapper[4811]: I0122 10:24:47.121672 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnswq\" (UniqueName: \"kubernetes.io/projected/6444dcfa-080b-4910-b77e-bd1c6ce0353a-kube-api-access-rnswq\") on node \"crc\" DevicePath \"\"" Jan 22 10:24:47 crc kubenswrapper[4811]: I0122 10:24:47.384162 4811 generic.go:334] "Generic (PLEG): container finished" podID="6444dcfa-080b-4910-b77e-bd1c6ce0353a" containerID="4a5c01fd7bd85da7fb67dcd335b2154dc50dcf39d46fa072c65bd0854c508c17" exitCode=0 Jan 22 10:24:47 crc kubenswrapper[4811]: I0122 10:24:47.384206 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j29z4" event={"ID":"6444dcfa-080b-4910-b77e-bd1c6ce0353a","Type":"ContainerDied","Data":"4a5c01fd7bd85da7fb67dcd335b2154dc50dcf39d46fa072c65bd0854c508c17"} Jan 22 10:24:47 crc kubenswrapper[4811]: I0122 10:24:47.384230 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j29z4" event={"ID":"6444dcfa-080b-4910-b77e-bd1c6ce0353a","Type":"ContainerDied","Data":"e3ed037b6adb95088131f5e1cf18749c455c2b7c7f74435c3b8e2e50ef154bf7"} Jan 22 10:24:47 crc kubenswrapper[4811]: I0122 10:24:47.384247 4811 scope.go:117] "RemoveContainer" containerID="4a5c01fd7bd85da7fb67dcd335b2154dc50dcf39d46fa072c65bd0854c508c17" Jan 22 10:24:47 crc kubenswrapper[4811]: I0122 10:24:47.384359 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j29z4" Jan 22 10:24:47 crc kubenswrapper[4811]: I0122 10:24:47.401209 4811 scope.go:117] "RemoveContainer" containerID="79c47f05b8c9cbc79774997664dc2aac2998c26705a09cd865e5b514a5878505" Jan 22 10:24:47 crc kubenswrapper[4811]: I0122 10:24:47.413087 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j29z4"] Jan 22 10:24:47 crc kubenswrapper[4811]: I0122 10:24:47.419738 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j29z4"] Jan 22 10:24:47 crc kubenswrapper[4811]: I0122 10:24:47.434124 4811 scope.go:117] "RemoveContainer" containerID="f0fbcb21af6498b0dc25ddff6e472bd99279522f7267861e2e49463676231c45" Jan 22 10:24:47 crc kubenswrapper[4811]: I0122 10:24:47.458256 4811 scope.go:117] "RemoveContainer" containerID="4a5c01fd7bd85da7fb67dcd335b2154dc50dcf39d46fa072c65bd0854c508c17" Jan 22 10:24:47 crc kubenswrapper[4811]: E0122 10:24:47.458545 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a5c01fd7bd85da7fb67dcd335b2154dc50dcf39d46fa072c65bd0854c508c17\": container with ID starting with 4a5c01fd7bd85da7fb67dcd335b2154dc50dcf39d46fa072c65bd0854c508c17 not found: ID does not exist" containerID="4a5c01fd7bd85da7fb67dcd335b2154dc50dcf39d46fa072c65bd0854c508c17" Jan 22 10:24:47 crc kubenswrapper[4811]: I0122 10:24:47.458572 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a5c01fd7bd85da7fb67dcd335b2154dc50dcf39d46fa072c65bd0854c508c17"} err="failed to get container status \"4a5c01fd7bd85da7fb67dcd335b2154dc50dcf39d46fa072c65bd0854c508c17\": rpc error: code = NotFound desc = could not find container \"4a5c01fd7bd85da7fb67dcd335b2154dc50dcf39d46fa072c65bd0854c508c17\": container with ID starting with 4a5c01fd7bd85da7fb67dcd335b2154dc50dcf39d46fa072c65bd0854c508c17 not found: ID does not exist" Jan 22 10:24:47 crc kubenswrapper[4811]: I0122 10:24:47.458590 4811 scope.go:117] "RemoveContainer" containerID="79c47f05b8c9cbc79774997664dc2aac2998c26705a09cd865e5b514a5878505" Jan 22 10:24:47 crc kubenswrapper[4811]: E0122 10:24:47.458966 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79c47f05b8c9cbc79774997664dc2aac2998c26705a09cd865e5b514a5878505\": container with ID starting with 79c47f05b8c9cbc79774997664dc2aac2998c26705a09cd865e5b514a5878505 not found: ID does not exist" containerID="79c47f05b8c9cbc79774997664dc2aac2998c26705a09cd865e5b514a5878505" Jan 22 10:24:47 crc kubenswrapper[4811]: I0122 10:24:47.458998 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79c47f05b8c9cbc79774997664dc2aac2998c26705a09cd865e5b514a5878505"} err="failed to get container status \"79c47f05b8c9cbc79774997664dc2aac2998c26705a09cd865e5b514a5878505\": rpc error: code = NotFound desc = could not find container \"79c47f05b8c9cbc79774997664dc2aac2998c26705a09cd865e5b514a5878505\": container with ID starting with 79c47f05b8c9cbc79774997664dc2aac2998c26705a09cd865e5b514a5878505 not found: ID does not exist" Jan 22 10:24:47 crc kubenswrapper[4811]: I0122 10:24:47.459037 4811 scope.go:117] "RemoveContainer" containerID="f0fbcb21af6498b0dc25ddff6e472bd99279522f7267861e2e49463676231c45" Jan 22 10:24:47 crc kubenswrapper[4811]: E0122 10:24:47.459321 4811 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f0fbcb21af6498b0dc25ddff6e472bd99279522f7267861e2e49463676231c45\": container with ID starting with f0fbcb21af6498b0dc25ddff6e472bd99279522f7267861e2e49463676231c45 not found: ID does not exist" containerID="f0fbcb21af6498b0dc25ddff6e472bd99279522f7267861e2e49463676231c45" Jan 22 10:24:47 crc kubenswrapper[4811]: I0122 10:24:47.459352 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0fbcb21af6498b0dc25ddff6e472bd99279522f7267861e2e49463676231c45"} err="failed to get container status \"f0fbcb21af6498b0dc25ddff6e472bd99279522f7267861e2e49463676231c45\": rpc error: code = NotFound desc = could not find container \"f0fbcb21af6498b0dc25ddff6e472bd99279522f7267861e2e49463676231c45\": container with ID starting with f0fbcb21af6498b0dc25ddff6e472bd99279522f7267861e2e49463676231c45 not found: ID does not exist" Jan 22 10:24:48 crc kubenswrapper[4811]: I0122 10:24:48.001766 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6444dcfa-080b-4910-b77e-bd1c6ce0353a" path="/var/lib/kubelet/pods/6444dcfa-080b-4910-b77e-bd1c6ce0353a/volumes" Jan 22 10:24:51 crc kubenswrapper[4811]: I0122 10:24:51.411017 4811 generic.go:334] "Generic (PLEG): container finished" podID="3d9af97d-9592-4cf2-bab0-2667139c49a6" containerID="1ba8ce47c4708062574c94c00457f07fbaeac1a0dd462d5e7fd38bb86e51507f" exitCode=0 Jan 22 10:24:51 crc kubenswrapper[4811]: I0122 10:24:51.411333 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-l6gl7/must-gather-vq4qc" event={"ID":"3d9af97d-9592-4cf2-bab0-2667139c49a6","Type":"ContainerDied","Data":"1ba8ce47c4708062574c94c00457f07fbaeac1a0dd462d5e7fd38bb86e51507f"} Jan 22 10:24:51 crc kubenswrapper[4811]: I0122 10:24:51.411824 4811 scope.go:117] "RemoveContainer" containerID="1ba8ce47c4708062574c94c00457f07fbaeac1a0dd462d5e7fd38bb86e51507f" Jan 22 10:24:52 crc kubenswrapper[4811]: I0122 10:24:52.148715 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-l6gl7_must-gather-vq4qc_3d9af97d-9592-4cf2-bab0-2667139c49a6/gather/0.log" Jan 22 10:25:01 crc kubenswrapper[4811]: I0122 10:25:01.743070 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-l6gl7/must-gather-vq4qc"] Jan 22 10:25:01 crc kubenswrapper[4811]: I0122 10:25:01.743763 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-l6gl7/must-gather-vq4qc" podUID="3d9af97d-9592-4cf2-bab0-2667139c49a6" containerName="copy" containerID="cri-o://79603dc68c309d668fb4681e9a7b35416098964fe6b0364660b06727b20661ef" gracePeriod=2 Jan 22 10:25:01 crc kubenswrapper[4811]: I0122 10:25:01.749770 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-l6gl7/must-gather-vq4qc"] Jan 22 10:25:02 crc kubenswrapper[4811]: I0122 10:25:02.235033 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-l6gl7_must-gather-vq4qc_3d9af97d-9592-4cf2-bab0-2667139c49a6/copy/0.log" Jan 22 10:25:02 crc kubenswrapper[4811]: I0122 10:25:02.235611 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-l6gl7/must-gather-vq4qc" Jan 22 10:25:02 crc kubenswrapper[4811]: I0122 10:25:02.349970 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgwwz\" (UniqueName: \"kubernetes.io/projected/3d9af97d-9592-4cf2-bab0-2667139c49a6-kube-api-access-tgwwz\") pod \"3d9af97d-9592-4cf2-bab0-2667139c49a6\" (UID: \"3d9af97d-9592-4cf2-bab0-2667139c49a6\") " Jan 22 10:25:02 crc kubenswrapper[4811]: I0122 10:25:02.350112 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3d9af97d-9592-4cf2-bab0-2667139c49a6-must-gather-output\") pod \"3d9af97d-9592-4cf2-bab0-2667139c49a6\" (UID: \"3d9af97d-9592-4cf2-bab0-2667139c49a6\") " Jan 22 10:25:02 crc kubenswrapper[4811]: I0122 10:25:02.354704 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d9af97d-9592-4cf2-bab0-2667139c49a6-kube-api-access-tgwwz" (OuterVolumeSpecName: "kube-api-access-tgwwz") pod "3d9af97d-9592-4cf2-bab0-2667139c49a6" (UID: "3d9af97d-9592-4cf2-bab0-2667139c49a6"). InnerVolumeSpecName "kube-api-access-tgwwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:02 crc kubenswrapper[4811]: I0122 10:25:02.453104 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgwwz\" (UniqueName: \"kubernetes.io/projected/3d9af97d-9592-4cf2-bab0-2667139c49a6-kube-api-access-tgwwz\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:02 crc kubenswrapper[4811]: I0122 10:25:02.482903 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-l6gl7_must-gather-vq4qc_3d9af97d-9592-4cf2-bab0-2667139c49a6/copy/0.log" Jan 22 10:25:02 crc kubenswrapper[4811]: I0122 10:25:02.483289 4811 generic.go:334] "Generic (PLEG): container finished" podID="3d9af97d-9592-4cf2-bab0-2667139c49a6" containerID="79603dc68c309d668fb4681e9a7b35416098964fe6b0364660b06727b20661ef" exitCode=143 Jan 22 10:25:02 crc kubenswrapper[4811]: I0122 10:25:02.483339 4811 scope.go:117] "RemoveContainer" containerID="79603dc68c309d668fb4681e9a7b35416098964fe6b0364660b06727b20661ef" Jan 22 10:25:02 crc kubenswrapper[4811]: I0122 10:25:02.483448 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-l6gl7/must-gather-vq4qc" Jan 22 10:25:02 crc kubenswrapper[4811]: I0122 10:25:02.498938 4811 scope.go:117] "RemoveContainer" containerID="1ba8ce47c4708062574c94c00457f07fbaeac1a0dd462d5e7fd38bb86e51507f" Jan 22 10:25:02 crc kubenswrapper[4811]: I0122 10:25:02.512605 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d9af97d-9592-4cf2-bab0-2667139c49a6-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "3d9af97d-9592-4cf2-bab0-2667139c49a6" (UID: "3d9af97d-9592-4cf2-bab0-2667139c49a6"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:25:02 crc kubenswrapper[4811]: I0122 10:25:02.529562 4811 scope.go:117] "RemoveContainer" containerID="79603dc68c309d668fb4681e9a7b35416098964fe6b0364660b06727b20661ef" Jan 22 10:25:02 crc kubenswrapper[4811]: E0122 10:25:02.529888 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79603dc68c309d668fb4681e9a7b35416098964fe6b0364660b06727b20661ef\": container with ID starting with 79603dc68c309d668fb4681e9a7b35416098964fe6b0364660b06727b20661ef not found: ID does not exist" containerID="79603dc68c309d668fb4681e9a7b35416098964fe6b0364660b06727b20661ef" Jan 22 10:25:02 crc kubenswrapper[4811]: I0122 10:25:02.529926 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79603dc68c309d668fb4681e9a7b35416098964fe6b0364660b06727b20661ef"} err="failed to get container status \"79603dc68c309d668fb4681e9a7b35416098964fe6b0364660b06727b20661ef\": rpc error: code = NotFound desc = could not find container \"79603dc68c309d668fb4681e9a7b35416098964fe6b0364660b06727b20661ef\": container with ID starting with 79603dc68c309d668fb4681e9a7b35416098964fe6b0364660b06727b20661ef not found: ID does not exist" Jan 22 10:25:02 crc kubenswrapper[4811]: I0122 10:25:02.529949 4811 scope.go:117] "RemoveContainer" containerID="1ba8ce47c4708062574c94c00457f07fbaeac1a0dd462d5e7fd38bb86e51507f" Jan 22 10:25:02 crc kubenswrapper[4811]: E0122 10:25:02.530283 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ba8ce47c4708062574c94c00457f07fbaeac1a0dd462d5e7fd38bb86e51507f\": container with ID starting with 1ba8ce47c4708062574c94c00457f07fbaeac1a0dd462d5e7fd38bb86e51507f not found: ID does not exist" containerID="1ba8ce47c4708062574c94c00457f07fbaeac1a0dd462d5e7fd38bb86e51507f" Jan 22 10:25:02 crc kubenswrapper[4811]: I0122 10:25:02.530311 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ba8ce47c4708062574c94c00457f07fbaeac1a0dd462d5e7fd38bb86e51507f"} err="failed to get container status \"1ba8ce47c4708062574c94c00457f07fbaeac1a0dd462d5e7fd38bb86e51507f\": rpc error: code = NotFound desc = could not find container \"1ba8ce47c4708062574c94c00457f07fbaeac1a0dd462d5e7fd38bb86e51507f\": container with ID starting with 1ba8ce47c4708062574c94c00457f07fbaeac1a0dd462d5e7fd38bb86e51507f not found: ID does not exist" Jan 22 10:25:02 crc kubenswrapper[4811]: I0122 10:25:02.555355 4811 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3d9af97d-9592-4cf2-bab0-2667139c49a6-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:03 crc kubenswrapper[4811]: I0122 10:25:03.999145 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d9af97d-9592-4cf2-bab0-2667139c49a6" path="/var/lib/kubelet/pods/3d9af97d-9592-4cf2-bab0-2667139c49a6/volumes" Jan 22 10:25:05 crc kubenswrapper[4811]: I0122 10:25:05.501157 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:25:05 crc kubenswrapper[4811]: I0122 10:25:05.501201 4811 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:25:31 crc kubenswrapper[4811]: I0122 10:25:31.945116 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zbg9n"] Jan 22 10:25:31 crc kubenswrapper[4811]: E0122 10:25:31.945980 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d9af97d-9592-4cf2-bab0-2667139c49a6" containerName="copy" Jan 22 10:25:31 crc kubenswrapper[4811]: I0122 10:25:31.945991 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d9af97d-9592-4cf2-bab0-2667139c49a6" containerName="copy" Jan 22 10:25:31 crc kubenswrapper[4811]: E0122 10:25:31.946005 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6444dcfa-080b-4910-b77e-bd1c6ce0353a" containerName="registry-server" Jan 22 10:25:31 crc kubenswrapper[4811]: I0122 10:25:31.946011 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="6444dcfa-080b-4910-b77e-bd1c6ce0353a" containerName="registry-server" Jan 22 10:25:31 crc kubenswrapper[4811]: E0122 10:25:31.946036 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6444dcfa-080b-4910-b77e-bd1c6ce0353a" containerName="extract-utilities" Jan 22 10:25:31 crc kubenswrapper[4811]: I0122 10:25:31.946041 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="6444dcfa-080b-4910-b77e-bd1c6ce0353a" containerName="extract-utilities" Jan 22 10:25:31 crc kubenswrapper[4811]: E0122 10:25:31.946054 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6444dcfa-080b-4910-b77e-bd1c6ce0353a" containerName="extract-content" Jan 22 10:25:31 crc kubenswrapper[4811]: I0122 10:25:31.946059 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="6444dcfa-080b-4910-b77e-bd1c6ce0353a" containerName="extract-content" Jan 22 10:25:31 crc kubenswrapper[4811]: E0122 10:25:31.946066 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d9af97d-9592-4cf2-bab0-2667139c49a6" containerName="gather" Jan 22 10:25:31 crc kubenswrapper[4811]: I0122 10:25:31.946071 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d9af97d-9592-4cf2-bab0-2667139c49a6" containerName="gather" Jan 22 10:25:31 crc kubenswrapper[4811]: I0122 10:25:31.946227 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d9af97d-9592-4cf2-bab0-2667139c49a6" containerName="copy" Jan 22 10:25:31 crc kubenswrapper[4811]: I0122 10:25:31.946240 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="6444dcfa-080b-4910-b77e-bd1c6ce0353a" containerName="registry-server" Jan 22 10:25:31 crc kubenswrapper[4811]: I0122 10:25:31.946263 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d9af97d-9592-4cf2-bab0-2667139c49a6" containerName="gather" Jan 22 10:25:31 crc kubenswrapper[4811]: I0122 10:25:31.947418 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zbg9n" Jan 22 10:25:31 crc kubenswrapper[4811]: I0122 10:25:31.953877 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zbg9n"] Jan 22 10:25:32 crc kubenswrapper[4811]: I0122 10:25:32.036018 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eff54c35-cbd5-4a3d-9746-350dfe183455-utilities\") pod \"redhat-operators-zbg9n\" (UID: \"eff54c35-cbd5-4a3d-9746-350dfe183455\") " pod="openshift-marketplace/redhat-operators-zbg9n" Jan 22 10:25:32 crc kubenswrapper[4811]: I0122 10:25:32.036056 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfsxq\" (UniqueName: \"kubernetes.io/projected/eff54c35-cbd5-4a3d-9746-350dfe183455-kube-api-access-dfsxq\") pod \"redhat-operators-zbg9n\" (UID: \"eff54c35-cbd5-4a3d-9746-350dfe183455\") " pod="openshift-marketplace/redhat-operators-zbg9n" Jan 22 10:25:32 crc kubenswrapper[4811]: I0122 10:25:32.036281 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eff54c35-cbd5-4a3d-9746-350dfe183455-catalog-content\") pod \"redhat-operators-zbg9n\" (UID: \"eff54c35-cbd5-4a3d-9746-350dfe183455\") " pod="openshift-marketplace/redhat-operators-zbg9n" Jan 22 10:25:32 crc kubenswrapper[4811]: I0122 10:25:32.138415 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eff54c35-cbd5-4a3d-9746-350dfe183455-utilities\") pod \"redhat-operators-zbg9n\" (UID: \"eff54c35-cbd5-4a3d-9746-350dfe183455\") " pod="openshift-marketplace/redhat-operators-zbg9n" Jan 22 10:25:32 crc kubenswrapper[4811]: I0122 10:25:32.138464 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfsxq\" (UniqueName: \"kubernetes.io/projected/eff54c35-cbd5-4a3d-9746-350dfe183455-kube-api-access-dfsxq\") pod \"redhat-operators-zbg9n\" (UID: \"eff54c35-cbd5-4a3d-9746-350dfe183455\") " pod="openshift-marketplace/redhat-operators-zbg9n" Jan 22 10:25:32 crc kubenswrapper[4811]: I0122 10:25:32.138537 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eff54c35-cbd5-4a3d-9746-350dfe183455-catalog-content\") pod \"redhat-operators-zbg9n\" (UID: \"eff54c35-cbd5-4a3d-9746-350dfe183455\") " pod="openshift-marketplace/redhat-operators-zbg9n" Jan 22 10:25:32 crc kubenswrapper[4811]: I0122 10:25:32.139181 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eff54c35-cbd5-4a3d-9746-350dfe183455-utilities\") pod \"redhat-operators-zbg9n\" (UID: \"eff54c35-cbd5-4a3d-9746-350dfe183455\") " pod="openshift-marketplace/redhat-operators-zbg9n" Jan 22 10:25:32 crc kubenswrapper[4811]: I0122 10:25:32.139221 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eff54c35-cbd5-4a3d-9746-350dfe183455-catalog-content\") pod \"redhat-operators-zbg9n\" (UID: \"eff54c35-cbd5-4a3d-9746-350dfe183455\") " pod="openshift-marketplace/redhat-operators-zbg9n" Jan 22 10:25:32 crc kubenswrapper[4811]: I0122 10:25:32.155222 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-dfsxq\" (UniqueName: \"kubernetes.io/projected/eff54c35-cbd5-4a3d-9746-350dfe183455-kube-api-access-dfsxq\") pod \"redhat-operators-zbg9n\" (UID: \"eff54c35-cbd5-4a3d-9746-350dfe183455\") " pod="openshift-marketplace/redhat-operators-zbg9n" Jan 22 10:25:32 crc kubenswrapper[4811]: I0122 10:25:32.262400 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zbg9n" Jan 22 10:25:32 crc kubenswrapper[4811]: I0122 10:25:32.697285 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zbg9n"] Jan 22 10:25:33 crc kubenswrapper[4811]: I0122 10:25:33.677429 4811 generic.go:334] "Generic (PLEG): container finished" podID="eff54c35-cbd5-4a3d-9746-350dfe183455" containerID="4a7ce7a29c8c68d39d09cc0a1a3e6cf36663b441ddb5cea3646c9d7d37743dc6" exitCode=0 Jan 22 10:25:33 crc kubenswrapper[4811]: I0122 10:25:33.677558 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zbg9n" event={"ID":"eff54c35-cbd5-4a3d-9746-350dfe183455","Type":"ContainerDied","Data":"4a7ce7a29c8c68d39d09cc0a1a3e6cf36663b441ddb5cea3646c9d7d37743dc6"} Jan 22 10:25:33 crc kubenswrapper[4811]: I0122 10:25:33.677844 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zbg9n" event={"ID":"eff54c35-cbd5-4a3d-9746-350dfe183455","Type":"ContainerStarted","Data":"263acf7c7c61dfcbd5fe5146adf69f2412723aebe7dac430cab5e05511050f3e"} Jan 22 10:25:34 crc kubenswrapper[4811]: I0122 10:25:34.687217 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zbg9n" event={"ID":"eff54c35-cbd5-4a3d-9746-350dfe183455","Type":"ContainerStarted","Data":"a894d0835aaa270f51cbf25660f42493b8989dda82533b3b6d6bb364e4e6790b"} Jan 22 10:25:35 crc kubenswrapper[4811]: I0122 10:25:35.331445 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2bg7t"] Jan 22 10:25:35 crc kubenswrapper[4811]: I0122 10:25:35.333141 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2bg7t" Jan 22 10:25:35 crc kubenswrapper[4811]: I0122 10:25:35.339803 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2bg7t"] Jan 22 10:25:35 crc kubenswrapper[4811]: I0122 10:25:35.399555 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b057da50-8f1d-41dd-9338-da6bf69dea2e-utilities\") pod \"certified-operators-2bg7t\" (UID: \"b057da50-8f1d-41dd-9338-da6bf69dea2e\") " pod="openshift-marketplace/certified-operators-2bg7t" Jan 22 10:25:35 crc kubenswrapper[4811]: I0122 10:25:35.399707 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b057da50-8f1d-41dd-9338-da6bf69dea2e-catalog-content\") pod \"certified-operators-2bg7t\" (UID: \"b057da50-8f1d-41dd-9338-da6bf69dea2e\") " pod="openshift-marketplace/certified-operators-2bg7t" Jan 22 10:25:35 crc kubenswrapper[4811]: I0122 10:25:35.399983 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25zjd\" (UniqueName: \"kubernetes.io/projected/b057da50-8f1d-41dd-9338-da6bf69dea2e-kube-api-access-25zjd\") pod \"certified-operators-2bg7t\" (UID: \"b057da50-8f1d-41dd-9338-da6bf69dea2e\") " pod="openshift-marketplace/certified-operators-2bg7t" Jan 22 10:25:35 crc kubenswrapper[4811]: I0122 10:25:35.500995 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:25:35 crc kubenswrapper[4811]: I0122 10:25:35.501048 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:25:35 crc kubenswrapper[4811]: I0122 10:25:35.501091 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" Jan 22 10:25:35 crc kubenswrapper[4811]: I0122 10:25:35.501775 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5eaae3f44e64e292135c7792057530f50afa70f95d86e85ac2d4c2ac5ec52088"} pod="openshift-machine-config-operator/machine-config-daemon-txvcq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 10:25:35 crc kubenswrapper[4811]: I0122 10:25:35.501835 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" containerID="cri-o://5eaae3f44e64e292135c7792057530f50afa70f95d86e85ac2d4c2ac5ec52088" gracePeriod=600 Jan 22 10:25:35 crc kubenswrapper[4811]: I0122 10:25:35.502419 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25zjd\" (UniqueName: \"kubernetes.io/projected/b057da50-8f1d-41dd-9338-da6bf69dea2e-kube-api-access-25zjd\") 
pod \"certified-operators-2bg7t\" (UID: \"b057da50-8f1d-41dd-9338-da6bf69dea2e\") " pod="openshift-marketplace/certified-operators-2bg7t" Jan 22 10:25:35 crc kubenswrapper[4811]: I0122 10:25:35.502574 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b057da50-8f1d-41dd-9338-da6bf69dea2e-utilities\") pod \"certified-operators-2bg7t\" (UID: \"b057da50-8f1d-41dd-9338-da6bf69dea2e\") " pod="openshift-marketplace/certified-operators-2bg7t" Jan 22 10:25:35 crc kubenswrapper[4811]: I0122 10:25:35.502687 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b057da50-8f1d-41dd-9338-da6bf69dea2e-catalog-content\") pod \"certified-operators-2bg7t\" (UID: \"b057da50-8f1d-41dd-9338-da6bf69dea2e\") " pod="openshift-marketplace/certified-operators-2bg7t" Jan 22 10:25:35 crc kubenswrapper[4811]: I0122 10:25:35.503142 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b057da50-8f1d-41dd-9338-da6bf69dea2e-catalog-content\") pod \"certified-operators-2bg7t\" (UID: \"b057da50-8f1d-41dd-9338-da6bf69dea2e\") " pod="openshift-marketplace/certified-operators-2bg7t" Jan 22 10:25:35 crc kubenswrapper[4811]: I0122 10:25:35.503251 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b057da50-8f1d-41dd-9338-da6bf69dea2e-utilities\") pod \"certified-operators-2bg7t\" (UID: \"b057da50-8f1d-41dd-9338-da6bf69dea2e\") " pod="openshift-marketplace/certified-operators-2bg7t" Jan 22 10:25:35 crc kubenswrapper[4811]: I0122 10:25:35.522463 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25zjd\" (UniqueName: \"kubernetes.io/projected/b057da50-8f1d-41dd-9338-da6bf69dea2e-kube-api-access-25zjd\") pod \"certified-operators-2bg7t\" (UID: \"b057da50-8f1d-41dd-9338-da6bf69dea2e\") " pod="openshift-marketplace/certified-operators-2bg7t" Jan 22 10:25:35 crc kubenswrapper[4811]: I0122 10:25:35.648336 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2bg7t" Jan 22 10:25:35 crc kubenswrapper[4811]: I0122 10:25:35.704695 4811 generic.go:334] "Generic (PLEG): container finished" podID="84068a6b-e189-419b-87f5-f31428f6eafe" containerID="5eaae3f44e64e292135c7792057530f50afa70f95d86e85ac2d4c2ac5ec52088" exitCode=0 Jan 22 10:25:35 crc kubenswrapper[4811]: I0122 10:25:35.704958 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" event={"ID":"84068a6b-e189-419b-87f5-f31428f6eafe","Type":"ContainerDied","Data":"5eaae3f44e64e292135c7792057530f50afa70f95d86e85ac2d4c2ac5ec52088"} Jan 22 10:25:35 crc kubenswrapper[4811]: I0122 10:25:35.705017 4811 scope.go:117] "RemoveContainer" containerID="538ba6db70fd8847ece098eb9e97c7fe7aec7e85ea8714fabcb34e1d372c3323" Jan 22 10:25:36 crc kubenswrapper[4811]: I0122 10:25:36.182475 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2bg7t"] Jan 22 10:25:36 crc kubenswrapper[4811]: I0122 10:25:36.712656 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2bg7t" event={"ID":"b057da50-8f1d-41dd-9338-da6bf69dea2e","Type":"ContainerStarted","Data":"d083c0c2f54121ccd276872c1a03632c6b0a428101fa042455693dd4a73cc509"} Jan 22 10:25:36 crc kubenswrapper[4811]: I0122 10:25:36.714657 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" event={"ID":"84068a6b-e189-419b-87f5-f31428f6eafe","Type":"ContainerStarted","Data":"45aea1cc2ea40cce27189c15b2ef036ae75e0da0eedfd9f533b136aafab9e82e"} Jan 22 10:25:37 crc kubenswrapper[4811]: I0122 10:25:37.722086 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2bg7t" event={"ID":"b057da50-8f1d-41dd-9338-da6bf69dea2e","Type":"ContainerStarted","Data":"2b66272b853bbfddc9de310af0fb30f49c826c42369f45a21bf422fda10e2142"} Jan 22 10:25:38 crc kubenswrapper[4811]: I0122 10:25:38.730501 4811 generic.go:334] "Generic (PLEG): container finished" podID="eff54c35-cbd5-4a3d-9746-350dfe183455" containerID="a894d0835aaa270f51cbf25660f42493b8989dda82533b3b6d6bb364e4e6790b" exitCode=0 Jan 22 10:25:38 crc kubenswrapper[4811]: I0122 10:25:38.730550 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zbg9n" event={"ID":"eff54c35-cbd5-4a3d-9746-350dfe183455","Type":"ContainerDied","Data":"a894d0835aaa270f51cbf25660f42493b8989dda82533b3b6d6bb364e4e6790b"} Jan 22 10:25:38 crc kubenswrapper[4811]: I0122 10:25:38.732664 4811 generic.go:334] "Generic (PLEG): container finished" podID="b057da50-8f1d-41dd-9338-da6bf69dea2e" containerID="2b66272b853bbfddc9de310af0fb30f49c826c42369f45a21bf422fda10e2142" exitCode=0 Jan 22 10:25:38 crc kubenswrapper[4811]: I0122 10:25:38.732703 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2bg7t" event={"ID":"b057da50-8f1d-41dd-9338-da6bf69dea2e","Type":"ContainerDied","Data":"2b66272b853bbfddc9de310af0fb30f49c826c42369f45a21bf422fda10e2142"} Jan 22 10:25:39 crc kubenswrapper[4811]: I0122 10:25:39.741731 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2bg7t" event={"ID":"b057da50-8f1d-41dd-9338-da6bf69dea2e","Type":"ContainerStarted","Data":"ee74f1fc80150afa31066d4f2e11d4d2a35c9774c63bc4dd22ebde17585c6521"} Jan 22 10:25:39 crc kubenswrapper[4811]: I0122 10:25:39.744950 4811 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zbg9n" event={"ID":"eff54c35-cbd5-4a3d-9746-350dfe183455","Type":"ContainerStarted","Data":"79bcfcc663a7e53b50e4f3261511ea4266aa3943e5b14be8f1f16c656df3d111"} Jan 22 10:25:40 crc kubenswrapper[4811]: I0122 10:25:40.753514 4811 generic.go:334] "Generic (PLEG): container finished" podID="b057da50-8f1d-41dd-9338-da6bf69dea2e" containerID="ee74f1fc80150afa31066d4f2e11d4d2a35c9774c63bc4dd22ebde17585c6521" exitCode=0 Jan 22 10:25:40 crc kubenswrapper[4811]: I0122 10:25:40.753668 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2bg7t" event={"ID":"b057da50-8f1d-41dd-9338-da6bf69dea2e","Type":"ContainerDied","Data":"ee74f1fc80150afa31066d4f2e11d4d2a35c9774c63bc4dd22ebde17585c6521"} Jan 22 10:25:40 crc kubenswrapper[4811]: I0122 10:25:40.774733 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zbg9n" podStartSLOduration=4.230501545 podStartE2EDuration="9.7747156s" podCreationTimestamp="2026-01-22 10:25:31 +0000 UTC" firstStartedPulling="2026-01-22 10:25:33.679466143 +0000 UTC m=+4778.001653266" lastFinishedPulling="2026-01-22 10:25:39.223680198 +0000 UTC m=+4783.545867321" observedRunningTime="2026-01-22 10:25:39.772756681 +0000 UTC m=+4784.094943805" watchObservedRunningTime="2026-01-22 10:25:40.7747156 +0000 UTC m=+4785.096902722" Jan 22 10:25:41 crc kubenswrapper[4811]: I0122 10:25:41.762159 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2bg7t" event={"ID":"b057da50-8f1d-41dd-9338-da6bf69dea2e","Type":"ContainerStarted","Data":"23e6724ef1418b108357e6b3ee3848aa69a91b397a22fbfa2bbb84b09bccd8c3"} Jan 22 10:25:42 crc kubenswrapper[4811]: I0122 10:25:42.263509 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zbg9n" Jan 22 10:25:42 crc kubenswrapper[4811]: I0122 10:25:42.263864 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zbg9n" Jan 22 10:25:43 crc kubenswrapper[4811]: I0122 10:25:43.309901 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zbg9n" podUID="eff54c35-cbd5-4a3d-9746-350dfe183455" containerName="registry-server" probeResult="failure" output=< Jan 22 10:25:43 crc kubenswrapper[4811]: timeout: failed to connect service ":50051" within 1s Jan 22 10:25:43 crc kubenswrapper[4811]: > Jan 22 10:25:45 crc kubenswrapper[4811]: I0122 10:25:45.649473 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2bg7t" Jan 22 10:25:45 crc kubenswrapper[4811]: I0122 10:25:45.649811 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2bg7t" Jan 22 10:25:45 crc kubenswrapper[4811]: I0122 10:25:45.683135 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2bg7t" Jan 22 10:25:45 crc kubenswrapper[4811]: I0122 10:25:45.702536 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2bg7t" podStartSLOduration=8.219041273 podStartE2EDuration="10.702520634s" podCreationTimestamp="2026-01-22 10:25:35 +0000 UTC" firstStartedPulling="2026-01-22 10:25:38.733707139 +0000 UTC m=+4783.055894262" 
lastFinishedPulling="2026-01-22 10:25:41.217186499 +0000 UTC m=+4785.539373623" observedRunningTime="2026-01-22 10:25:41.779801594 +0000 UTC m=+4786.101988727" watchObservedRunningTime="2026-01-22 10:25:45.702520634 +0000 UTC m=+4790.024707758" Jan 22 10:25:52 crc kubenswrapper[4811]: I0122 10:25:52.294265 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zbg9n" Jan 22 10:25:52 crc kubenswrapper[4811]: I0122 10:25:52.328794 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zbg9n" Jan 22 10:25:52 crc kubenswrapper[4811]: I0122 10:25:52.520398 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zbg9n"] Jan 22 10:25:53 crc kubenswrapper[4811]: I0122 10:25:53.831408 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zbg9n" podUID="eff54c35-cbd5-4a3d-9746-350dfe183455" containerName="registry-server" containerID="cri-o://79bcfcc663a7e53b50e4f3261511ea4266aa3943e5b14be8f1f16c656df3d111" gracePeriod=2 Jan 22 10:25:54 crc kubenswrapper[4811]: I0122 10:25:54.311394 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zbg9n" Jan 22 10:25:54 crc kubenswrapper[4811]: I0122 10:25:54.340808 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfsxq\" (UniqueName: \"kubernetes.io/projected/eff54c35-cbd5-4a3d-9746-350dfe183455-kube-api-access-dfsxq\") pod \"eff54c35-cbd5-4a3d-9746-350dfe183455\" (UID: \"eff54c35-cbd5-4a3d-9746-350dfe183455\") " Jan 22 10:25:54 crc kubenswrapper[4811]: I0122 10:25:54.340868 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eff54c35-cbd5-4a3d-9746-350dfe183455-catalog-content\") pod \"eff54c35-cbd5-4a3d-9746-350dfe183455\" (UID: \"eff54c35-cbd5-4a3d-9746-350dfe183455\") " Jan 22 10:25:54 crc kubenswrapper[4811]: I0122 10:25:54.340887 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eff54c35-cbd5-4a3d-9746-350dfe183455-utilities\") pod \"eff54c35-cbd5-4a3d-9746-350dfe183455\" (UID: \"eff54c35-cbd5-4a3d-9746-350dfe183455\") " Jan 22 10:25:54 crc kubenswrapper[4811]: I0122 10:25:54.341463 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eff54c35-cbd5-4a3d-9746-350dfe183455-utilities" (OuterVolumeSpecName: "utilities") pod "eff54c35-cbd5-4a3d-9746-350dfe183455" (UID: "eff54c35-cbd5-4a3d-9746-350dfe183455"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:25:54 crc kubenswrapper[4811]: I0122 10:25:54.345143 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eff54c35-cbd5-4a3d-9746-350dfe183455-kube-api-access-dfsxq" (OuterVolumeSpecName: "kube-api-access-dfsxq") pod "eff54c35-cbd5-4a3d-9746-350dfe183455" (UID: "eff54c35-cbd5-4a3d-9746-350dfe183455"). InnerVolumeSpecName "kube-api-access-dfsxq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:54 crc kubenswrapper[4811]: I0122 10:25:54.428317 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eff54c35-cbd5-4a3d-9746-350dfe183455-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eff54c35-cbd5-4a3d-9746-350dfe183455" (UID: "eff54c35-cbd5-4a3d-9746-350dfe183455"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:25:54 crc kubenswrapper[4811]: I0122 10:25:54.443117 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfsxq\" (UniqueName: \"kubernetes.io/projected/eff54c35-cbd5-4a3d-9746-350dfe183455-kube-api-access-dfsxq\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:54 crc kubenswrapper[4811]: I0122 10:25:54.443144 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eff54c35-cbd5-4a3d-9746-350dfe183455-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:54 crc kubenswrapper[4811]: I0122 10:25:54.443153 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eff54c35-cbd5-4a3d-9746-350dfe183455-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:54 crc kubenswrapper[4811]: I0122 10:25:54.840907 4811 generic.go:334] "Generic (PLEG): container finished" podID="eff54c35-cbd5-4a3d-9746-350dfe183455" containerID="79bcfcc663a7e53b50e4f3261511ea4266aa3943e5b14be8f1f16c656df3d111" exitCode=0 Jan 22 10:25:54 crc kubenswrapper[4811]: I0122 10:25:54.841138 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zbg9n" Jan 22 10:25:54 crc kubenswrapper[4811]: I0122 10:25:54.841619 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zbg9n" event={"ID":"eff54c35-cbd5-4a3d-9746-350dfe183455","Type":"ContainerDied","Data":"79bcfcc663a7e53b50e4f3261511ea4266aa3943e5b14be8f1f16c656df3d111"} Jan 22 10:25:54 crc kubenswrapper[4811]: I0122 10:25:54.841925 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zbg9n" event={"ID":"eff54c35-cbd5-4a3d-9746-350dfe183455","Type":"ContainerDied","Data":"263acf7c7c61dfcbd5fe5146adf69f2412723aebe7dac430cab5e05511050f3e"} Jan 22 10:25:54 crc kubenswrapper[4811]: I0122 10:25:54.841946 4811 scope.go:117] "RemoveContainer" containerID="79bcfcc663a7e53b50e4f3261511ea4266aa3943e5b14be8f1f16c656df3d111" Jan 22 10:25:54 crc kubenswrapper[4811]: I0122 10:25:54.866277 4811 scope.go:117] "RemoveContainer" containerID="a894d0835aaa270f51cbf25660f42493b8989dda82533b3b6d6bb364e4e6790b" Jan 22 10:25:54 crc kubenswrapper[4811]: I0122 10:25:54.869609 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zbg9n"] Jan 22 10:25:54 crc kubenswrapper[4811]: I0122 10:25:54.876085 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zbg9n"] Jan 22 10:25:54 crc kubenswrapper[4811]: I0122 10:25:54.882306 4811 scope.go:117] "RemoveContainer" containerID="4a7ce7a29c8c68d39d09cc0a1a3e6cf36663b441ddb5cea3646c9d7d37743dc6" Jan 22 10:25:54 crc kubenswrapper[4811]: I0122 10:25:54.911311 4811 scope.go:117] "RemoveContainer" containerID="79bcfcc663a7e53b50e4f3261511ea4266aa3943e5b14be8f1f16c656df3d111" Jan 22 10:25:54 crc kubenswrapper[4811]: E0122 10:25:54.911613 4811 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79bcfcc663a7e53b50e4f3261511ea4266aa3943e5b14be8f1f16c656df3d111\": container with ID starting with 79bcfcc663a7e53b50e4f3261511ea4266aa3943e5b14be8f1f16c656df3d111 not found: ID does not exist" containerID="79bcfcc663a7e53b50e4f3261511ea4266aa3943e5b14be8f1f16c656df3d111" Jan 22 10:25:54 crc kubenswrapper[4811]: I0122 10:25:54.911662 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79bcfcc663a7e53b50e4f3261511ea4266aa3943e5b14be8f1f16c656df3d111"} err="failed to get container status \"79bcfcc663a7e53b50e4f3261511ea4266aa3943e5b14be8f1f16c656df3d111\": rpc error: code = NotFound desc = could not find container \"79bcfcc663a7e53b50e4f3261511ea4266aa3943e5b14be8f1f16c656df3d111\": container with ID starting with 79bcfcc663a7e53b50e4f3261511ea4266aa3943e5b14be8f1f16c656df3d111 not found: ID does not exist" Jan 22 10:25:54 crc kubenswrapper[4811]: I0122 10:25:54.911681 4811 scope.go:117] "RemoveContainer" containerID="a894d0835aaa270f51cbf25660f42493b8989dda82533b3b6d6bb364e4e6790b" Jan 22 10:25:54 crc kubenswrapper[4811]: E0122 10:25:54.911907 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a894d0835aaa270f51cbf25660f42493b8989dda82533b3b6d6bb364e4e6790b\": container with ID starting with a894d0835aaa270f51cbf25660f42493b8989dda82533b3b6d6bb364e4e6790b not found: ID does not exist" containerID="a894d0835aaa270f51cbf25660f42493b8989dda82533b3b6d6bb364e4e6790b" Jan 22 10:25:54 crc kubenswrapper[4811]: I0122 10:25:54.911928 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a894d0835aaa270f51cbf25660f42493b8989dda82533b3b6d6bb364e4e6790b"} err="failed to get container status \"a894d0835aaa270f51cbf25660f42493b8989dda82533b3b6d6bb364e4e6790b\": rpc error: code = NotFound desc = could not find container \"a894d0835aaa270f51cbf25660f42493b8989dda82533b3b6d6bb364e4e6790b\": container with ID starting with a894d0835aaa270f51cbf25660f42493b8989dda82533b3b6d6bb364e4e6790b not found: ID does not exist" Jan 22 10:25:54 crc kubenswrapper[4811]: I0122 10:25:54.911943 4811 scope.go:117] "RemoveContainer" containerID="4a7ce7a29c8c68d39d09cc0a1a3e6cf36663b441ddb5cea3646c9d7d37743dc6" Jan 22 10:25:54 crc kubenswrapper[4811]: E0122 10:25:54.912205 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a7ce7a29c8c68d39d09cc0a1a3e6cf36663b441ddb5cea3646c9d7d37743dc6\": container with ID starting with 4a7ce7a29c8c68d39d09cc0a1a3e6cf36663b441ddb5cea3646c9d7d37743dc6 not found: ID does not exist" containerID="4a7ce7a29c8c68d39d09cc0a1a3e6cf36663b441ddb5cea3646c9d7d37743dc6" Jan 22 10:25:54 crc kubenswrapper[4811]: I0122 10:25:54.912225 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a7ce7a29c8c68d39d09cc0a1a3e6cf36663b441ddb5cea3646c9d7d37743dc6"} err="failed to get container status \"4a7ce7a29c8c68d39d09cc0a1a3e6cf36663b441ddb5cea3646c9d7d37743dc6\": rpc error: code = NotFound desc = could not find container \"4a7ce7a29c8c68d39d09cc0a1a3e6cf36663b441ddb5cea3646c9d7d37743dc6\": container with ID starting with 4a7ce7a29c8c68d39d09cc0a1a3e6cf36663b441ddb5cea3646c9d7d37743dc6 not found: ID does not exist" Jan 22 10:25:55 crc kubenswrapper[4811]: I0122 10:25:55.682098 4811 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2bg7t" Jan 22 10:25:56 crc kubenswrapper[4811]: I0122 10:25:56.000893 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eff54c35-cbd5-4a3d-9746-350dfe183455" path="/var/lib/kubelet/pods/eff54c35-cbd5-4a3d-9746-350dfe183455/volumes" Jan 22 10:25:57 crc kubenswrapper[4811]: I0122 10:25:57.921915 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2bg7t"] Jan 22 10:25:57 crc kubenswrapper[4811]: I0122 10:25:57.922258 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2bg7t" podUID="b057da50-8f1d-41dd-9338-da6bf69dea2e" containerName="registry-server" containerID="cri-o://23e6724ef1418b108357e6b3ee3848aa69a91b397a22fbfa2bbb84b09bccd8c3" gracePeriod=2 Jan 22 10:25:58 crc kubenswrapper[4811]: I0122 10:25:58.604333 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2bg7t" Jan 22 10:25:58 crc kubenswrapper[4811]: I0122 10:25:58.718900 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b057da50-8f1d-41dd-9338-da6bf69dea2e-catalog-content\") pod \"b057da50-8f1d-41dd-9338-da6bf69dea2e\" (UID: \"b057da50-8f1d-41dd-9338-da6bf69dea2e\") " Jan 22 10:25:58 crc kubenswrapper[4811]: I0122 10:25:58.718956 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b057da50-8f1d-41dd-9338-da6bf69dea2e-utilities\") pod \"b057da50-8f1d-41dd-9338-da6bf69dea2e\" (UID: \"b057da50-8f1d-41dd-9338-da6bf69dea2e\") " Jan 22 10:25:58 crc kubenswrapper[4811]: I0122 10:25:58.719157 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25zjd\" (UniqueName: \"kubernetes.io/projected/b057da50-8f1d-41dd-9338-da6bf69dea2e-kube-api-access-25zjd\") pod \"b057da50-8f1d-41dd-9338-da6bf69dea2e\" (UID: \"b057da50-8f1d-41dd-9338-da6bf69dea2e\") " Jan 22 10:25:58 crc kubenswrapper[4811]: I0122 10:25:58.720438 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b057da50-8f1d-41dd-9338-da6bf69dea2e-utilities" (OuterVolumeSpecName: "utilities") pod "b057da50-8f1d-41dd-9338-da6bf69dea2e" (UID: "b057da50-8f1d-41dd-9338-da6bf69dea2e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:25:58 crc kubenswrapper[4811]: I0122 10:25:58.723562 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b057da50-8f1d-41dd-9338-da6bf69dea2e-kube-api-access-25zjd" (OuterVolumeSpecName: "kube-api-access-25zjd") pod "b057da50-8f1d-41dd-9338-da6bf69dea2e" (UID: "b057da50-8f1d-41dd-9338-da6bf69dea2e"). InnerVolumeSpecName "kube-api-access-25zjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:58 crc kubenswrapper[4811]: I0122 10:25:58.754382 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b057da50-8f1d-41dd-9338-da6bf69dea2e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b057da50-8f1d-41dd-9338-da6bf69dea2e" (UID: "b057da50-8f1d-41dd-9338-da6bf69dea2e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:25:58 crc kubenswrapper[4811]: I0122 10:25:58.821314 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25zjd\" (UniqueName: \"kubernetes.io/projected/b057da50-8f1d-41dd-9338-da6bf69dea2e-kube-api-access-25zjd\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:58 crc kubenswrapper[4811]: I0122 10:25:58.821451 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b057da50-8f1d-41dd-9338-da6bf69dea2e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:58 crc kubenswrapper[4811]: I0122 10:25:58.821507 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b057da50-8f1d-41dd-9338-da6bf69dea2e-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:58 crc kubenswrapper[4811]: I0122 10:25:58.866323 4811 generic.go:334] "Generic (PLEG): container finished" podID="b057da50-8f1d-41dd-9338-da6bf69dea2e" containerID="23e6724ef1418b108357e6b3ee3848aa69a91b397a22fbfa2bbb84b09bccd8c3" exitCode=0 Jan 22 10:25:58 crc kubenswrapper[4811]: I0122 10:25:58.866448 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2bg7t" event={"ID":"b057da50-8f1d-41dd-9338-da6bf69dea2e","Type":"ContainerDied","Data":"23e6724ef1418b108357e6b3ee3848aa69a91b397a22fbfa2bbb84b09bccd8c3"} Jan 22 10:25:58 crc kubenswrapper[4811]: I0122 10:25:58.866567 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2bg7t" event={"ID":"b057da50-8f1d-41dd-9338-da6bf69dea2e","Type":"ContainerDied","Data":"d083c0c2f54121ccd276872c1a03632c6b0a428101fa042455693dd4a73cc509"} Jan 22 10:25:58 crc kubenswrapper[4811]: I0122 10:25:58.866671 4811 scope.go:117] "RemoveContainer" containerID="23e6724ef1418b108357e6b3ee3848aa69a91b397a22fbfa2bbb84b09bccd8c3" Jan 22 10:25:58 crc kubenswrapper[4811]: I0122 10:25:58.866850 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2bg7t" Jan 22 10:25:58 crc kubenswrapper[4811]: I0122 10:25:58.894808 4811 scope.go:117] "RemoveContainer" containerID="ee74f1fc80150afa31066d4f2e11d4d2a35c9774c63bc4dd22ebde17585c6521" Jan 22 10:25:58 crc kubenswrapper[4811]: I0122 10:25:58.897700 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2bg7t"] Jan 22 10:25:58 crc kubenswrapper[4811]: I0122 10:25:58.904223 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2bg7t"] Jan 22 10:25:58 crc kubenswrapper[4811]: I0122 10:25:58.911327 4811 scope.go:117] "RemoveContainer" containerID="2b66272b853bbfddc9de310af0fb30f49c826c42369f45a21bf422fda10e2142" Jan 22 10:25:58 crc kubenswrapper[4811]: I0122 10:25:58.925278 4811 scope.go:117] "RemoveContainer" containerID="23e6724ef1418b108357e6b3ee3848aa69a91b397a22fbfa2bbb84b09bccd8c3" Jan 22 10:25:58 crc kubenswrapper[4811]: E0122 10:25:58.925771 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23e6724ef1418b108357e6b3ee3848aa69a91b397a22fbfa2bbb84b09bccd8c3\": container with ID starting with 23e6724ef1418b108357e6b3ee3848aa69a91b397a22fbfa2bbb84b09bccd8c3 not found: ID does not exist" containerID="23e6724ef1418b108357e6b3ee3848aa69a91b397a22fbfa2bbb84b09bccd8c3" Jan 22 10:25:58 crc kubenswrapper[4811]: I0122 10:25:58.925796 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23e6724ef1418b108357e6b3ee3848aa69a91b397a22fbfa2bbb84b09bccd8c3"} err="failed to get container status \"23e6724ef1418b108357e6b3ee3848aa69a91b397a22fbfa2bbb84b09bccd8c3\": rpc error: code = NotFound desc = could not find container \"23e6724ef1418b108357e6b3ee3848aa69a91b397a22fbfa2bbb84b09bccd8c3\": container with ID starting with 23e6724ef1418b108357e6b3ee3848aa69a91b397a22fbfa2bbb84b09bccd8c3 not found: ID does not exist" Jan 22 10:25:58 crc kubenswrapper[4811]: I0122 10:25:58.925817 4811 scope.go:117] "RemoveContainer" containerID="ee74f1fc80150afa31066d4f2e11d4d2a35c9774c63bc4dd22ebde17585c6521" Jan 22 10:25:58 crc kubenswrapper[4811]: E0122 10:25:58.926022 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee74f1fc80150afa31066d4f2e11d4d2a35c9774c63bc4dd22ebde17585c6521\": container with ID starting with ee74f1fc80150afa31066d4f2e11d4d2a35c9774c63bc4dd22ebde17585c6521 not found: ID does not exist" containerID="ee74f1fc80150afa31066d4f2e11d4d2a35c9774c63bc4dd22ebde17585c6521" Jan 22 10:25:58 crc kubenswrapper[4811]: I0122 10:25:58.926040 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee74f1fc80150afa31066d4f2e11d4d2a35c9774c63bc4dd22ebde17585c6521"} err="failed to get container status \"ee74f1fc80150afa31066d4f2e11d4d2a35c9774c63bc4dd22ebde17585c6521\": rpc error: code = NotFound desc = could not find container \"ee74f1fc80150afa31066d4f2e11d4d2a35c9774c63bc4dd22ebde17585c6521\": container with ID starting with ee74f1fc80150afa31066d4f2e11d4d2a35c9774c63bc4dd22ebde17585c6521 not found: ID does not exist" Jan 22 10:25:58 crc kubenswrapper[4811]: I0122 10:25:58.926055 4811 scope.go:117] "RemoveContainer" containerID="2b66272b853bbfddc9de310af0fb30f49c826c42369f45a21bf422fda10e2142" Jan 22 10:25:58 crc kubenswrapper[4811]: E0122 10:25:58.926243 4811 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2b66272b853bbfddc9de310af0fb30f49c826c42369f45a21bf422fda10e2142\": container with ID starting with 2b66272b853bbfddc9de310af0fb30f49c826c42369f45a21bf422fda10e2142 not found: ID does not exist" containerID="2b66272b853bbfddc9de310af0fb30f49c826c42369f45a21bf422fda10e2142" Jan 22 10:25:58 crc kubenswrapper[4811]: I0122 10:25:58.926258 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b66272b853bbfddc9de310af0fb30f49c826c42369f45a21bf422fda10e2142"} err="failed to get container status \"2b66272b853bbfddc9de310af0fb30f49c826c42369f45a21bf422fda10e2142\": rpc error: code = NotFound desc = could not find container \"2b66272b853bbfddc9de310af0fb30f49c826c42369f45a21bf422fda10e2142\": container with ID starting with 2b66272b853bbfddc9de310af0fb30f49c826c42369f45a21bf422fda10e2142 not found: ID does not exist" Jan 22 10:26:00 crc kubenswrapper[4811]: I0122 10:26:00.007074 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b057da50-8f1d-41dd-9338-da6bf69dea2e" path="/var/lib/kubelet/pods/b057da50-8f1d-41dd-9338-da6bf69dea2e/volumes" Jan 22 10:27:35 crc kubenswrapper[4811]: I0122 10:27:35.501206 4811 patch_prober.go:28] interesting pod/machine-config-daemon-txvcq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:27:35 crc kubenswrapper[4811]: I0122 10:27:35.501519 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-txvcq" podUID="84068a6b-e189-419b-87f5-f31428f6eafe" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"